title | content | commands | url |
---|---|---|---|
Chapter 4. Feature enhancements | Chapter 4. Feature enhancements Cryostat 2.0 includes feature enhancements that build upon the Cryostat 1 offerings. Cryostat web console GUI You can access the Cryostat information page through the Help icon in the upper-right corner of the Cryostat web console. On this console page, you can view your version of Cryostat. Additionally, an archived record's generated name now includes a target alias. The target alias improves record retrieval by linking an alias to the record's generated name. Cryostat Operator topology view The Cryostat Operator now applies the correct app.openshift.io/connects-to annotation to a Cryostat deployment. This configuration links the Cryostat deployment with the Cryostat Operator deployment in the topology view on the OpenShift Container Platform web console. Cryostat Operator controller manager If the Cryostat Operator failed to deploy Cryostat because you did not install cert-manager , the Cryostat Operator no longer hangs when it deletes Cryostat custom resources. Cryostat cluster ConsoleLink namespace In Cryostat 2.0, a Cryostat Operator cluster might create a duplicate ConsoleLink namespace for a Cryostat cluster. To avoid this issue, the Cryostat Operator now replaces the GenerateName object with an SHA-256 hash value defined in the Name object of the ConsoleLink resource definition. Custom event templates You can configure any Cryostat JFR recordings by using custom event templates. You can use custom event templates in the following ways: Instruct the Cryostat Operator to pre-configure Cryostat custom event templates by providing them with template files from stored ConfigMap objects. Upload custom event templates by using Cryostat web applications. Encode SSL/TLS certificates with supported formats You can add an SSL/TLS certificate on the Cryostat web console for your target JVM application. Valid SSL/TLS certificates are in DER-encoded base-64 or binary formats. Either format supports the following extensions: .der .cer .pem You can now specify a generated certificate at runtime, so that your target JVM application can use SSL/TLS for JMX connections. Cryostat attempts to open a JMX connection to a target JVM that uses an SSL/TLS certificate. For a successful JMX connection, Cryostat must pass all its authentication checks on the target JVM certificate that you provided at runtime. You can use the POST handler to accept, validate, and write the certificate. Fabric8 Kubernetes and OpenShift clients Cryostat 2.0 supports version 4.12.0 of the Fabric8 Kubernetes and OpenShift clients. This version enhances application compatibility with Cryostat and reduces downstream build errors. Grafana container version Cryostat 2.0 replaces version 6.4.4 of the Grafana container with version 7.3.6. Grafana error messages The 502 error message now relates to a failed JMX connection, while the 512 error message relates to invalid responses sent to the JFR container. Health check resource definition The Cryostat Operator replaces the api/v1/clienturl resource definition with the health resource definition. The Cryostat Operator now uses the health endpoint when performing containerized JVM health checks. initialization-resource annotation The Cryostat Operator now includes an initialization-resource annotation in its CSV file's configuration. 
This annotation enhances the Cryostat Operator instance running on the OpenShift Container Platform web console by providing you with graphical hints to create a Cryostat custom resource for your Cryostat cluster. OLM bundle descriptors For Cryostat 2.0, an OLM bundle no longer requires setting an integer value in its eventOptions descriptor for checking a JFR recording's duration. Instead, the OLM bundle now requires duration-formatted values that are defined in its EventOptions string. Supported duration units include s , m , and h . For example, 2h15m denotes a JFR recording length of 2 hours and 15 minutes. Security context constraint (SCC) The Cryostat Operator now defaults to using the restricted SCC setting. A pod contained in a Cryostat cluster can now use any permitted fsGroup value listed under the restricted SCC setting. This means that Cryostat pods mounted to persistent storage (PVs) can now have read/write access to their directories. A pod's read/write access level depends on the pod's fsGroup GID value, which the Cryostat Operator configures to adhere to the built-in restricted SecurityContextConstraint . A SecurityContext element contains pod-level security attributes. Before Cryostat 2.0, the Cryostat Operator was set to the default setting, which caused fsGroup access issues for a pod running in a Cryostat cluster. For more information about the permitted range of fsGroup values for your Cryostat cluster, see About pre-allocated security context constraints values in the OpenShift documentation website. ServiceRef definition A ServiceRef definition includes the following new properties that Cryostat includes in all ServiceRef objects returned from GET communications with HTTP API handlers: Annotations, such as the Java application name, labels, or port number. User-specific ServiceRef values, such as an alias or connectURL . You can use the following two handlers to create or delete a ServiceRef definition: POST , which creates a new ServiceRef if no existing targets with an identical service URL exist. DELETE , which removes any ServiceRef definitions from the CustomTargetPlatformClient if it matches the value specified in the targetID path parameter. Subprocess management Before Cryostat generates an automated analysis report, Cryostat creates a child subprocess. A child subprocess protects the parent process by accumulating any large memory loads consumed by the report generation. For example, on a Linux operating system the out-of-memory (OOM) Killer detects a process that tries to request additional memory that is not available from the system and stops the process. If a child subprocess exists, the OOM Killer stops this subprocess but does not interfere with the running parent process. You can set a minimum or maximum JVM heap size for a child subprocess by using the Cryostat environment variable: CRYOSTAT_REPORT_GENERATION_MAX_HEAP . Be aware that a low minimum value might stop a child subprocess before it generates a report, while a high maximum value might cause additional memory constraints on the parent process. Subprocess report generation Cryostat 2.0 provides an enhancement to using either ActiveRecordingReportCache or ArchivedRecordingReportCache to generate a report in a subprocess for your Java application. These classes are simplified as follows: ActiveRecordingReportCache now automatically includes the path to disk recordings. 
ArchivedRecordingReportCache calls a utility method that uses the connection manager of a parent process to copy a recording to a local disk file. The class passes the local path location to the subprocess. A subprocess no longer needs to complete the following tasks: Establish a JMX connection. Handle an SSL/TLS certificate. Handle JMX authorization credentials. The removal of these tasks improves a subprocess's workload in the following ways: Quicker end-to-end report generation. Reduction in code complexity so that the codebase is easier to maintain. Proper and secure handling of SSL/TLS authentication and JMX authentication mechanisms. truststore environment variable Cryostat 2.0 replaces the TRUSTSTORE environment variable with SSL_TRUSTSTORE_DIR . TargetConnectionManager handler The TargetConnectionManager now supports concurrent connections by using a JMX connection timed cache value for any connections between a targeted request and Cryostat. By default, the timed cache is 90 seconds. Vertx server response management The Vertx server can now use a TimeoutHandler implementation to automatically end a delayed response message sent by a Cryostat request handler. This prevents a client from having to wait indefinitely for a response from a Cryostat request handler. The TimeoutHandler implementation throws a 500 error message when it detects a delayed response from a Cryostat request handler. WebSockets The WebSocket includes the following updates: WebSocket connection upgraded from a two-way interactive channel to a one-way push Notification Channel (NC). WebSocket Notification Channel changed from api/v1/command to api/v1/notifications , because the WebSocket now uses this channel for one-way push notifications. WebSocket can send events on the Notification Channel when you use recordings and event templates for analyzing your JFR data. When your WebSocket client connects to a one-way push NC, the client automatically receives information about actions performed by other connected clients on the same channel. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/2/html/release_notes_for_the_red_hat_build_of_cryostat_2.0/cryostat-feature-enhancements._cryostat |
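The Cryostat section above describes pre-configuring custom event templates from ConfigMap objects but does not show a manifest. The following sketch illustrates one way this could be wired together; the Cryostat CR API version and field names ( eventTemplates , configMapName , filename ) and the .jfc content are assumptions based on the description, not taken from the release note, so verify them against the Cryostat Operator documentation.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-event-templates
data:
  # A Java Flight Recorder event template (.jfc) that Cryostat can expose
  # alongside its built-in templates.
  custom.jfc: |
    <?xml version="1.0" encoding="UTF-8"?>
    <configuration version="2.0" label="Custom" description="Illustrative template">
      <event name="jdk.CPULoad">
        <setting name="enabled">true</setting>
        <setting name="period">1 s</setting>
      </event>
    </configuration>
---
apiVersion: operator.cryostat.io/v1beta1   # assumed API version
kind: Cryostat
metadata:
  name: cryostat-sample
spec:
  # Assumed fields: point the Operator at the template stored in the ConfigMap.
  eventTemplates:
    - configMapName: custom-event-templates
      filename: custom.jfc
```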
8.205. spice-server | 8.205. spice-server 8.205.1. RHBA-2013:1571 - spice-server bug fix and enhancement update Updated spice-server packages that fix a number of bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The Simple Protocol for Independent Computing Environments ( SPICE ) is a remote display protocol for virtual environments. SPICE users can access a virtualized desktop or server from the local system or any system with network access to the server. SPICE is used in Red Hat Enterprise Linux for viewing virtualized guests running on the Kernel-based Virtual Machine (KVM) hypervisor or on Red Hat Enterprise Virtualization Hypervisors. Note The spice-server packages have been upgraded to upstream version 0.12.4, which provides a number of bug fixes and enhancements over the version. (BZ# 952671 ) Bug Fixes BZ# 823472 Data accessed from the main thread, which use most SPICE channels, could be accessed by threads of other channels, such as display and cursor channels. To protect the data, an assertion check has been added to the SPICE code. However, certain calls to the sound channel interface use the Virtual CPU (vCPU) thread. Previously, these calls were rejected by the assertion check causing the SPICE server and the Kernel-based Virtual Machine (KVM) hypervisor to abort. Such calls are harmless because KVM uses global mutual exclusion (mutex) for the vCPU and I/O threads. With this update, a warning is returned instead of aborting SPICE and KVM. BZ# 859027 When the client_migrate_info() function was called with the cert-host-subject option specified and then was called without the option, on the third call, the option was freed for the second time. This was because the pointer was not set to NULL after it was first freed during the second call. This behavior caused the SPICE server to terminate unexpectedly with a segmentation fault. The underlying source code has been modified and the pointer is set to NULL when the cert-host-subject option is not specified. As a result, the pointer is freed only once and SPICE no longer crashes in the described scenario. BZ# 918169 When two items were to be sent to a client and the client became disconnected, the first item was cleared successfully but the second one was not. Consequently, the SPICE server terminated unexpectedly due an assertion check failure. This update applies a patch to fix this bug so that the second item is now properly cleared, too. As a result, the SPICE server no longer crashes in the described scenario. BZ# 918472 Due to a bug in the SPICE source code, an attempt to run the getaddrinfo() function failed with a segmentation fault. Consequently, Quick Emulator (QEMU) terminated unexpectedly. The underlying source code has been modified and QEMU no longer crashes when executing getaddrinfo() . BZ# 950029 When the SPICE source server was streaming video data during a migration process, the SPICE server could send stream-related messages to the SPICE client after sending a MSG_MIGRATE message. This is not allowed and the client thus forwarded a wrong message instead of a MSG_MIGRATE_DATA message to the destination host. The destination host then aborted the migration. This update modifies the SPICE server code to ensure that only the MSG_MIGRATE_DATA message can be sent after sending MSG_MIGRATE and the migration process now successfully finish. 
BZ# 952666 Previously, the SPICE server did not allow creation of a surface with the " stride >= 0 " path because the path was untested and it was not requested before by any QXL driver. Consequently, when a QXL driver attempted to create such a surface, SPICE terminated unexpectedly with an error on certain systems. The underlying source code has been modified to allow creation of the surface with the " stride >= 0 " path. As a result, the SPICE server no longer crashes in the described scenario. BZ# 956345 Under certain circumstances, the SPICE server could abort upon a virtual machine (VM) migration. This could happen if the VM was being migrated to a new host after the migration to the current host within the same SPICE client session. Then, if the connection between the original host and the client was a low bandwidth connection, the new host passed an incorrect connection bandwidth value to the SPICE client causing the SPICE server to abort. This update provides a patch addressing this problem and the SPICE server now sends the correct connection bandwidth value in this scenario. BZ# 958276 Previously, the destination host did not send its multi-media time to a client during migration so that the client held the multi-media time of the source server. As a consequence, if the source and destination hosts had different multi-media time and no audio playback, video frames that were created after the migration were dropped by the client. This update applies a patch to fix this bug and video frames are no longer dropped in the described scenario. BZ# 977998 Previously, an incorrect flag that was set when sending bitmaps caused an endless loop in the client display channel. This behavior occurred especially under limited bandwidth conditions. Consequently, the SPICE server could become unresponsive. The underlying source code has been modified to fix this bug and SPICE no longer hangs in such a situation. BZ# 977998 Previously, the waiting timeout period for a client response was set to 150 seconds. This duration was too long and caused, under certain circumstances, server errors to be returned. With this update, the waiting timeout period was set to 30 seconds to prevent the server errors from occurring. Enhancements BZ# 961848 With this enhancement, support for the new QEMU disable-agent-file-transfer option has been provided. As a result, users can now filter out the file transfer messages. BZ# 978403 This update introduces the adaptive video streaming that provides better video quality. With this feature, the network bandwidth and latency are estimated and the video bit-rate and the playback latency are dynamically adjusted. Moreover, the bandwidth and latency estimations of past video playback are used for improving the initial parameters setting of future video playbacks. All spice-server users are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/spice-server |
Chapter 7. Configure storage for OpenShift Container Platform services | Chapter 7. Configure storage for OpenShift Container Platform services You can use OpenShift Data Foundation to provide storage for OpenShift Container Platform services such as image registry, monitoring, and logging. The process for configuring storage for these services depends on the infrastructure used in your OpenShift Data Foundation deployment. Warning Always ensure that you have plenty of storage capacity for these services. If the storage for these critical services runs out of space, the cluster becomes inoperable and very difficult to recover. Red Hat recommends configuring shorter curation and retention intervals for these services. See Configuring the Curator schedule and the Modifying retention time for Prometheus metrics data sub section of Configuring persistent storage in the OpenShift Container Platform documentation for details. If you do run out of storage space for these services, contact Red Hat Customer Support. 7.1. Configuring Image Registry to use OpenShift Data Foundation OpenShift Container Platform provides a built in Container Image Registry which runs as a standard workload on the cluster. A registry is typically used as a publication target for images built on the cluster as well as a source of images for workloads running on the cluster. Warning This process does not migrate data from an existing image registry to the new image registry. If you already have container images in your existing registry, back up your registry before you complete this process, and re-register your images when this process is complete. Prerequisites You have administrative access to OpenShift Web Console. OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. In OpenShift Web Console, click Operators Installed Operators to view installed operators. Image Registry Operator is installed and running in the openshift-image-registry namespace. In OpenShift Web Console, click Administration Cluster Settings Cluster Operators to view cluster operators. A storage class with provisioner openshift-storage.cephfs.csi.ceph.com is available. In OpenShift Web Console, click Storage StorageClasses to view available storage classes. Procedure Create a Persistent Volume Claim for the Image Registry to use. In the OpenShift Web Console, click Storage Persistent Volume Claims . Set the Project to openshift-image-registry . Click Create Persistent Volume Claim . From the list of available storage classes retrieved above, specify the Storage Class with the provisioner openshift-storage.cephfs.csi.ceph.com . Specify the Persistent Volume Claim Name , for example, ocs4registry . Specify an Access Mode of Shared Access (RWX) . Specify a Size of at least 100 GB. Click Create . Wait until the status of the new Persistent Volume Claim is listed as Bound . Configure the cluster's Image Registry to use the new Persistent Volume Claim. Click Administration Custom Resource Definitions . Click the Config custom resource definition associated with the imageregistry.operator.openshift.io group. Click the Instances tab. Beside the cluster instance, click the Action Menu (...) Edit Config . Add the new Persistent Volume Claim as persistent storage for the Image Registry. Add the following under spec: , replacing the existing storage: section if necessary. For example: Click Save . Verify that the new configuration is being used. Click Workloads Pods . Set the Project to openshift-image-registry . 
Verify that the new image-registry-* pod appears with a status of Running , and that the image-registry-* pod terminates. Click the new image-registry-* pod to view pod details. Scroll down to Volumes and verify that the registry-storage volume has a Type that matches your new Persistent Volume Claim, for example, ocs4registry . 7.2. Configuring monitoring to use OpenShift Data Foundation OpenShift Data Foundation provides a monitoring stack that comprises of Prometheus and Alert Manager. Follow the instructions in this section to configure OpenShift Data Foundation as storage for the monitoring stack. Important Monitoring will not function if it runs out of storage space. Always ensure that you have plenty of storage capacity for monitoring. Red Hat recommends configuring a short retention interval for this service. See the Modifying retention time for Prometheus metrics data of Monitoring guide in the OpenShift Container Platform documentation for details. Prerequisites You have administrative access to OpenShift Web Console. OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. In the OpenShift Web Console, click Operators Installed Operators to view installed operators. Monitoring Operator is installed and running in the openshift-monitoring namespace. In the OpenShift Web Console, click Administration Cluster Settings Cluster Operators to view cluster operators. A storage class with provisioner openshift-storage.rbd.csi.ceph.com is available. In the OpenShift Web Console, click Storage StorageClasses to view available storage classes. Procedure In the OpenShift Web Console, go to Workloads Config Maps . Set the Project dropdown to openshift-monitoring . Click Create Config Map . Define a new cluster-monitoring-config Config Map using the following example. Replace the content in angle brackets ( < , > ) with your own values, for example, retention: 24h or storage: 40Gi . Replace the storageClassName with the storageclass that uses the provisioner openshift-storage.rbd.csi.ceph.com . In the example given below the name of the storageclass is ocs-storagecluster-ceph-rbd . Example cluster-monitoring-config Config Map Click Create to save and create the Config Map. Verification steps Verify that the Persistent Volume Claims are bound to the pods. Go to Storage Persistent Volume Claims . Set the Project dropdown to openshift-monitoring . Verify that 5 Persistent Volume Claims are visible with a state of Bound , attached to three alertmanager-main-* pods, and two prometheus-k8s-* pods. Figure 7.1. Monitoring storage created and bound Verify that the new alertmanager-main-* pods appear with a state of Running . Go to Workloads Pods . Click the new alertmanager-main-* pods to view the pod details. Scroll down to Volumes and verify that the volume has a Type , ocs-alertmanager-claim that matches one of your new Persistent Volume Claims, for example, ocs-alertmanager-claim-alertmanager-main-0 . Figure 7.2. Persistent Volume Claims attached to alertmanager-main-* pod Verify that the new prometheus-k8s-* pods appear with a state of Running . Click the new prometheus-k8s-* pods to view the pod details. Scroll down to Volumes and verify that the volume has a Type , ocs-prometheus-claim that matches one of your new Persistent Volume Claims, for example, ocs-prometheus-claim-prometheus-k8s-0 . Figure 7.3. Persistent Volume Claims attached to prometheus-k8s-* pod 7.3. 
Cluster logging for OpenShift Data Foundation You can deploy cluster logging to aggregate logs for a range of OpenShift Container Platform services. For information about how to deploy cluster logging, see Deploying cluster logging . Upon initial OpenShift Container Platform deployment, OpenShift Data Foundation is not configured by default and the OpenShift Container Platform cluster will solely rely on default storage available from the nodes. You can edit the default configuration of OpenShift logging (ElasticSearch) to be backed by OpenShift Data Foundation to have OpenShift Data Foundation backed logging (Elasticsearch). Important Always ensure that you have plenty of storage capacity for these services. If you run out of storage space for these critical services, the logging application becomes inoperable and very difficult to recover. Red Hat recommends configuring shorter curation and retention intervals for these services. See Cluster logging curator in the OpenShift Container Platform documentation for details. If you run out of storage space for these services, contact Red Hat Customer Support. 7.3.1. Configuring persistent storage You can configure a persistent storage class and size for the Elasticsearch cluster using the storage class name and size parameters. The Cluster Logging Operator creates a Persistent Volume Claim for each data node in the Elasticsearch cluster based on these parameters. For example: This example specifies that each data node in the cluster will be bound to a Persistent Volume Claim that requests 200GiB of ocs-storagecluster-ceph-rbd storage. Each primary shard will be backed by a single replica. A copy of the shard is replicated across all the nodes and are always available and the copy can be recovered if at least two nodes exist due to the single redundancy policy. For information about Elasticsearch replication policies, see Elasticsearch replication policy in About deploying and configuring cluster logging . Note Omission of the storage block will result in a deployment backed by default storage. For example: For more information, see Configuring cluster logging . 7.3.2. Configuring cluster logging to use OpenShift data Foundation Follow the instructions in this section to configure OpenShift Data Foundation as storage for the OpenShift cluster logging. Note You can obtain all the logs when you configure logging for the first time in OpenShift Data Foundation. However, after you uninstall and reinstall logging, the old logs are removed and only the new logs are processed. Prerequisites You have administrative access to OpenShift Web Console. OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. Cluster logging Operator is installed and running in the openshift-logging namespace. Procedure Click Administration Custom Resource Definitions from the left pane of the OpenShift Web Console. On the Custom Resource Definitions page, click ClusterLogging . On the Custom Resource Definition Overview page, select View Instances from the Actions menu or click the Instances Tab. On the Cluster Logging page, click Create Cluster Logging . You might have to refresh the page to load the data. In the YAML, replace the storageClassName with the storageclass that uses the provisioner openshift-storage.rbd.csi.ceph.com . 
In the example given below the name of the storageclass is ocs-storagecluster-ceph-rbd : If you have tainted the OpenShift Data Foundation nodes, you must add toleration to enable scheduling of the daemonset pods for logging. Click Save . Verification steps Verify that the Persistent Volume Claims are bound to the elasticsearch pods. Go to Storage Persistent Volume Claims . Set the Project dropdown to openshift-logging . Verify that Persistent Volume Claims are visible with a state of Bound , attached to elasticsearch- * pods. Figure 7.4. Cluster logging created and bound Verify that the new cluster logging is being used. Click Workload Pods . Set the Project to openshift-logging . Verify that the new elasticsearch- * pods appear with a state of Running . Click the new elasticsearch- * pod to view pod details. Scroll down to Volumes and verify that the elasticsearch volume has a Type that matches your new Persistent Volume Claim, for example, elasticsearch-elasticsearch-cdm-9r624biv-3 . Click the Persistent Volume Claim name and verify the storage class name in the PersistentVolumeClaim Overview page. Note Make sure to use a shorter curator time to avoid PV full scenario on PVs attached to Elasticsearch pods. You can configure Curator to delete Elasticsearch data based on retention settings. It is recommended that you set the following default index data retention of 5 days as a default. For more details, see Curation of Elasticsearch Data . Note To uninstall the cluster logging backed by Persistent Volume Claim, use the procedure removing the cluster logging operator from OpenShift Data Foundation in the uninstall chapter of the respective deployment guide. | [
"storage: pvc: claim: <new-pvc-name>",
"storage: pvc: claim: ocs4registry",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: retention: <time to retain monitoring files, e.g. 24h> volumeClaimTemplate: metadata: name: ocs-prometheus-claim spec: storageClassName: ocs-storagecluster-ceph-rbd resources: requests: storage: <size of claim, e.g. 40Gi> alertmanagerMain: volumeClaimTemplate: metadata: name: ocs-alertmanager-claim spec: storageClassName: ocs-storagecluster-ceph-rbd resources: requests: storage: <size of claim, e.g. 40Gi>",
"spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: storageClassName: \"ocs-storagecluster-ceph-rbd\" size: \"200G\"",
"spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: {}",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: \"openshift-logging\" spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: storageClassName: ocs-storagecluster-ceph-rbd size: 200G # Change as per your requirement redundancyPolicy: \"SingleRedundancy\" visualization: type: \"kibana\" kibana: replicas: 1 curation: type: \"curator\" curator: schedule: \"30 3 * * *\" collection: logs: type: \"fluentd\" fluentd: {}",
"spec: [...] collection: logs: fluentd: tolerations: - effect: NoSchedule key: node.ocs.openshift.io/storage value: 'true' type: fluentd",
"config.yaml: | openshift-storage: delete: days: 5"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/configure_storage_for_openshift_container_platform_services |
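As a CLI alternative to the web-console steps above for the image registry, the same storage change can be applied with a single patch. This is a minimal sketch that reuses the ocs4registry claim name from the example; adjust the claim name to match the Persistent Volume Claim you created.

```bash
# Point the cluster image registry at the RWX PVC backed by OpenShift Data Foundation.
oc patch config.imageregistry.operator.openshift.io/cluster \
  --type merge \
  -p '{"spec":{"storage":{"pvc":{"claim":"ocs4registry"}}}}'

# Watch the registry pods roll out with the new storage.
oc get pods -n openshift-image-registry -w
```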
Operators | Operators OpenShift Container Platform 4.11 Working with Operators in OpenShift Container Platform Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/operators/index |
15.2. Overview of the IBM Z Installation Procedure | 15.2. Overview of the IBM Z Installation Procedure You can install Red Hat Enterprise Linux on IBM Z interactively or in unattended mode. Installation on IBM Z differs from installation on other architectures in that it is typically performed over a network and not from a local media. The installation consists of three phases: Booting the Installation Connect with the mainframe, then perform an initial program load (IPL), or boot, from the medium containing the installation program. See Chapter 16, Booting the Installation on IBM Z for details. Connecting to the installation system From a local machine, connect to the remote IBM Z system to continue with the installation process. See Chapter 17, Connecting to the installation system for details. Anaconda Use the Anaconda installation program to configure network, specify language support, installation source, software packages to be installed, and to perform the rest of the installation. See Chapter 18, Installing Using Anaconda for more information. 15.2.1. Booting the Installation After establishing a connection with the mainframe, you need to perform an initial program load (IPL), or boot, from the medium containing the installation program. This document describes the most common methods of installing Red Hat Enterprise Linux on IBM Z. In general, you can use any method to boot the Linux installation system, which consists of a kernel ( kernel.img ) and initial RAM disk ( initrd.img ) with at least the parameters in the generic.prm file. Additionally, a generic.ins file is loaded which determines file names and memory addresses for the initrd, kernel and generic.prm. The Linux installation system is also called the installation program in this book. The control point from where you can start the IPL process depends on the environment where your Linux is to run. If your Linux is to run as a z/VM guest operating system, the control point is the control program (CP) of the hosting z/VM. If your Linux is to run in LPAR mode, the control point is the mainframe's Support Element (SE) or an attached IBM Z Hardware Management Console (HMC). You can use the following boot media only if Linux is to run as a guest operating system under z/VM: z/VM reader - see Section 16.3.1, "Using the z/VM Reader" for details. You can use the following boot media only if Linux is to run in LPAR mode: SE or HMC through a remote FTP server - see Section 16.4.1, "Using an FTP Server" for details. SE or HMC DVD - see Section 16.4.4, "Using an FCP-attached SCSI DVD Drive" for details. You can use the following boot media for both z/VM and LPAR: DASD - see Section 16.3.2, "Using a Prepared DASD" for z/VM or Section 16.4.2, "Using a Prepared DASD" for LPAR. SCSI device that is attached through an FCP channel - see Section 16.3.3, "Using a Prepared FCP-attached SCSI Disk" for z/VM or Section 16.4.3, "Using a Prepared FCP-attached SCSI Disk" for LPAR. FCP-attached SCSI DVD - see Section 16.3.4, "Using an FCP-attached SCSI DVD Drive" for z/VM or Section 16.4.4, "Using an FCP-attached SCSI DVD Drive" for LPAR If you use DASD and FCP-attached SCSI devices (except SCSI DVDs) as boot media, you must have a configured zipl boot loader. 15.2.2. Connecting to the installation system From a local machine, connect to the remote IBM Z system to continue with the installation process. See Chapter 17, Connecting to the installation system for details. 15.2.3. 
Installation using Anaconda In the second installation phase, you will use the Anaconda installation program in graphical, text-based, or command-line mode: Graphical Mode Graphical installation is done through a VNC client. You can use your mouse and keyboard to navigate through the screens, click buttons, and type into text fields. For more information on performing a graphical installation using VNC, see Chapter 25, Using VNC . Text-based Mode This interface does not offer all interface elements of the GUI and does not support all settings. Use this for interactive installations if you cannot use a VNC client. For more information about text-based installations, see Section 18.4, "Installing in Text Mode" . Command-line Mode This is intended for automated and non-interactive installations on IBM Z. Note that if the installation program encounters an invalid or missing kickstart command, the system will reboot. For more information about automated installation, see Chapter 27, Kickstart Installations . In Red Hat Enterprise Linux 7 the text-based installation has been reduced to minimize user interaction. Features like installation on FCP-attached SCSI devices, customizing partition layout, or package add-on selection are only available with the graphical user interface installation. Use the graphical installation whenever possible. See Chapter 18, Installing Using Anaconda for more details. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/sect-installation-overview-s390 |
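The boot phase described above relies on the generic.ins file to tell the IPL mechanism which files to load and at which memory addresses. As a rough illustration only, a generic.ins file looks like the following; the exact file names and load addresses vary by release, so always use the generic.ins shipped on your installation media rather than this sketch.

```
* generic.ins -- illustrative layout only; addresses differ between releases
images/kernel.img 0x00000000
images/initrd.img 0x02000000
images/generic.prm 0x00010480
images/initrd.addrsize 0x00010408
```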
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your input on our documentation. Tell us how we can make it better. Providing documentation feedback in Jira Use the Create Issue form to provide feedback on the documentation. The Jira issue will be created in the Red Hat OpenStack Platform Jira project, where you can track the progress of your feedback. Ensure that you are logged in to Jira. If you do not have a Jira account, create an account to submit feedback. Click the following link to open the Create Issue page: Create Issue Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form. Click Create . | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/network_functions_virtualization_planning_and_configuration_guide/proc_providing-feedback-on-red-hat-documentation |
Chapter 4. User-managed encryption for IBM Cloud | Chapter 4. User-managed encryption for IBM Cloud By default, provider-managed encryption is used to secure the following when you deploy an OpenShift Container Platform cluster: The root (boot) volume of control plane and compute machines Persistent volumes (data volumes) that are provisioned after the cluster is deployed You can override the default behavior by specifying an IBM(R) Key Protect for IBM Cloud(R) (Key Protect) root key as part of the installation process. When you bring your own root key, you modify the installation configuration file ( install-config.yaml ) to specify the Cloud Resource Name (CRN) of the root key by using the encryptionKey parameter. You can specify that: The same root key be used for all cluster machines. You do so by specifying the key as part of the cluster's default machine configuration. When specified as part of the default machine configuration, all managed storage classes are updated with this key. As such, data volumes that are provisioned after the installation are also encrypted using this key. Separate root keys be used for the control plane and compute machine pools. For more information about the encryptionKey parameter, see Additional IBM Cloud configuration parameters . Note Make sure you have integrated Key Protect with your IBM Cloud Block Storage service. For more information, see the Key Protect documentation . 4.1. Next steps Install an OpenShift Container Platform cluster: Installing a cluster on IBM Cloud with customizations Installing a cluster on IBM Cloud with network customizations Installing a cluster on IBM Cloud into an existing VPC Installing a private cluster on IBM Cloud | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installing_on_ibm_cloud/user-managed-encryption-ibm-cloud |
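To make the encryptionKey usage above concrete, the following install-config.yaml fragment shows a root key applied through the default machine configuration so that the same key covers all cluster machines. The nesting ( platform.ibmcloud.defaultMachinePlatform.bootVolume.encryptionKey ) and the CRN placeholders are assumptions based on the description in this chapter; confirm the parameter path in Additional IBM Cloud configuration parameters before using it.

```yaml
# Fragment of install-config.yaml (assumed field layout)
platform:
  ibmcloud:
    region: us-south
    defaultMachinePlatform:
      bootVolume:
        # CRN of the Key Protect root key; replace the placeholders with your values.
        encryptionKey: "crn:v1:bluemix:public:kms:us-south:a/<account-id>:<instance-id>:key:<key-id>"
```

To use separate keys per pool, the same bootVolume.encryptionKey setting would instead be set under the control plane and compute machine pool entries.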
20.5. Creating a New Quota Policy | 20.5. Creating a New Quota Policy You have enabled quota mode, either in Audit or Enforcing mode. You want to define a quota policy to manage resource usage in your data center. Creating a New Quota Policy Click Administration Quota . Click Add . Fill in the Name and Description fields. Select a Data Center . In the Memory & CPU section, use the green slider to set Cluster Threshold . In the Memory & CPU section, use the blue slider to set Cluster Grace . Click the All Clusters or the Specific Clusters radio button. If you select Specific Clusters , select the check box of the clusters that you want to add a quota policy to. Click Edit to open the Edit Quota window. Under the Memory field, select either the Unlimited radio button (to allow limitless use of Memory resources in the cluster), or select the limit to radio button to set the amount of memory set by this quota. If you select the limit to radio button, input a memory quota in megabytes (MB) in the MB field. Under the CPU field, select either the Unlimited radio button or the limit to radio button to set the amount of CPU set by this quota. If you select the limit to radio button, input a number of vCPUs in the vCpus field. Click OK in the Edit Quota window. In the Storage section, use the green slider to set Storage Threshold . In the Storage section, use the blue slider to set Storage Grace . Click the All Storage Domains or the Specific Storage Domains radio button. If you select Specific Storage Domains , select the check box of the storage domains that you want to add a quota policy to. Click Edit to open the Edit Quota window. Under the Storage Quota field, select either the Unlimited radio button (to allow limitless use of Storage) or the limit to radio button to set the amount of storage to which quota will limit users. If you select the limit to radio button, input a storage quota size in gigabytes (GB) in the GB field. Click OK in the Edit Quota window. Click OK in the New Quota window. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/creating_a_new_quota_policy |
Chapter 2. Setting up and configuring NGINX | Chapter 2. Setting up and configuring NGINX NGINX is a high performance and modular server that you can use, for example, as a: Web server Reverse proxy Load balancer This section describes how to use NGINX in these scenarios. 2.1. Installing and preparing NGINX Red Hat uses Application Streams to provide different versions of NGINX. You can do the following: Select a stream and install NGINX Open the required ports in the firewall Enable and start the nginx service Using the default configuration, NGINX runs as a web server on port 80 and provides content from the /usr/share/nginx/html/ directory. Prerequisites RHEL 8 is installed. The host is subscribed to the Red Hat Customer Portal. The firewalld service is enabled and started. Procedure Display the available NGINX module streams: If you want to install a different stream than the default, select the stream: Install the nginx package: Open the ports on which NGINX should provide its service in the firewall. For example, to open the default ports for HTTP (port 80) and HTTPS (port 443) in firewalld , enter: Enable the nginx service to start automatically when the system boots: Optional: Start the nginx service: If you do not want to use the default configuration, skip this step, and configure NGINX accordingly before you start the service. Important The PHP module requires a specific NGINX version. Using an incompatible version can cause conflicts when upgrading to a newer NGINX stream. When using the PHP 7.2 stream and the NGINX 1.24 stream, you can resolve this issue by enabling the newer PHP 7.4 stream before installing NGINX. Verification Use the yum utility to verify that the nginx package is installed: Ensure that the ports on which NGINX should provide its service are opened in firewalld: Verify that the nginx service is enabled: Additional resources For details about Subscription Manager, see the Subscription Manager . For further details about Application Streams, modules, and installing packages, see the Installing, managing, and removing user-space components guide. For details about configuring firewalls, see the Securing networks guide. 2.2. Configuring NGINX as a web server that provides different content for different domains By default, NGINX acts as a web server that provides the same content to clients for all domain names associated with the IP addresses of the server. This procedure explains how to configure NGINX: To serve requests to the example.com domain with content from the /var/www/example.com/ directory To serve requests to the example.net domain with content from the /var/www/example.net/ directory To serve all other requests, for example, to the IP address of the server or to other domains associated with the IP address of the server, with content from the /usr/share/nginx/html/ directory Prerequisites NGINX is installed Clients and the web server resolve the example.com and example.net domains to the IP address of the web server. Note that you must manually add these entries to your DNS server. Procedure Edit the /etc/nginx/nginx.conf file: By default, the /etc/nginx/nginx.conf file already contains a catch-all configuration. If you have deleted this part from the configuration, re-add the following server block to the http block in the /etc/nginx/nginx.conf file: These settings configure the following: The listen directive defines which IP addresses and ports the service listens on. In this case, NGINX listens on port 80 on all IPv4 and IPv6 addresses. 
The default_server parameter indicates that NGINX uses this server block as the default for requests matching the IP addresses and ports. The server_name parameter defines the host names for which this server block is responsible. Setting server_name to _ configures NGINX to accept any host name for this server block. The root directive sets the path to the web content for this server block. Append a similar server block for the example.com domain to the http block: The access_log directive defines a separate access log file for this domain. The error_log directive defines a separate error log file for this domain. Append a similar server block for the example.net domain to the http block: Create the root directories for both domains: Set the httpd_sys_content_t context on both root directories: These commands set the httpd_sys_content_t context on the /var/www/example.com/ and /var/www/example.net/ directories. Note that you must install the policycoreutils-python-utils package to run the restorecon commands. Create the log directories for both domains: Restart the nginx service: Verification Create a different example file in each virtual host's document root: Use a browser and connect to http://example.com . The web server shows the example content from the /var/www/example.com/index.html file. Use a browser and connect to http://example.net . The web server shows the example content from the /var/www/example.net/index.html file. Use a browser and connect to http:// IP_address_of_the_server . The web server shows the example content from the /usr/share/nginx/html/index.html file. 2.3. Adding TLS encryption to an NGINX web server You can enable TLS encryption on an NGINX web server for the example.com domain. Prerequisites NGINX is installed. The private key is stored in the /etc/pki/tls/private/example.com.key file. For details about creating a private key and certificate signing request (CSR), as well as how to request a certificate from a certificate authority (CA), see your CA's documentation. The TLS certificate is stored in the /etc/pki/tls/certs/example.com.crt file. If you use a different path, adapt the corresponding steps of the procedure. The CA certificate has been appended to the TLS certificate file of the server. Clients and the web server resolve the host name of the server to the IP address of the web server. Port 443 is open in the local firewall. Procedure Edit the /etc/nginx/nginx.conf file, and add the following server block to the http block in the configuration: For security reasons, configure that only the root user can access the private key file: Warning If the private key was accessed by unauthorized users, revoke the certificate, create a new private key, and request a new certificate. Otherwise, the TLS connection is no longer secure. Restart the nginx service: Verification Use a browser and connect to https://example.com Additional resources Security considerations for TLS in RHEL 2.4. Configuring NGINX as a reverse proxy for the HTTP traffic You can configure the NGINX web server to act as a reverse proxy for HTTP traffic. For example, you can use this functionality to forward requests to a specific subdirectory on a remote server. From the client perspective, the client loads the content from the host it accesses. However, NGINX loads the actual content from the remote server and forwards it to the client. This procedure explains how to forward traffic to the /example directory on the web server to the URL https://example.com . 
Prerequisites NGINX is installed as described in Installing and preparing NGINX . Optional: TLS encryption is enabled on the reverse proxy. Procedure Edit the /etc/nginx/nginx.conf file and add the following settings to the server block that should provide the reverse proxy: The location block defines that NGINX passes all requests in the /example directory to https://example.com . Set the httpd_can_network_connect SELinux boolean parameter to 1 to configure that SELinux allows NGINX to forward traffic: Restart the nginx service: Verification Use a browser and connect to http:// host_name /example and the content of https://example.com is shown. 2.5. Configuring NGINX as an HTTP load balancer You can use the NGINX reverse proxy feature to load-balance traffic. This procedure describes how to configure NGINX as an HTTP load balancer that sends requests to different servers, based on which of them has the least number of active connections. If both servers are not available, the procedure also defines a third host for fallback reasons. Prerequisites NGINX is installed as described in Installing and preparing NGINX . Procedure Edit the /etc/nginx/nginx.conf file and add the following settings: The least_conn directive in the host group named backend defines that NGINX sends requests to server1.example.com or server2.example.com , depending on which host has the least number of active connections. NGINX uses server3.example.com only as a backup in case that the other two hosts are not available. With the proxy_pass directive set to http://backend , NGINX acts as a reverse proxy and uses the backend host group to distribute requests based on the settings of this group. Instead of the least_conn load balancing method, you can specify: No method to use round robin and distribute requests evenly across servers. ip_hash to send requests from one client address to the same server based on a hash calculated from the first three octets of the IPv4 address or the whole IPv6 address of the client. hash to determine the server based on a user-defined key, which can be a string, a variable, or a combination of both. The consistent parameter configures that NGINX distributes requests across all servers based on the user-defined hashed key value. random to send requests to a randomly selected server. Restart the nginx service: 2.6. Additional resources For the official NGINX documentation see https://nginx.org/en/docs/ . Note that Red Hat does not maintain this documentation and that it might not work with the NGINX version you have installed. Configuring applications to use cryptographic hardware through PKCS #11 . | [
"yum module list nginx Red Hat Enterprise Linux 8 for x86_64 - AppStream (RPMs) Name Stream Profiles Summary nginx 1.14 [d] common [d] nginx webserver nginx 1.16 common [d] nginx webserver Hint: [d]efault, [e]nabled, [x]disabled, [i]nstalled",
"yum module enable nginx: stream_version",
"yum install nginx",
"firewall-cmd --permanent --add-port={80/tcp,443/tcp} firewall-cmd --reload",
"systemctl enable nginx",
"systemctl start nginx",
"yum list installed nginx Installed Packages nginx.x86_64 1:1.14.1-9.module+el8.0.0+4108+af250afe @rhel-8-for-x86_64-appstream-rpms",
"firewall-cmd --list-ports 80/tcp 443/tcp",
"systemctl is-enabled nginx enabled",
"server { listen 80 default_server; listen [::]:80 default_server; server_name _; root /usr/share/nginx/html; }",
"server { server_name example.com; root /var/www/example.com/; access_log /var/log/nginx/example.com/access.log; error_log /var/log/nginx/example.com/error.log; }",
"server { server_name example.net; root /var/www/example.net/; access_log /var/log/nginx/example.net/access.log; error_log /var/log/nginx/example.net/error.log; }",
"mkdir -p /var/www/example.com/ mkdir -p /var/www/example.net/",
"semanage fcontext -a -t httpd_sys_content_t \"/var/www/example.com(/.*)?\" restorecon -Rv /var/www/example.com/ semanage fcontext -a -t httpd_sys_content_t \"/var/www/example.net(/.\\*)?\" restorecon -Rv /var/www/example.net/",
"mkdir /var/log/nginx/example.com/ mkdir /var/log/nginx/example.net/",
"systemctl restart nginx",
"echo \"Content for example.com\" > /var/www/example.com/index.html echo \"Content for example.net\" > /var/www/example.net/index.html echo \"Catch All content\" > /usr/share/nginx/html/index.html",
"server { listen 443 ssl; server_name example.com; root /usr/share/nginx/html; ssl_certificate /etc/pki/tls/certs/example.com.crt; ssl_certificate_key /etc/pki/tls/private/example.com.key; }",
"chown root:root /etc/pki/tls/private/example.com.key chmod 600 /etc/pki/tls/private/example.com.key",
"systemctl restart nginx",
"location /example { proxy_pass https://example.com; }",
"setsebool -P httpd_can_network_connect 1",
"systemctl restart nginx",
"http { upstream backend { least_conn; server server1.example.com ; server server2.example.com ; server server3.example.com backup; } server { location / { proxy_pass http:// backend ; } } }",
"systemctl restart nginx"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/deploying_different_types_of_servers/setting-up-and-configuring-nginx_deploying-different-types-of-servers |
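The load-balancing section above lists the ip_hash , hash , and random methods but only shows least_conn in configuration. The following variant of the same upstream block is a minimal sketch of consistent hashing keyed on the request URI, which keeps most keys mapped to the same server when the pool changes.

```
http {
    upstream backend {
        # Hash on the request URI; "consistent" enables ketama-style hashing
        # so that adding or removing a server remaps only a fraction of keys.
        hash $request_uri consistent;
        server server1.example.com;
        server server2.example.com;
    }

    server {
        location / {
            proxy_pass http://backend;
        }
    }
}
```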
Chapter 2. Your path to secure application development | Chapter 2. Your path to secure application development Red Hat Trusted Application Pipeline (RHTAP) significantly enhances the efficiency of containerizing and deploying applications, enabling developers to deploy their work within minutes. This innovative platform not only facilitates the creation of a build pipeline for testing and integrating application changes swiftly but also fortifies security measures against supply-chain attacks. By adhering to the rigorous standards of the Supply-chain Levels for Software Artifacts (SLSA) security framework, RHTAP ensures compliance with high-level security requirements. 2.1. Installation overview Before tapping into the vast array of benefits offered by RHTAP, the initial step involves its installation within your organization. The installation of RHTAP is structured around seven key procedures: Creating a GitHub application for RHTAP Forking the template catalog Creating a GitOps git token Creating the Docker configuration value Creating a private-values.yaml file Installing RHTAP in your cluster Finalizing your GitHub application 2.2. Initial setup Prior to beginning the installation process, certain prerequisites must be met to ensure a smooth and successful setup: Cluster Access : Ensure you have ClusterAdmin access to an OpenShift Container Platform (OCP) cluster, accessible both via the CLI and the web console. Red Hat Advanced Cluster Security (ACS) : Obtain necessary values from your ACS instance, including: ACS API token: Follow the instructions provided here to create an API token. ACS central endpoint URL: Configure the endpoint by referring to the instructions available here . Configure ACS for Private Repositories : If you're using private repositories in image registries like Quay.io, configure ACS accordingly: For Quay.io, navigate to Integrations > Image Integrations and select the Quay.io card. Add your OAuth tokens to access your specific Quay.io instance. Validate access via the test button to ensure ACS can scan private images when required. Quay.io Account : Ensure you have an active Quay.io account. Helm CLI Tool : Install the Helm CLI tool by following the guidelines provided here . GitHub Account : Lastly, make sure you have a GitHub account to facilitate certain installation procedures. With these prerequisites in place, you are well-prepared to initiate the installation process by creating a new GitHub application specifically for your RHTAP instance. step Install Red Hat Trusted Application Pipeline | null | https://docs.redhat.com/en/documentation/red_hat_trusted_application_pipeline/1.0/html/getting_started_with_red_hat_trusted_application_pipeline/installing-red-hat-trusted-application-pipeline_default |
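The installation overview above mentions creating a Docker configuration value for the installer but does not show how such a value is usually produced. The following shell sketch is one common approach, assuming the installer expects a base64-encoded Docker auth file for Quay.io; the exact key that the value belongs under in private-values.yaml comes from the RHTAP installation guide, not from this overview.

```bash
# Log in to Quay.io and capture the credentials in a standalone auth file.
podman login quay.io --authfile ./rhtap-quay-auth.json

# Encode the auth file as a single base64 string for use in a Helm values file.
base64 -w0 ./rhtap-quay-auth.json
```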
Chapter 1. All-in-one Red Hat OpenStack Platform installation | Chapter 1. All-in-one Red Hat OpenStack Platform installation The all-in-one installation method uses TripleO to deploy Red Hat OpenStack Platform and related services with a simple, single-node environment. Use this installation to enable proof-of-concept, development, and test deployments on a single node with limited or no follow-up operations. Important This feature is available in this release as a Technology Preview , and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details . 1.1. Prerequisites Your system must have a Red Hat Enterprise Linux 9.0 base operating system installed. Your system must have two network interfaces so that internet connectivity is not disrupted while TripleO configures the second interface. Your system must have 4 CPUs, 8GB RAM, and 30GB disk space. Example network configuration Interface eth0 assigned to the default network 192.168.122.0/24. Use this interface for general connectivity. This interface must have internet access. Interface eth1 assigned to the management network 192.168.25.0/24. TripleO uses this interface for the OpenStack services. | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/standalone_deployment_guide/all-in-one-openstack-installation |
Chapter 17. Day 2 operations for telco core CNF clusters | Chapter 17. Day 2 operations for telco core CNF clusters 17.1. Upgrading telco core CNF clusters 17.1.1. Upgrading a telco core CNF cluster OpenShift Container Platform has long-term support or extended update support (EUS) on all even releases and update paths between EUS releases. You can update from one EUS version to the next EUS version. It is also possible to update between y-stream and z-stream versions. 17.1.1.1. Cluster updates for telco core CNF clusters Updating your cluster is a critical task that ensures that bugs and potential security vulnerabilities are patched. Often, updates to cloud-native network functions (CNF) require additional functionality from the platform that comes when you update the cluster version. You also must update the cluster periodically to ensure that the cluster platform version is supported. You can minimize the effort required to stay current with updates by keeping up-to-date with EUS releases and upgrading to select important z-stream releases only. Note The update path for the cluster can vary depending on the size and topology of the cluster. The update procedures described here are valid for most clusters from 3-node clusters up to the largest size clusters certified by the telco scale team. This includes some scenarios for mixed-workload clusters. The following update scenarios are described: Control Plane Only updates Y-stream updates Z-stream updates Important Control Plane Only updates were previously known as EUS-to-EUS updates. Control Plane Only updates are only viable between even-numbered minor versions of OpenShift Container Platform. 17.1.2. Verifying cluster API versions between update versions APIs change over time as components are updated. It is important to verify that cloud-native network function (CNF) APIs are compatible with the updated cluster version. 17.1.2.1. OpenShift Container Platform API compatibility When considering what z-stream release to update to as part of a new y-stream update, you must be sure that all the patches that are in the z-stream version you are moving from are in the new z-stream version. If the version you update to does not have all the required patches, the built-in compatibility of Kubernetes is broken. For example, if the cluster version is 4.15.32, you must update to a 4.16 z-stream release that has all of the patches that are applied to 4.15.32. 17.1.2.1.1. About Kubernetes version skew Each cluster Operator supports specific API versions. Kubernetes APIs evolve over time, and newer versions can be deprecated or change existing APIs. This is referred to as "version skew". For every new release, you must review the API changes. The APIs might be compatible across several releases of an Operator, but compatibility is not guaranteed. To mitigate against problems that arise from version skew, follow a well-defined update strategy. Additional resources Understanding API tiers Kubernetes version skew policy 17.1.2.2. Determining the cluster version update path Use the Red Hat OpenShift Container Platform Update Graph tool to determine if the path is valid for the z-stream release you want to update to. Verify the update with your Red Hat Technical Account Manager to ensure that the update path is valid for telco implementations. Important The <4.y+1.z> or <4.y+2.z> version that you update to must have the same patch level as the <4.y.z> release you are updating from. 
The OpenShift update process mandates that if a fix is present in a specific <4.y.z> release, then the that fix must be present in the <4.y+1.z> release that you update to. Figure 17.1. Bug fix backporting and the update graph Important OpenShift development has a strict backport policy that prevents regressions. For example, a bug must be fixed in 4.16.z before it is fixed in 4.15.z. This means that the update graph does not allow for updates to chronologically older releases even if the minor version is greater, for example, updating from 4.15.24 to 4.16.2. Additional resources Understanding update channels and releases 17.1.2.3. Selecting the target release Use the Red Hat OpenShift Container Platform Update Graph or the cincinnati-graph-data repository to determine what release to update to. 17.1.2.3.1. Determining what z-stream updates are available Before you can update to a new z-stream release, you need to know what versions are available. Note You do not need to change the channel when performing a z-stream update. Procedure Determine which z-stream releases are available. Run the following command: USD oc adm upgrade Example output Cluster version is 4.14.34 Upstream is unset, so the cluster will use an appropriate default. Channel: stable-4.14 (available channels: candidate-4.14, candidate-4.15, eus-4.14, eus-4.16, fast-4.14, fast-4.15, stable-4.14, stable-4.15) Recommended updates: VERSION IMAGE 4.14.37 quay.io/openshift-release-dev/ocp-release@sha256:14e6ba3975e6c73b659fa55af25084b20ab38a543772ca70e184b903db73092b 4.14.36 quay.io/openshift-release-dev/ocp-release@sha256:4bc4925e8028158e3f313aa83e59e181c94d88b4aa82a3b00202d6f354e8dfed 4.14.35 quay.io/openshift-release-dev/ocp-release@sha256:883088e3e6efa7443b0ac28cd7682c2fdbda889b576edad626769bf956ac0858 17.1.2.3.2. Changing the channel for a Control Plane Only update You must change the channel to the required version for a Control Plane Only update. Note You do not need to change the channel when performing a z-stream update. Procedure Determine the currently configured update channel: USD oc get clusterversion -o=jsonpath='{.items[*].spec}' | jq Example output { "channel": "stable-4.14", "clusterID": "01eb9a57-2bfb-4f50-9d37-dc04bd5bac75" } Change the channel to point to the new channel you want to update to: USD oc adm upgrade channel eus-4.16 Confirm the updated channel: USD oc get clusterversion -o=jsonpath='{.items[*].spec}' | jq Example output { "channel": "eus-4.16", "clusterID": "01eb9a57-2bfb-4f50-9d37-dc04bd5bac75" } 17.1.2.3.2.1. Changing the channel for an early EUS to EUS update The update path to a brand new release of OpenShift Container Platform is not available in either the EUS channel or the stable channel until 45 to 90 days after the initial GA of a minor release. To begin testing an update to a new release, you can use the fast channel. Procedure Change the channel to fast-<y+1> . For example, run the following command: USD oc adm upgrade channel fast-4.16 Check the update path from the new channel. Run the following command: USD oc adm upgrade Cluster version is 4.15.33 Upgradeable=False Reason: AdminAckRequired Message: Kubernetes 1.28 and therefore OpenShift 4.16 remove several APIs which require admin consideration. Please see the knowledge article https://access.redhat.com/articles/6958394 for details and instructions. Upstream is unset, so the cluster will use an appropriate default. 
Channel: fast-4.16 (available channels: candidate-4.15, candidate-4.16, eus-4.15, eus-4.16, fast-4.15, fast-4.16, stable-4.15, stable-4.16) Recommended updates: VERSION IMAGE 4.16.14 quay.io/openshift-release-dev/ocp-release@sha256:6618dd3c0f5 4.16.13 quay.io/openshift-release-dev/ocp-release@sha256:7a72abc3 4.16.12 quay.io/openshift-release-dev/ocp-release@sha256:1c8359fc2 4.16.11 quay.io/openshift-release-dev/ocp-release@sha256:bc9006febfe 4.16.10 quay.io/openshift-release-dev/ocp-release@sha256:dece7b61b1 4.15.36 quay.io/openshift-release-dev/ocp-release@sha256:c31a56d19 4.15.35 quay.io/openshift-release-dev/ocp-release@sha256:f21253 4.15.34 quay.io/openshift-release-dev/ocp-release@sha256:2dd69c5 Follow the update procedure to get to version 4.16 (<y+1> from version 4.15) Note You can keep your worker nodes paused between EUS releases even if you are using the fast channel. When you get to the required <y+1> release, change the channel again, this time to fast-<y+2> . Follow the EUS update procedure to get to the required <y+2> release. 17.1.2.3.3. Changing the channel for a y-stream update In a y-stream update you change the channel to the release channel. Note Use the stable or EUS release channels for production clusters. Procedure Change the update channel: USD oc adm upgrade channel stable-4.15 Check the update path from the new channel. Run the following command: USD oc adm upgrade Example output Cluster version is 4.14.34 Upgradeable=False Reason: AdminAckRequired Message: Kubernetes 1.27 and therefore OpenShift 4.15 remove several APIs which require admin consideration. Please see the knowledge article https://access.redhat.com/articles/6958394 for details and instructions. Upstream is unset, so the cluster will use an appropriate default. Channel: stable-4.15 (available channels: candidate-4.14, candidate-4.15, eus-4.14, eus-4.15, fast-4.14, fast-4.15, stable-4.14, stable-4.15) Recommended updates: VERSION IMAGE 4.15.33 quay.io/openshift-release-dev/ocp-release@sha256:7142dd4b560 4.15.32 quay.io/openshift-release-dev/ocp-release@sha256:cda8ea5b13dc9 4.15.31 quay.io/openshift-release-dev/ocp-release@sha256:07cf61e67d3eeee 4.15.30 quay.io/openshift-release-dev/ocp-release@sha256:6618dd3c0f5 4.15.29 quay.io/openshift-release-dev/ocp-release@sha256:7a72abc3 4.15.28 quay.io/openshift-release-dev/ocp-release@sha256:1c8359fc2 4.15.27 quay.io/openshift-release-dev/ocp-release@sha256:bc9006febfe 4.15.26 quay.io/openshift-release-dev/ocp-release@sha256:dece7b61b1 4.14.38 quay.io/openshift-release-dev/ocp-release@sha256:c93914c62d7 4.14.37 quay.io/openshift-release-dev/ocp-release@sha256:c31a56d19 4.14.36 quay.io/openshift-release-dev/ocp-release@sha256:f21253 4.14.35 quay.io/openshift-release-dev/ocp-release@sha256:2dd69c5 17.1.3. Preparing the telco core cluster platform for update Typically, telco clusters run on bare-metal hardware. Often you must update the firmware to take on important security fixes, take on new functionality, or maintain compatibility with the new release of OpenShift Container Platform. 17.1.3.1. Ensuring the host firmware is compatible with the update You are responsible for the firmware versions that you run in your clusters. Updating host firmware is not a part of the OpenShift Container Platform update process. It is not recommended to update firmware in conjunction with the OpenShift Container Platform version. Important Hardware vendors advise that it is best to apply the latest certified firmware version for the specific hardware that you are running. 
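On installer-provisioned bare-metal hosts, one way to take stock of the firmware levels currently reported before planning any updates is to query the BareMetalHost inventory. This is a rough sketch only: it assumes the metal3 hardware inventory is populated, and the exact status field paths can differ between versions, so fall back to oc describe if the custom columns come back empty.

# List the BIOS firmware details reported for each bare-metal host.
oc get bmh -n openshift-machine-api \
  -o custom-columns=NAME:.metadata.name,BIOS_VENDOR:.status.hardware.firmware.bios.vendor,BIOS_VERSION:.status.hardware.firmware.bios.version

# Fallback: search the full host description for firmware entries.
oc describe bmh -n openshift-machine-api | grep -iA 3 firmware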
For telco use cases, always verify firmware updates in test environments before applying them in production. The high-throughput nature of telco CNF workloads can be adversely affected by sub-optimal host firmware. You should thoroughly test new firmware updates to ensure that they work as expected with the current version of OpenShift Container Platform. Ideally, you test the latest firmware version with the target OpenShift Container Platform update version. 17.1.3.2. Ensuring that layered products are compatible with the update Verify that all layered products run on the version of OpenShift Container Platform that you are updating to before you begin the update. This generally includes all Operators. Procedure Verify the currently installed Operators in the cluster. For example, run the following command: USD oc get csv -A Example output NAMESPACE NAME DISPLAY VERSION REPLACES PHASE gitlab-operator-kubernetes.v0.17.2 GitLab 0.17.2 gitlab-operator-kubernetes.v0.17.1 Succeeded openshift-operator-lifecycle-manager packageserver Package Server 0.19.0 Succeeded Check that Operators that you install with OLM are compatible with the update version. Operators that are installed with the Operator Lifecycle Manager (OLM) are not part of the standard cluster Operators set. Use the Operator Update Information Checker to understand if you must update an Operator after each y-stream update or if you can wait until you have fully updated to the next EUS release. Tip You can also use the Operator Update Information Checker to see what versions of OpenShift Container Platform are compatible with specific releases of an Operator. Check that Operators that you install outside of OLM are compatible with the update version. For any Operators that are not directly supported by Red Hat, contact the Operator vendor to ensure release compatibility. Some Operators are compatible with several releases of OpenShift Container Platform. You might not need to update the Operators until after you complete the cluster update. See "Updating the worker nodes" for more information. See "Updating all the OLM Operators" for information about updating an Operator after performing the first y-stream control plane update. Additional resources Updating the worker nodes Updating all the OLM Operators 17.1.3.3. Applying MachineConfigPool labels to nodes before the update Prepare MachineConfigPool ( mcp ) node labels to group nodes together in groups of roughly 8 to 10 nodes. With mcp groups, you can reboot groups of nodes independently from the rest of the cluster. You use the mcp node labels to pause and unpause the set of nodes during the update process so that you can do the update and reboot at a time of your choosing. 17.1.3.3.1. Staggering the cluster update Sometimes there are problems during the update. Often the problem is related to hardware failure or nodes needing to be reset. Using mcp node labels, you can update nodes in stages by pausing the update at critical moments, tracking paused and unpaused nodes as you proceed. When a problem occurs, you use the nodes that are in an unpaused state to ensure that there are enough nodes running to keep all application pods running. 17.1.3.3.2. Dividing worker nodes into MachineConfigPool groups How you divide worker nodes into mcp groups can vary depending on how many nodes are in the cluster or how many nodes you assign to a node role. By default, the 2 roles in a cluster are control plane and worker. 
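Before deciding how to split the workers, it helps to know how many nodes carry each default role. The following sketch assumes that only the standard role labels are in use; clusters with custom roles need the selectors adjusted.

# Count the nodes that carry the default control plane and worker role labels.
oc get nodes -l node-role.kubernetes.io/control-plane= --no-headers | wc -l
oc get nodes -l node-role.kubernetes.io/worker= --no-headers | wc -l

# Show the two default pools that exist before any custom mcp groups are added.
oc get mcp master worker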
In clusters that run telco workloads, you can further split the worker nodes between CNF control plane and CNF data plane roles. Add mcp role labels that split the worker nodes into each of these two groups. Note Larger clusters can have as many as 100 worker nodes in the CNF control plane role. No matter how many nodes there are in the cluster, keep each MachineConfigPool group to around 10 nodes. This allows you to control how many nodes are taken down at a time. With multiple MachineConfigPool groups, you can unpause several groups at a time to accelerate the update, or separate the update over 2 or more maintenance windows. Example cluster with 15 worker nodes Consider a cluster with 15 worker nodes: 10 worker nodes are CNF control plane nodes. 5 worker nodes are CNF data plane nodes. Split the CNF control plane and data plane worker node roles into at least 2 mcp groups each. Having 2 mcp groups per role means that you can have one set of nodes that are not affected by the update. Example cluster with 6 worker nodes Consider a cluster with 6 worker nodes: Split the worker nodes into 3 mcp groups of 2 nodes each. Upgrade one of the mcp groups. Allow the updated nodes to sit through a day to allow for verification of CNF compatibility before completing the update on the other 4 nodes. Important The process and pace at which you unpause the mcp groups is determined by your CNF applications and configuration. If your CNF pod can handle being scheduled across nodes in a cluster, you can unpause several mcp groups at a time and set the MaxUnavailable in the mcp custom resource (CR) to as high as 50%. This allows up to half of the nodes in an mcp group to restart and get updated. 17.1.3.3.3. Reviewing configured cluster MachineConfigPool roles Review the currently configured MachineConfigPool roles in the cluster. Procedure Get the currently configured mcp groups in the cluster: USD oc get mcp Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-bere83 True False False 3 3 3 0 25d worker rendered-worker-245c4f True False False 2 2 2 0 25d Compare the list of mcp roles to list of nodes in the cluster: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ctrl-plane-0 Ready control-plane,master 39d v1.27.15+6147456 ctrl-plane-1 Ready control-plane,master 39d v1.27.15+6147456 ctrl-plane-2 Ready control-plane,master 39d v1.27.15+6147456 worker-0 Ready worker 39d v1.27.15+6147456 worker-1 Ready worker 39d v1.27.15+6147456 Note When you apply an mcp group change, the node roles are updated. Determine how you want to separate the worker nodes into mcp groups. 17.1.3.3.4. Creating MachineConfigPool groups for the cluster Creating mcp groups is a 2-step process: Add an mcp label to the nodes in the cluster Apply an mcp CR to the cluster that organizes the nodes based on their labels Procedure Label the nodes so that they can be put into mcp groups. Run the following commands: USD oc label node worker-0 node-role.kubernetes.io/mcp-1= USD oc label node worker-1 node-role.kubernetes.io/mcp-2= The mcp-1 and mcp-2 labels are applied to the nodes. 
For example: Example output NAME STATUS ROLES AGE VERSION ctrl-plane-0 Ready control-plane,master 39d v1.27.15+6147456 ctrl-plane-1 Ready control-plane,master 39d v1.27.15+6147456 ctrl-plane-2 Ready control-plane,master 39d v1.27.15+6147456 worker-0 Ready mcp-1,worker 39d v1.27.15+6147456 worker-1 Ready mcp-2,worker 39d v1.27.15+6147456 Create YAML custom resources (CRs) that apply the labels as mcp CRs in the cluster. Save the following YAML in the mcps.yaml file: --- apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: mcp-2 spec: machineConfigSelector: matchExpressions: - { key: machineconfiguration.openshift.io/role, operator: In, values: [worker,mcp-2] } nodeSelector: matchLabels: node-role.kubernetes.io/mcp-2: "" --- apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: mcp-1 spec: machineConfigSelector: matchExpressions: - { key: machineconfiguration.openshift.io/role, operator: In, values: [worker,mcp-1] } nodeSelector: matchLabels: node-role.kubernetes.io/mcp-1: "" Create the MachineConfigPool resources: USD oc apply -f mcps.yaml Example output machineconfigpool.machineconfiguration.openshift.io/mcp-2 created Verification Monitor the MachineConfigPool resources as they are applied in the cluster. After you apply the mcp resources, the nodes are added into the new machine config pools. This takes a few minutes. Note The nodes do not reboot while being added into the mcp groups. The original worker and master mcp groups remain unchanged. Check the status of the new mcp resources: USD oc get mcp Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-be3e83 True False False 3 3 3 0 25d mcp-1 rendered-mcp-1-2f4c4f False True True 1 0 0 0 10s mcp-2 rendered-mcp-2-2r4s1f False True True 1 0 0 0 10s worker rendered-worker-23fc4f False True True 0 0 0 2 25d Eventually, the resources are fully applied: NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-be3e83 True False False 3 3 3 0 25d mcp-1 rendered-mcp-1-2f4c4f True False False 1 1 1 0 7m33s mcp-2 rendered-mcp-2-2r4s1f True False False 1 1 1 0 51s worker rendered-worker-23fc4f True False False 0 0 0 0 25d Additional resources Performing a Control Plane Only update Factors affecting update duration Ensuring that CNF workloads run uninterrupted with pod disruption budgets Ensuring that pods do not run on the same cluster node 17.1.3.4. Telco deployment environment considerations In telco environments, most clusters are in disconnected networks. To update clusters in these environments, you must update your offline image repository. Additional resources API compatibility guidelines Mirroring images for a disconnected installation by using the oc-mirror plugin v2 17.1.3.5. Preparing the cluster platform for update Before you update the cluster, perform some basic checks and verifications to make sure that the cluster is ready for the update. Procedure Verify that there are no failed or in progress pods in the cluster by running the following command: USD oc get pods -A | grep -E -vi 'complete|running' Note You might have to run this command more than once if there are pods that are in a pending state. 
Verify that all nodes in the cluster are available: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ctrl-plane-0 Ready control-plane,master 32d v1.27.15+6147456 ctrl-plane-1 Ready control-plane,master 32d v1.27.15+6147456 ctrl-plane-2 Ready control-plane,master 32d v1.27.15+6147456 worker-0 Ready mcp-1,worker 32d v1.27.15+6147456 worker-1 Ready mcp-2,worker 32d v1.27.15+6147456 Verify that all bare-metal nodes are provisioned and ready. USD oc get bmh -n openshift-machine-api Example output NAME STATE CONSUMER ONLINE ERROR AGE ctrl-plane-0 unmanaged cnf-58879-master-0 true 33d ctrl-plane-1 unmanaged cnf-58879-master-1 true 33d ctrl-plane-2 unmanaged cnf-58879-master-2 true 33d worker-0 unmanaged cnf-58879-worker-0-45879 true 33d worker-1 progressing cnf-58879-worker-0-dszsh false 1d 1 1 An error occurred while provisioning the worker-1 node. Verification Verify that all cluster Operators are ready: USD oc get co Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE authentication 4.14.34 True False False 17h baremetal 4.14.34 True False False 32d ... service-ca 4.14.34 True False False 32d storage 4.14.34 True False False 32d Additional resources Investigating pod issues 17.1.4. Configuring CNF pods before updating the telco core CNF cluster Follow the guidance in Red Hat best practices for Kubernetes when developing cloud-native network functions (CNFs) to ensure that the cluster can schedule pods during an update. Important Always deploy pods in groups by using Deployment resources. Deployment resources spread the workload across all of the available pods ensuring there is no single point of failure. When a pod that is managed by a Deployment resource is deleted, a new pod takes its place automatically. Additional resources Red Hat best practices for Kubernetes 17.1.4.1. Ensuring that CNF workloads run uninterrupted with pod disruption budgets You can configure the minimum number of pods in a deployment to allow the CNF workload to run uninterrupted by setting a pod disruption budget in a PodDisruptionBudget custom resource (CR) that you apply. Be careful when setting this value; setting it improperly can cause an update to fail. For example, if you have 4 pods in a deployment and you set the pod disruption budget to 4, the cluster scheduler keeps 4 pods running at all times - no pods can be scaled down. Instead, set the pod disruption budget to 2, letting 2 of the 4 pods be scheduled as down. Then, the worker nodes where those pods are located can be rebooted. Note Setting the pod disruption budget to 2 does not mean that your deployment runs on only 2 pods for a period of time, for example, during an update. The cluster scheduler creates 2 new pods to replace the 2 older pods. However, there is short period of time between the new pods coming online and the old pods being deleted. Additional resources Specifying the number of pods that must be up with pod disruption budgets Pod preemption and other scheduler settings 17.1.4.2. Ensuring that pods do not run on the same cluster node High availability in Kubernetes requires duplicate processes to be running on separate nodes in the cluster. This ensures that the application continues to run even if one node becomes unavailable. In OpenShift Container Platform, processes can be automatically duplicated in separate pods in a deployment. You configure anti-affinity in the Pod spec to ensure that the pods in a deployment do not run on the same cluster node. 
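A minimal sketch that combines the two settings discussed above, a pod disruption budget and a required anti-affinity rule, in one manifest. The cnf-example namespace, the cnf-app name, the image, and the replica count are illustrative assumptions rather than values from this guide, and the required anti-affinity rule needs at least as many schedulable worker nodes as replicas.

# Create the example namespace if it does not already exist.
oc create namespace cnf-example --dry-run=client -o yaml | oc apply -f -

cat <<'EOF' | oc apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cnf-app
  namespace: cnf-example
spec:
  replicas: 4
  selector:
    matchLabels:
      app: cnf-app
  template:
    metadata:
      labels:
        app: cnf-app
    spec:
      # Keep replicas on separate nodes so that one node reboot never
      # takes down more than one pod of this workload.
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: cnf-app
            topologyKey: kubernetes.io/hostname
      containers:
      - name: cnf-app
        image: registry.example.com/cnf-app:latest
        ports:
        - containerPort: 8080
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: cnf-app-pdb
  namespace: cnf-example
spec:
  # With 4 replicas, keeping 2 available lets 2 pods drain at a time
  # while their nodes reboot.
  minAvailable: 2
  selector:
    matchLabels:
      app: cnf-app
EOF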
During an update, setting pod anti-affinity ensures that pods are distributed evenly across nodes in the cluster. This means that node reboots are easier during an update. For example, if there are 4 pods from a single deployment on a node, and the pod disruption budget is set to only allow 1 pod to be deleted at a time, then it will take 4 times as long for that node to reboot. Setting pod anti-affinity spreads pods across the cluster to prevent such occurrences. Additional resources Configuring a pod affinity rule 17.1.4.3. Application liveness, readiness, and startup probes You can use liveness, readiness and startup probes to check the health of your live application containers before you schedule an update. These are very useful tools to use with pods that are dependent upon keeping state for their application containers. Liveness health check Determines if a container is running. If the liveness probe fails for a container, the pod responds based on the restart policy. Readiness probe Determines if a container is ready to accept service requests. If the readiness probe fails for a container, the kubelet removes the container from the list of available service endpoints. Startup probe A startup probe indicates whether the application within a container is started. All other probes are disabled until the startup succeeds. If the startup probe does not succeed, the kubelet kills the container, and the container is subject to the pod restartPolicy setting. Additional resources Understanding health checks 17.1.5. Before you update the telco core CNF cluster Before you start the cluster update, you must pause worker nodes, back up the etcd database, and do a final cluster health check before proceeding. 17.1.5.1. Pausing worker nodes before the update You must pause the worker nodes before you proceed with the update. In the following example, there are 2 mcp groups, mcp-1 and mcp-2 . You patch the spec.paused field to true for each of these MachineConfigPool groups. Procedure Patch the mcp CRs to pause the nodes and drain and remove the pods from those nodes by running the following command: USD oc patch mcp/mcp-1 --type merge --patch '{"spec":{"paused":true}}' USD oc patch mcp/mcp-2 --type merge --patch '{"spec":{"paused":true}}' Get the status of the paused mcp groups: USD oc get mcp -o json | jq -r '["MCP","Paused"], ["---","------"], (.items[] | [(.metadata.name), (.spec.paused)]) | @tsv' | grep -v worker Example output MCP Paused --- ------ master false mcp-1 true mcp-2 true Note The default control plane and worker mcp groups are not changed during an update. 17.1.5.2. Backup the etcd database before you proceed with the update You must backup the etcd database before you proceed with the update. 17.1.5.2.1. Backing up etcd data Follow these steps to back up etcd data by creating an etcd snapshot and backing up the resources for the static pods. This backup can be saved and used at a later time if you need to restore etcd. Important Only save a backup from a single control plane host. Do not take a backup from each control plane host in the cluster. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have checked whether the cluster-wide proxy is enabled. Tip You can check whether the proxy is enabled by reviewing the output of oc get proxy cluster -o yaml . The proxy is enabled if the httpProxy , httpsProxy , and noProxy fields have values set. 
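A more compact variant of the check in the tip above; this sketch prints only the proxy fields, so empty values indicate that no cluster-wide proxy is in effect.

# Print the effective cluster-wide proxy settings.
oc get proxy cluster -o jsonpath='httpProxy={.status.httpProxy}{"\n"}httpsProxy={.status.httpsProxy}{"\n"}noProxy={.status.noProxy}{"\n"}'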
Procedure Start a debug session as root for a control plane node: USD oc debug --as-root node/<node_name> Change your root directory to /host in the debug shell: sh-4.4# chroot /host If the cluster-wide proxy is enabled, export the NO_PROXY , HTTP_PROXY , and HTTPS_PROXY environment variables by running the following commands: USD export HTTP_PROXY=http://<your_proxy.example.com>:8080 USD export HTTPS_PROXY=https://<your_proxy.example.com>:8080 USD export NO_PROXY=<example.com> Run the cluster-backup.sh script in the debug shell and pass in the location to save the backup to. Tip The cluster-backup.sh script is maintained as a component of the etcd Cluster Operator and is a wrapper around the etcdctl snapshot save command. sh-4.4# /usr/local/bin/cluster-backup.sh /home/core/assets/backup Example script output found latest kube-apiserver: /etc/kubernetes/static-pod-resources/kube-apiserver-pod-6 found latest kube-controller-manager: /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-7 found latest kube-scheduler: /etc/kubernetes/static-pod-resources/kube-scheduler-pod-6 found latest etcd: /etc/kubernetes/static-pod-resources/etcd-pod-3 ede95fe6b88b87ba86a03c15e669fb4aa5bf0991c180d3c6895ce72eaade54a1 etcdctl version: 3.4.14 API version: 3.4 {"level":"info","ts":1624647639.0188997,"caller":"snapshot/v3_snapshot.go:119","msg":"created temporary db file","path":"/home/core/assets/backup/snapshot_2021-06-25_190035.db.part"} {"level":"info","ts":"2021-06-25T19:00:39.030Z","caller":"clientv3/maintenance.go:200","msg":"opened snapshot stream; downloading"} {"level":"info","ts":1624647639.0301006,"caller":"snapshot/v3_snapshot.go:127","msg":"fetching snapshot","endpoint":"https://10.0.0.5:2379"} {"level":"info","ts":"2021-06-25T19:00:40.215Z","caller":"clientv3/maintenance.go:208","msg":"completed snapshot read; closing"} {"level":"info","ts":1624647640.6032252,"caller":"snapshot/v3_snapshot.go:142","msg":"fetched snapshot","endpoint":"https://10.0.0.5:2379","size":"114 MB","took":1.584090459} {"level":"info","ts":1624647640.6047094,"caller":"snapshot/v3_snapshot.go:152","msg":"saved","path":"/home/core/assets/backup/snapshot_2021-06-25_190035.db"} Snapshot saved at /home/core/assets/backup/snapshot_2021-06-25_190035.db {"hash":3866667823,"revision":31407,"totalKey":12828,"totalSize":114446336} snapshot db and kube resources are successfully saved to /home/core/assets/backup In this example, two files are created in the /home/core/assets/backup/ directory on the control plane host: snapshot_<datetimestamp>.db : This file is the etcd snapshot. The cluster-backup.sh script confirms its validity. static_kuberesources_<datetimestamp>.tar.gz : This file contains the resources for the static pods. If etcd encryption is enabled, it also contains the encryption keys for the etcd snapshot. Note If etcd encryption is enabled, it is recommended to store this second file separately from the etcd snapshot for security reasons. However, this file is required to restore from the etcd snapshot. Keep in mind that etcd encryption only encrypts values, not keys. This means that resource types, namespaces, and object names are unencrypted. 17.1.5.2.2. Creating a single etcd backup Follow these steps to create a single etcd backup by creating and applying a custom resource (CR). Prerequisites You have access to the cluster as a user with the cluster-admin role. You have access to the OpenShift CLI ( oc ). 
Procedure If dynamically-provisioned storage is available, complete the following steps to create a single automated etcd backup: Create a persistent volume claim (PVC) named etcd-backup-pvc.yaml with contents such as the following example: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: etcd-backup-pvc namespace: openshift-etcd spec: accessModes: - ReadWriteOnce resources: requests: storage: 200Gi 1 volumeMode: Filesystem 1 The amount of storage available to the PVC. Adjust this value for your requirements. Apply the PVC by running the following command: USD oc apply -f etcd-backup-pvc.yaml Verify the creation of the PVC by running the following command: USD oc get pvc Example output NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE etcd-backup-pvc Bound 51s Note Dynamic PVCs stay in the Pending state until they are mounted. Create a CR file named etcd-single-backup.yaml with contents such as the following example: apiVersion: operator.openshift.io/v1alpha1 kind: EtcdBackup metadata: name: etcd-single-backup namespace: openshift-etcd spec: pvcName: etcd-backup-pvc 1 1 The name of the PVC to save the backup to. Adjust this value according to your environment. Apply the CR to start a single backup: USD oc apply -f etcd-single-backup.yaml If dynamically-provisioned storage is not available, complete the following steps to create a single automated etcd backup: Create a StorageClass CR file named etcd-backup-local-storage.yaml with the following contents: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: etcd-backup-local-storage provisioner: kubernetes.io/no-provisioner volumeBindingMode: Immediate Apply the StorageClass CR by running the following command: USD oc apply -f etcd-backup-local-storage.yaml Create a PV named etcd-backup-pv-fs.yaml with contents such as the following example: apiVersion: v1 kind: PersistentVolume metadata: name: etcd-backup-pv-fs spec: capacity: storage: 100Gi 1 volumeMode: Filesystem accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Retain storageClassName: etcd-backup-local-storage local: path: /mnt nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - <example_master_node> 2 1 The amount of storage available to the PV. Adjust this value for your requirements. 2 Replace this value with the node to attach this PV to. Verify the creation of the PV by running the following command: USD oc get pv Example output NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE etcd-backup-pv-fs 100Gi RWO Retain Available etcd-backup-local-storage 10s Create a PVC named etcd-backup-pvc.yaml with contents such as the following example: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: etcd-backup-pvc namespace: openshift-etcd spec: accessModes: - ReadWriteOnce volumeMode: Filesystem resources: requests: storage: 10Gi 1 1 The amount of storage available to the PVC. Adjust this value for your requirements. Apply the PVC by running the following command: USD oc apply -f etcd-backup-pvc.yaml Create a CR file named etcd-single-backup.yaml with contents such as the following example: apiVersion: operator.openshift.io/v1alpha1 kind: EtcdBackup metadata: name: etcd-single-backup namespace: openshift-etcd spec: pvcName: etcd-backup-pvc 1 1 The name of the persistent volume claim (PVC) to save the backup to. Adjust this value according to your environment. 
Apply the CR to start a single backup: USD oc apply -f etcd-single-backup.yaml Additional resources Backing up etcd 17.1.5.3. Checking the cluster health You should check the cluster health often during the update. Check for the node status, cluster Operators status and failed pods. Procedure Check the status of the cluster Operators by running the following command: USD oc get co Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE authentication 4.14.34 True False False 4d22h baremetal 4.14.34 True False False 4d22h cloud-controller-manager 4.14.34 True False False 4d23h cloud-credential 4.14.34 True False False 4d23h cluster-autoscaler 4.14.34 True False False 4d22h config-operator 4.14.34 True False False 4d22h console 4.14.34 True False False 4d22h ... service-ca 4.14.34 True False False 4d22h storage 4.14.34 True False False 4d22h Check the status of the cluster nodes: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ctrl-plane-0 Ready control-plane,master 4d22h v1.27.15+6147456 ctrl-plane-1 Ready control-plane,master 4d22h v1.27.15+6147456 ctrl-plane-2 Ready control-plane,master 4d22h v1.27.15+6147456 worker-0 Ready mcp-1,worker 4d22h v1.27.15+6147456 worker-1 Ready mcp-2,worker 4d22h v1.27.15+6147456 Check that there are no in-progress or failed pods. There should be no pods returned when you run the following command. USD oc get po -A | grep -E -iv 'running|complete' 17.1.6. Completing the Control Plane Only cluster update Follow these steps to perform the Control Plane Only cluster update and monitor the update through to completion. Important Control Plane Only updates were previously known as EUS-to-EUS updates. Control Plane Only updates are only viable between even-numbered minor versions of OpenShift Container Platform. 17.1.6.1. Acknowledging the Control Plane Only or y-stream update When you update to all versions from 4.11 and later, you must manually acknowledge that the update can continue. Important Before you acknowledge the update, verify that there are no Kubernetes APIs in use that are removed in the version you are updating to. For example, in OpenShift Container Platform 4.17, there are no API removals. See "Kubernetes API removals" for more information. Procedure Run the following command: USD oc -n openshift-config patch cm admin-acks --patch '{"data":{"ack-<update_version_from>-kube-<kube_api_version>-api-removals-in-<update_version_to>":"true"}}' --type=merge where: <update_version_from> Is the cluster version you are moving from, for example, 4.14 . <kube_api_version> Is kube API version, for example, 1.28 . <update_version_to> Is the cluster version you are moving to, for example, 4.15 . Verification Verify the update. Run the following command: USD oc get configmap admin-acks -n openshift-config -o json | jq .data Example output { "ack-4.14-kube-1.28-api-removals-in-4.15": "true", "ack-4.15-kube-1.29-api-removals-in-4.16": "true" } Note In this example, the cluster is updated from version 4.14 to 4.15, and then from 4.15 to 4.16 in a Control Plane Only update. Additional resources Kubernetes API removals 17.1.6.2. Starting the cluster update When updating from one y-stream release to the , you must ensure that the intermediate z-stream releases are also compatible. Note You can verify that you are updating to a viable release by running the oc adm upgrade command. The oc adm upgrade command lists the compatible update releases. 
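The same release information that oc adm upgrade prints can also be read from the ClusterVersion resource, which can be convenient for scripted pre-update checks. This is a sketch only: the lists are empty when no updates are currently recommended, and conditional updates are tracked in a separate field.

# List the recommended update versions known to the cluster.
oc get clusterversion version -o jsonpath='{range .status.availableUpdates[*]}{.version}{"\n"}{end}'

# List conditional updates, which carry known risks and are not recommended by default.
oc get clusterversion version -o jsonpath='{range .status.conditionalUpdates[*]}{.release.version}{"\n"}{end}'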
Procedure Start the update: USD oc adm upgrade --to=4.15.33 Important Control Plane Only update : Make sure you point to the interim <y+1> release path Y-stream update - Make sure you use the correct <y.z> release that follows the Kubernetes version skew policy . Z-stream update - Verify that there are no problems moving to that specific release Example output Requested update to 4.15.33 1 1 The Requested update value changes depending on your particular update. Additional resources Selecting the target release 17.1.6.3. Monitoring the cluster update You should check the cluster health often during the update. Check for the node status, cluster Operators status and failed pods. Procedure Monitor the cluster update. For example, to monitor the cluster update from version 4.14 to 4.15, run the following command: USD watch "oc get clusterversion; echo; oc get co | head -1; oc get co | grep 4.14; oc get co | grep 4.15; echo; oc get no; echo; oc get po -A | grep -E -iv 'running|complete'" Example output NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.14.34 True True 4m6s Working towards 4.15.33: 111 of 873 done (12% complete), waiting on kube-apiserver NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE authentication 4.14.34 True False False 4d22h baremetal 4.14.34 True False False 4d23h cloud-controller-manager 4.14.34 True False False 4d23h cloud-credential 4.14.34 True False False 4d23h cluster-autoscaler 4.14.34 True False False 4d23h console 4.14.34 True False False 4d22h ... storage 4.14.34 True False False 4d23h config-operator 4.15.33 True False False 4d23h etcd 4.15.33 True False False 4d23h NAME STATUS ROLES AGE VERSION ctrl-plane-0 Ready control-plane,master 4d23h v1.27.15+6147456 ctrl-plane-1 Ready control-plane,master 4d23h v1.27.15+6147456 ctrl-plane-2 Ready control-plane,master 4d23h v1.27.15+6147456 worker-0 Ready mcp-1,worker 4d23h v1.27.15+6147456 worker-1 Ready mcp-2,worker 4d23h v1.27.15+6147456 NAMESPACE NAME READY STATUS RESTARTS AGE openshift-marketplace redhat-marketplace-rf86t 0/1 ContainerCreating 0 0s Verification During the update the watch command cycles through one or several of the cluster Operators at a time, providing a status of the Operator update in the MESSAGE column. When the cluster Operators update process is complete, each control plane nodes is rebooted, one at a time. Note During this part of the update, messages are reported that state cluster Operators are being updated again or are in a degraded state. This is because the control plane node is offline while it reboots nodes. As soon as the last control plane node reboot is complete, the cluster version is displayed as updated. When the control plane update is complete a message such as the following is displayed. This example shows an update completed to the intermediate y-stream release. NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.15.33 True False 28m Cluster version is 4.15.33 NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE authentication 4.15.33 True False False 5d baremetal 4.15.33 True False False 5d cloud-controller-manager 4.15.33 True False False 5d1h cloud-credential 4.15.33 True False False 5d1h cluster-autoscaler 4.15.33 True False False 5d config-operator 4.15.33 True False False 5d console 4.15.33 True False False 5d ... 
service-ca 4.15.33 True False False 5d storage 4.15.33 True False False 5d NAME STATUS ROLES AGE VERSION ctrl-plane-0 Ready control-plane,master 5d v1.28.13+2ca1a23 ctrl-plane-1 Ready control-plane,master 5d v1.28.13+2ca1a23 ctrl-plane-2 Ready control-plane,master 5d v1.28.13+2ca1a23 worker-0 Ready mcp-1,worker 5d v1.28.13+2ca1a23 worker-1 Ready mcp-2,worker 5d v1.28.13+2ca1a23 17.1.6.4. Updating the OLM Operators In telco environments, software needs to vetted before it is loaded onto a production cluster. Production clusters are also configured in a disconnected network, which means that they are not always directly connected to the internet. Because the clusters are in a disconnected network, the OpenShift Operators are configured for manual update during installation so that new versions can be managed on a cluster-by-cluster basis. Perform the following procedure to move the Operators to the newer versions. Procedure Check to see which Operators need to be updated: USD oc get installplan -A | grep -E 'APPROVED|false' Example output NAMESPACE NAME CSV APPROVAL APPROVED metallb-system install-nwjnh metallb-operator.v4.16.0-202409202304 Manual false openshift-nmstate install-5r7wr kubernetes-nmstate-operator.4.16.0-202409251605 Manual false Patch the InstallPlan resources for those Operators: USD oc patch installplan -n metallb-system install-nwjnh --type merge --patch \ '{"spec":{"approved":true}}' Example output installplan.operators.coreos.com/install-nwjnh patched Monitor the namespace by running the following command: USD oc get all -n metallb-system Example output NAME READY STATUS RESTARTS AGE pod/metallb-operator-controller-manager-69b5f884c-8bp22 0/1 ContainerCreating 0 4s pod/metallb-operator-controller-manager-77895bdb46-bqjdx 1/1 Running 0 4m1s pod/metallb-operator-webhook-server-5d9b968896-vnbhk 0/1 ContainerCreating 0 4s pod/metallb-operator-webhook-server-d76f9c6c8-57r4w 1/1 Running 0 4m1s ... NAME DESIRED CURRENT READY AGE replicaset.apps/metallb-operator-controller-manager-69b5f884c 1 1 0 4s replicaset.apps/metallb-operator-controller-manager-77895bdb46 1 1 1 4m1s replicaset.apps/metallb-operator-controller-manager-99b76f88 0 0 0 4m40s replicaset.apps/metallb-operator-webhook-server-5d9b968896 1 1 0 4s replicaset.apps/metallb-operator-webhook-server-6f7dbfdb88 0 0 0 4m40s replicaset.apps/metallb-operator-webhook-server-d76f9c6c8 1 1 1 4m1s When the update is complete, the required pods should be in a Running state, and the required ReplicaSet resources should be ready: NAME READY STATUS RESTARTS AGE pod/metallb-operator-controller-manager-69b5f884c-8bp22 1/1 Running 0 25s pod/metallb-operator-webhook-server-5d9b968896-vnbhk 1/1 Running 0 25s ... NAME DESIRED CURRENT READY AGE replicaset.apps/metallb-operator-controller-manager-69b5f884c 1 1 1 25s replicaset.apps/metallb-operator-controller-manager-77895bdb46 0 0 0 4m22s replicaset.apps/metallb-operator-webhook-server-5d9b968896 1 1 1 25s replicaset.apps/metallb-operator-webhook-server-d76f9c6c8 0 0 0 4m22s Verification Verify that the Operators do not need to be updated for a second time: USD oc get installplan -A | grep -E 'APPROVED|false' There should be no output returned. Note Sometimes you have to approve an update twice because some Operators have interim z-stream release versions that need to be installed before the final version. Additional resources Updating the worker nodes 17.1.6.4.1. 
Performing the second y-stream update After completing the first y-stream update, you must update the y-stream control plane version to the new EUS version. Procedure Verify that the <4.y.z> release that you selected is still listed as a good channel to move to: USD oc adm upgrade Example output Cluster version is 4.15.33 Upgradeable=False Reason: AdminAckRequired Message: Kubernetes 1.29 and therefore OpenShift 4.16 remove several APIs which require admin consideration. Please see the knowledge article https://access.redhat.com/articles/7031404 for details and instructions. Upstream is unset, so the cluster will use an appropriate default. Channel: eus-4.16 (available channels: candidate-4.15, candidate-4.16, eus-4.16, fast-4.15, fast-4.16, stable-4.15, stable-4.16) Recommended updates: VERSION IMAGE 4.16.14 quay.io/openshift-release-dev/ocp-release@sha256:0521a0f1acd2d1b77f76259cb9bae9c743c60c37d9903806a3372c1414253658 4.16.13 quay.io/openshift-release-dev/ocp-release@sha256:6078cb4ae197b5b0c526910363b8aff540343bfac62ecb1ead9e068d541da27b 4.15.34 quay.io/openshift-release-dev/ocp-release@sha256:f2e0c593f6ed81250c11d0bac94dbaf63656223477b7e8693a652f933056af6e Note If you update soon after the initial GA of a new Y-stream release, you might not see new y-stream releases available when you run the oc adm upgrade command. Optional: View the potential update releases that are not recommended. Run the following command: USD oc adm upgrade --include-not-recommended Example output Cluster version is 4.15.33 Upgradeable=False Reason: AdminAckRequired Message: Kubernetes 1.29 and therefore OpenShift 4.16 remove several APIs which require admin consideration. Please see the knowledge article https://access.redhat.com/articles/7031404 for details and instructions. Upstream is unset, so the cluster will use an appropriate default.Channel: eus-4.16 (available channels: candidate-4.15, candidate-4.16, eus-4.16, fast-4.15, fast-4.16, stable-4.15, stable-4.16) Recommended updates: VERSION IMAGE 4.16.14 quay.io/openshift-release-dev/ocp-release@sha256:0521a0f1acd2d1b77f76259cb9bae9c743c60c37d9903806a3372c1414253658 4.16.13 quay.io/openshift-release-dev/ocp-release@sha256:6078cb4ae197b5b0c526910363b8aff540343bfac62ecb1ead9e068d541da27b 4.15.34 quay.io/openshift-release-dev/ocp-release@sha256:f2e0c593f6ed81250c11d0bac94dbaf63656223477b7e8693a652f933056af6e Supported but not recommended updates: Version: 4.16.15 Image: quay.io/openshift-release-dev/ocp-release@sha256:671bc35e Recommended: Unknown Reason: EvaluationFailed Message: Exposure to AzureRegistryImagePreservation is unknown due to an evaluation failure: invalid PromQL result length must be one, but is 0 In Azure clusters, the in-cluster image registry may fail to preserve images on update. https://issues.redhat.com/browse/IR-461 Note The example shows a potential error that can affect clusters hosted in Microsoft Azure. It does not show risks for bare-metal clusters. 17.1.6.4.2. Acknowledging the y-stream release update When moving between y-stream releases, you must run a patch command to explicitly acknowledge the update. In the output of the oc adm upgrade command, a URL is provided that shows the specific command to run. Important Before you acknowledge the update, verify that there are no Kubernetes APIs in use that are removed in the version you are updating to. For example, in OpenShift Container Platform 4.17, there are no API removals. See "Kubernetes API removals" for more information. 
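One way to check whether any workloads are still calling APIs that are scheduled for removal is to inspect the APIRequestCount resources before patching the acknowledgment. The sketch below relies on the standard status fields of APIRequestCount; the output lists every API flagged for removal, together with its recent request total.

# List APIs flagged for removal in an upcoming release, with their recent request counts.
oc get apirequestcounts -o jsonpath='{range .items[?(@.status.removedInRelease!="")]}{.status.removedInRelease}{"\t"}{.status.requestCount}{"\t"}{.metadata.name}{"\n"}{end}' | sort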
Procedure Acknowledge the y-stream release upgrade by patching the admin-acks config map in the openshift-config namespace. For example, run the following command: USD oc -n openshift-config patch cm admin-acks --patch '{"data":{"ack-4.15-kube-1.29-api-removals-in-4.16":"true"}}' --type=merge Example output configmap/admin-acks patched Additional resources Preparing to update to OpenShift Container Platform 4.17 17.1.6.5. Starting the y-stream control plane update After you have determined the full new release that you are moving to, you can run the oc adm upgrade -to=x.y.z command. Procedure Start the y-stream control plane update. For example, run the following command: USD oc adm upgrade --to=4.16.14 Example output Requested update to 4.16.14 You might move to a z-stream release that has potential issues with platforms other than the one you are running on. The following example shows a potential problem for cluster updates on Microsoft Azure: USD oc adm upgrade --to=4.16.15 Example output error: the update 4.16.15 is not one of the recommended updates, but is available as a conditional update. To accept the Recommended=Unknown risk and to proceed with update use --allow-not-recommended. Reason: EvaluationFailed Message: Exposure to AzureRegistryImagePreservation is unknown due to an evaluation failure: invalid PromQL result length must be one, but is 0 In Azure clusters, the in-cluster image registry may fail to preserve images on update. https://issues.redhat.com/browse/IR-461 Note The example shows a potential error that can affect clusters hosted in Microsoft Azure. It does not show risks for bare-metal clusters. USD oc adm upgrade --to=4.16.15 --allow-not-recommended Example output warning: with --allow-not-recommended you have accepted the risks with 4.14.11 and bypassed Recommended=Unknown EvaluationFailed: Exposure to AzureRegistryImagePreservation is unknown due to an evaluation failure: invalid PromQL result length must be one, but is 0 In Azure clusters, the in-cluster image registry may fail to preserve images on update. https://issues.redhat.com/browse/IR-461 Requested update to 4.16.15 17.1.6.6. Monitoring the second part of a <y+1> cluster update Monitor the second part of the cluster update to the <y+1> version. Procedure Monitor the progress of the second part of the <y+1> update. For example, to monitor the update from 4.15 to 4.16, run the following command: USD watch "oc get clusterversion; echo; oc get co | head -1; oc get co | grep 4.15; oc get co | grep 4.16; echo; oc get no; echo; oc get po -A | grep -E -iv 'running|complete'" Example output NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.15.33 True True 10m Working towards 4.16.14: 132 of 903 done (14% complete), waiting on kube-controller-manager, kube-scheduler NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE authentication 4.15.33 True False False 5d3h baremetal 4.15.33 True False False 5d4h cloud-controller-manager 4.15.33 True False False 5d4h cloud-credential 4.15.33 True False False 5d4h cluster-autoscaler 4.15.33 True False False 5d4h console 4.15.33 True False False 5d3h ... 
config-operator 4.16.14 True False False 5d4h etcd 4.16.14 True False False 5d4h kube-apiserver 4.16.14 True True False 5d4h NodeInstallerProgressing: 1 node is at revision 15; 2 nodes are at revision 17 NAME STATUS ROLES AGE VERSION ctrl-plane-0 Ready control-plane,master 5d4h v1.28.13+2ca1a23 ctrl-plane-1 Ready control-plane,master 5d4h v1.28.13+2ca1a23 ctrl-plane-2 Ready control-plane,master 5d4h v1.28.13+2ca1a23 worker-0 Ready mcp-1,worker 5d4h v1.27.15+6147456 worker-1 Ready mcp-2,worker 5d4h v1.27.15+6147456 NAMESPACE NAME READY STATUS RESTARTS AGE openshift-kube-apiserver kube-apiserver-ctrl-plane-0 0/5 Pending 0 <invalid> As soon as the last control plane node is complete, the cluster version is updated to the new EUS release. For example: NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.16.14 True False 123m Cluster version is 4.16.14 NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE authentication 4.16.14 True False False 5d6h baremetal 4.16.14 True False False 5d7h cloud-controller-manager 4.16.14 True False False 5d7h cloud-credential 4.16.14 True False False 5d7h cluster-autoscaler 4.16.14 True False False 5d7h config-operator 4.16.14 True False False 5d7h console 4.16.14 True False False 5d6h #... operator-lifecycle-manager-packageserver 4.16.14 True False False 5d7h service-ca 4.16.14 True False False 5d7h storage 4.16.14 True False False 5d7h NAME STATUS ROLES AGE VERSION ctrl-plane-0 Ready control-plane,master 5d7h v1.29.8+f10c92d ctrl-plane-1 Ready control-plane,master 5d7h v1.29.8+f10c92d ctrl-plane-2 Ready control-plane,master 5d7h v1.29.8+f10c92d worker-0 Ready mcp-1,worker 5d7h v1.27.15+6147456 worker-1 Ready mcp-2,worker 5d7h v1.27.15+6147456 Additional resources Monitoring the cluster update 17.1.6.7. Updating all the OLM Operators In the second phase of a multi-version upgrade, you must approve all of the Operators and additionally add installations plans for any other Operators that you want to upgrade. Follow the same procedure as outlined in "Updating the OLM Operators". Ensure that you also update any non-OLM Operators as required. Procedure Monitor the cluster update. For example, to monitor the cluster update from version 4.14 to 4.15, run the following command: USD watch "oc get clusterversion; echo; oc get co | head -1; oc get co | grep 4.14; oc get co | grep 4.15; echo; oc get no; echo; oc get po -A | grep -E -iv 'running|complete'" Check to see which Operators need to be updated: USD oc get installplan -A | grep -E 'APPROVED|false' Patch the InstallPlan resources for those Operators: USD oc patch installplan -n metallb-system install-nwjnh --type merge --patch \ '{"spec":{"approved":true}}' Monitor the namespace by running the following command: USD oc get all -n metallb-system When the update is complete, the required pods should be in a Running state, and the required ReplicaSet resources should be ready. Verification During the update the watch command cycles through one or several of the cluster Operators at a time, providing a status of the Operator update in the MESSAGE column. When the cluster Operators update process is complete, each control plane nodes is rebooted, one at a time. Note During this part of the update, messages are reported that state cluster Operators are being updated again or are in a degraded state. This is because the control plane node is offline while it reboots nodes. Additional resources Monitoring the cluster update Updating the OLM Operators 17.1.6.8. 
Updating the worker nodes You upgrade the worker nodes after you have updated the control plane by unpausing the relevant mcp groups you created. Unpausing the mcp group starts the upgrade process for the worker nodes in that group. Each of the worker nodes in the cluster reboot to upgrade to the new EUS, y-stream or z-stream version as required. In the case of Control Plane Only upgrades note that when a worker node is updated it will only require one reboot and will jump <y+2>-release versions. This is a feature that was added to decrease the amount of time that it takes to upgrade large bare-metal clusters. Important This is a potential holding point. You can have a cluster version that is fully supported to run in production with the control plane that is updated to a new EUS release while the worker nodes are at a <y-2>-release. This allows large clusters to upgrade in steps across several maintenance windows. You can check how many nodes are managed in an mcp group. Run the following command to get the list of mcp groups: USD oc get mcp Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-c9a52144456dbff9c9af9c5a37d1b614 True False False 3 3 3 0 36d mcp-1 rendered-mcp-1-07fe50b9ad51fae43ed212e84e1dcc8e False False False 1 0 0 0 47h mcp-2 rendered-mcp-2-07fe50b9ad51fae43ed212e84e1dcc8e False False False 1 0 0 0 47h worker rendered-worker-f1ab7b9a768e1b0ac9290a18817f60f0 True False False 0 0 0 0 36d Note You decide how many mcp groups to upgrade at a time. This depends on how many CNF pods can be taken down at a time and how your pod disruption budget and anti-affinity settings are configured. Get the list of nodes in the cluster: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ctrl-plane-0 Ready control-plane,master 5d8h v1.29.8+f10c92d ctrl-plane-1 Ready control-plane,master 5d8h v1.29.8+f10c92d ctrl-plane-2 Ready control-plane,master 5d8h v1.29.8+f10c92d worker-0 Ready mcp-1,worker 5d8h v1.27.15+6147456 worker-1 Ready mcp-2,worker 5d8h v1.27.15+6147456 Confirm the MachineConfigPool groups that are paused: USD oc get mcp -o json | jq -r '["MCP","Paused"], ["---","------"], (.items[] | [(.metadata.name), (.spec.paused)]) | @tsv' | grep -v worker Example output MCP Paused --- ------ master false mcp-1 true mcp-2 true Note Each MachineConfigPool can be unpaused independently. Therefore, if a maintenance window runs out of time other MCPs do not need to be unpaused immediately. The cluster is supported to run with some worker nodes still at <y-2>-release version. Unpause the required mcp group to begin the upgrade: USD oc patch mcp/mcp-1 --type merge --patch '{"spec":{"paused":false}}' Example output machineconfigpool.machineconfiguration.openshift.io/mcp-1 patched Confirm that the required mcp group is unpaused: USD oc get mcp -o json | jq -r '["MCP","Paused"], ["---","------"], (.items[] | [(.metadata.name), (.spec.paused)]) | @tsv' | grep -v worker Example output MCP Paused --- ------ master false mcp-1 false mcp-2 true As each mcp group is upgraded, continue to unpause and upgrade the remaining nodes. USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ctrl-plane-0 Ready control-plane,master 5d8h v1.29.8+f10c92d ctrl-plane-1 Ready control-plane,master 5d8h v1.29.8+f10c92d ctrl-plane-2 Ready control-plane,master 5d8h v1.29.8+f10c92d worker-0 Ready mcp-1,worker 5d8h v1.29.8+f10c92d worker-1 NotReady,SchedulingDisabled mcp-2,worker 5d8h v1.27.15+6147456 17.1.6.9. 
Verifying the health of the newly updated cluster Run the following commands after updating the cluster to verify that the cluster is back up and running. Procedure Check the cluster version by running the following command: USD oc get clusterversion Example output NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.16.14 True False 4h38m Cluster version is 4.16.14 This should return the new cluster version and the PROGRESSING column should return False . Check that all nodes are ready: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ctrl-plane-0 Ready control-plane,master 5d9h v1.29.8+f10c92d ctrl-plane-1 Ready control-plane,master 5d9h v1.29.8+f10c92d ctrl-plane-2 Ready control-plane,master 5d9h v1.29.8+f10c92d worker-0 Ready mcp-1,worker 5d9h v1.29.8+f10c92d worker-1 Ready mcp-2,worker 5d9h v1.29.8+f10c92d All nodes in the cluster should be in a Ready status and running the same version. Check that there are no paused mcp resources in the cluster: USD oc get mcp -o json | jq -r '["MCP","Paused"], ["---","------"], (.items[] | [(.metadata.name), (.spec.paused)]) | @tsv' | grep -v worker Example output MCP Paused --- ------ master false mcp-1 false mcp-2 false Check that all cluster Operators are available: USD oc get co Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE authentication 4.16.14 True False False 5d9h baremetal 4.16.14 True False False 5d9h cloud-controller-manager 4.16.14 True False False 5d10h cloud-credential 4.16.14 True False False 5d10h cluster-autoscaler 4.16.14 True False False 5d9h config-operator 4.16.14 True False False 5d9h console 4.16.14 True False False 5d9h control-plane-machine-set 4.16.14 True False False 5d9h csi-snapshot-controller 4.16.14 True False False 5d9h dns 4.16.14 True False False 5d9h etcd 4.16.14 True False False 5d9h image-registry 4.16.14 True False False 85m ingress 4.16.14 True False False 5d9h insights 4.16.14 True False False 5d9h kube-apiserver 4.16.14 True False False 5d9h kube-controller-manager 4.16.14 True False False 5d9h kube-scheduler 4.16.14 True False False 5d9h kube-storage-version-migrator 4.16.14 True False False 4h48m machine-api 4.16.14 True False False 5d9h machine-approver 4.16.14 True False False 5d9h machine-config 4.16.14 True False False 5d9h marketplace 4.16.14 True False False 5d9h monitoring 4.16.14 True False False 5d9h network 4.16.14 True False False 5d9h node-tuning 4.16.14 True False False 5d7h openshift-apiserver 4.16.14 True False False 5d9h openshift-controller-manager 4.16.14 True False False 5d9h openshift-samples 4.16.14 True False False 5h24m operator-lifecycle-manager 4.16.14 True False False 5d9h operator-lifecycle-manager-catalog 4.16.14 True False False 5d9h operator-lifecycle-manager-packageserver 4.16.14 True False False 5d9h service-ca 4.16.14 True False False 5d9h storage 4.16.14 True False False 5d9h All cluster Operators should report True in the AVAILABLE column. Check that all pods are healthy: USD oc get po -A | grep -E -iv 'complete|running' This should not return any pods. Note You might see a few pods still moving after the update. Watch this for a while to make sure all pods are cleared. 17.1.7. Completing the y-stream cluster update Follow these steps to perform the y-stream cluster update and monitor the update through to completion. Completing a y-stream update is more straightforward than a Control Plane Only update. 17.1.7.1. 
Acknowledging the Control Plane Only or y-stream update
When you update to any version from 4.11 and later, you must manually acknowledge that the update can continue.
Important
Before you acknowledge the update, verify that there are no Kubernetes APIs in use that are removed in the version you are updating to. For example, in OpenShift Container Platform 4.17, there are no API removals. See "Kubernetes API removals" for more information.
Procedure
Run the following command:
USD oc -n openshift-config patch cm admin-acks --patch '{"data":{"ack-<update_version_from>-kube-<kube_api_version>-api-removals-in-<update_version_to>":"true"}}' --type=merge
where:
<update_version_from> Is the cluster version you are moving from, for example, 4.14.
<kube_api_version> Is the kube API version, for example, 1.28.
<update_version_to> Is the cluster version you are moving to, for example, 4.15.
Verification
Verify the update. Run the following command:
USD oc get configmap admin-acks -n openshift-config -o json | jq .data
Example output
{ "ack-4.14-kube-1.28-api-removals-in-4.15": "true", "ack-4.15-kube-1.29-api-removals-in-4.16": "true" }
Note
In this example, the cluster is updated from version 4.14 to 4.15, and then from 4.15 to 4.16 in a Control Plane Only update.
Additional resources
Kubernetes API removals
17.1.7.2. Starting the cluster update
When updating from one y-stream release to the next, you must ensure that the intermediate z-stream releases are also compatible.
Note
You can verify that you are updating to a viable release by running the oc adm upgrade command. The oc adm upgrade command lists the compatible update releases.
Procedure
Start the update:
USD oc adm upgrade --to=4.15.33
Important
Control Plane Only update: Make sure you point to the interim <y+1> release path.
Y-stream update: Make sure you use the correct <y.z> release that follows the Kubernetes version skew policy.
Z-stream update: Verify that there are no problems moving to that specific release.
Example output
Requested update to 4.15.33 1
1 The Requested update value changes depending on your particular update.
Additional resources
Selecting the target release
17.1.7.3. Monitoring the cluster update
You should check the cluster health often during the update. Check the node status, cluster Operator status, and failed pods.
Procedure
Monitor the cluster update. For example, to monitor the cluster update from version 4.14 to 4.15, run the following command:
USD watch "oc get clusterversion; echo; oc get co | head -1; oc get co | grep 4.14; oc get co | grep 4.15; echo; oc get no; echo; oc get po -A | grep -E -iv 'running|complete'"
Example output
NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.14.34 True True 4m6s Working towards 4.15.33: 111 of 873 done (12% complete), waiting on kube-apiserver NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE authentication 4.14.34 True False False 4d22h baremetal 4.14.34 True False False 4d23h cloud-controller-manager 4.14.34 True False False 4d23h cloud-credential 4.14.34 True False False 4d23h cluster-autoscaler 4.14.34 True False False 4d23h console 4.14.34 True False False 4d22h ...
storage 4.14.34 True False False 4d23h config-operator 4.15.33 True False False 4d23h etcd 4.15.33 True False False 4d23h NAME STATUS ROLES AGE VERSION ctrl-plane-0 Ready control-plane,master 4d23h v1.27.15+6147456 ctrl-plane-1 Ready control-plane,master 4d23h v1.27.15+6147456 ctrl-plane-2 Ready control-plane,master 4d23h v1.27.15+6147456 worker-0 Ready mcp-1,worker 4d23h v1.27.15+6147456 worker-1 Ready mcp-2,worker 4d23h v1.27.15+6147456 NAMESPACE NAME READY STATUS RESTARTS AGE openshift-marketplace redhat-marketplace-rf86t 0/1 ContainerCreating 0 0s Verification During the update the watch command cycles through one or several of the cluster Operators at a time, providing a status of the Operator update in the MESSAGE column. When the cluster Operators update process is complete, each control plane nodes is rebooted, one at a time. Note During this part of the update, messages are reported that state cluster Operators are being updated again or are in a degraded state. This is because the control plane node is offline while it reboots nodes. As soon as the last control plane node reboot is complete, the cluster version is displayed as updated. When the control plane update is complete a message such as the following is displayed. This example shows an update completed to the intermediate y-stream release. NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.15.33 True False 28m Cluster version is 4.15.33 NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE authentication 4.15.33 True False False 5d baremetal 4.15.33 True False False 5d cloud-controller-manager 4.15.33 True False False 5d1h cloud-credential 4.15.33 True False False 5d1h cluster-autoscaler 4.15.33 True False False 5d config-operator 4.15.33 True False False 5d console 4.15.33 True False False 5d ... service-ca 4.15.33 True False False 5d storage 4.15.33 True False False 5d NAME STATUS ROLES AGE VERSION ctrl-plane-0 Ready control-plane,master 5d v1.28.13+2ca1a23 ctrl-plane-1 Ready control-plane,master 5d v1.28.13+2ca1a23 ctrl-plane-2 Ready control-plane,master 5d v1.28.13+2ca1a23 worker-0 Ready mcp-1,worker 5d v1.28.13+2ca1a23 worker-1 Ready mcp-2,worker 5d v1.28.13+2ca1a23 17.1.7.4. Updating the OLM Operators In telco environments, software needs to vetted before it is loaded onto a production cluster. Production clusters are also configured in a disconnected network, which means that they are not always directly connected to the internet. Because the clusters are in a disconnected network, the OpenShift Operators are configured for manual update during installation so that new versions can be managed on a cluster-by-cluster basis. Perform the following procedure to move the Operators to the newer versions. 
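Before working through the procedure, it can be useful to see every Operator that is still waiting for manual approval across all namespaces in a single view. The following is a minimal sketch, not part of the official procedure; it assumes only the standard InstallPlan fields and that jq is available on the workstation:
# List every InstallPlan that still requires manual approval, together with the
# CSV versions it would install, so that you can plan the approvals for this window.
oc get installplan -A -o json | jq -r '
  .items[]
  | select(.spec.approved == false)
  | [.metadata.namespace, .metadata.name, (.spec.clusterServiceVersionNames | join(","))]
  | @tsv'
Each line of output corresponds to one InstallPlan approval in the steps that follow.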
Procedure Check to see which Operators need to be updated: USD oc get installplan -A | grep -E 'APPROVED|false' Example output NAMESPACE NAME CSV APPROVAL APPROVED metallb-system install-nwjnh metallb-operator.v4.16.0-202409202304 Manual false openshift-nmstate install-5r7wr kubernetes-nmstate-operator.4.16.0-202409251605 Manual false Patch the InstallPlan resources for those Operators: USD oc patch installplan -n metallb-system install-nwjnh --type merge --patch \ '{"spec":{"approved":true}}' Example output installplan.operators.coreos.com/install-nwjnh patched Monitor the namespace by running the following command: USD oc get all -n metallb-system Example output NAME READY STATUS RESTARTS AGE pod/metallb-operator-controller-manager-69b5f884c-8bp22 0/1 ContainerCreating 0 4s pod/metallb-operator-controller-manager-77895bdb46-bqjdx 1/1 Running 0 4m1s pod/metallb-operator-webhook-server-5d9b968896-vnbhk 0/1 ContainerCreating 0 4s pod/metallb-operator-webhook-server-d76f9c6c8-57r4w 1/1 Running 0 4m1s ... NAME DESIRED CURRENT READY AGE replicaset.apps/metallb-operator-controller-manager-69b5f884c 1 1 0 4s replicaset.apps/metallb-operator-controller-manager-77895bdb46 1 1 1 4m1s replicaset.apps/metallb-operator-controller-manager-99b76f88 0 0 0 4m40s replicaset.apps/metallb-operator-webhook-server-5d9b968896 1 1 0 4s replicaset.apps/metallb-operator-webhook-server-6f7dbfdb88 0 0 0 4m40s replicaset.apps/metallb-operator-webhook-server-d76f9c6c8 1 1 1 4m1s When the update is complete, the required pods should be in a Running state, and the required ReplicaSet resources should be ready: NAME READY STATUS RESTARTS AGE pod/metallb-operator-controller-manager-69b5f884c-8bp22 1/1 Running 0 25s pod/metallb-operator-webhook-server-5d9b968896-vnbhk 1/1 Running 0 25s ... NAME DESIRED CURRENT READY AGE replicaset.apps/metallb-operator-controller-manager-69b5f884c 1 1 1 25s replicaset.apps/metallb-operator-controller-manager-77895bdb46 0 0 0 4m22s replicaset.apps/metallb-operator-webhook-server-5d9b968896 1 1 1 25s replicaset.apps/metallb-operator-webhook-server-d76f9c6c8 0 0 0 4m22s Verification Verify that the Operators do not need to be updated for a second time: USD oc get installplan -A | grep -E 'APPROVED|false' There should be no output returned. Note Sometimes you have to approve an update twice because some Operators have interim z-stream release versions that need to be installed before the final version. Additional resources Updating the worker nodes 17.1.7.5. Updating the worker nodes You upgrade the worker nodes after you have updated the control plane by unpausing the relevant mcp groups you created. Unpausing the mcp group starts the upgrade process for the worker nodes in that group. Each of the worker nodes in the cluster reboot to upgrade to the new EUS, y-stream or z-stream version as required. In the case of Control Plane Only upgrades note that when a worker node is updated it will only require one reboot and will jump <y+2>-release versions. This is a feature that was added to decrease the amount of time that it takes to upgrade large bare-metal clusters. Important This is a potential holding point. You can have a cluster version that is fully supported to run in production with the control plane that is updated to a new EUS release while the worker nodes are at a <y-2>-release. This allows large clusters to upgrade in steps across several maintenance windows. You can check how many nodes are managed in an mcp group. 
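For a quick summary of how many machines each pool manages and whether it is still paused, you can combine the checks in this procedure into one hedged jq-based sketch; the pool names in the output depend on the MCPs that you created:
# Print each MachineConfigPool with its paused flag, total machine count, and updated machine count.
oc get mcp -o json | jq -r '
  .items[]
  | [.metadata.name, (.spec.paused // false | tostring), (.status.machineCount | tostring), (.status.updatedMachineCount | tostring)]
  | @tsv'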
Run the following command to get the list of mcp groups: USD oc get mcp Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-c9a52144456dbff9c9af9c5a37d1b614 True False False 3 3 3 0 36d mcp-1 rendered-mcp-1-07fe50b9ad51fae43ed212e84e1dcc8e False False False 1 0 0 0 47h mcp-2 rendered-mcp-2-07fe50b9ad51fae43ed212e84e1dcc8e False False False 1 0 0 0 47h worker rendered-worker-f1ab7b9a768e1b0ac9290a18817f60f0 True False False 0 0 0 0 36d Note You decide how many mcp groups to upgrade at a time. This depends on how many CNF pods can be taken down at a time and how your pod disruption budget and anti-affinity settings are configured. Get the list of nodes in the cluster: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ctrl-plane-0 Ready control-plane,master 5d8h v1.29.8+f10c92d ctrl-plane-1 Ready control-plane,master 5d8h v1.29.8+f10c92d ctrl-plane-2 Ready control-plane,master 5d8h v1.29.8+f10c92d worker-0 Ready mcp-1,worker 5d8h v1.27.15+6147456 worker-1 Ready mcp-2,worker 5d8h v1.27.15+6147456 Confirm the MachineConfigPool groups that are paused: USD oc get mcp -o json | jq -r '["MCP","Paused"], ["---","------"], (.items[] | [(.metadata.name), (.spec.paused)]) | @tsv' | grep -v worker Example output MCP Paused --- ------ master false mcp-1 true mcp-2 true Note Each MachineConfigPool can be unpaused independently. Therefore, if a maintenance window runs out of time other MCPs do not need to be unpaused immediately. The cluster is supported to run with some worker nodes still at <y-2>-release version. Unpause the required mcp group to begin the upgrade: USD oc patch mcp/mcp-1 --type merge --patch '{"spec":{"paused":false}}' Example output machineconfigpool.machineconfiguration.openshift.io/mcp-1 patched Confirm that the required mcp group is unpaused: USD oc get mcp -o json | jq -r '["MCP","Paused"], ["---","------"], (.items[] | [(.metadata.name), (.spec.paused)]) | @tsv' | grep -v worker Example output MCP Paused --- ------ master false mcp-1 false mcp-2 true As each mcp group is upgraded, continue to unpause and upgrade the remaining nodes. USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ctrl-plane-0 Ready control-plane,master 5d8h v1.29.8+f10c92d ctrl-plane-1 Ready control-plane,master 5d8h v1.29.8+f10c92d ctrl-plane-2 Ready control-plane,master 5d8h v1.29.8+f10c92d worker-0 Ready mcp-1,worker 5d8h v1.29.8+f10c92d worker-1 NotReady,SchedulingDisabled mcp-2,worker 5d8h v1.27.15+6147456 17.1.7.6. Verifying the health of the newly updated cluster Run the following commands after updating the cluster to verify that the cluster is back up and running. Procedure Check the cluster version by running the following command: USD oc get clusterversion Example output NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.16.14 True False 4h38m Cluster version is 4.16.14 This should return the new cluster version and the PROGRESSING column should return False . Check that all nodes are ready: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ctrl-plane-0 Ready control-plane,master 5d9h v1.29.8+f10c92d ctrl-plane-1 Ready control-plane,master 5d9h v1.29.8+f10c92d ctrl-plane-2 Ready control-plane,master 5d9h v1.29.8+f10c92d worker-0 Ready mcp-1,worker 5d9h v1.29.8+f10c92d worker-1 Ready mcp-2,worker 5d9h v1.29.8+f10c92d All nodes in the cluster should be in a Ready status and running the same version. 
Check that there are no paused mcp resources in the cluster: USD oc get mcp -o json | jq -r '["MCP","Paused"], ["---","------"], (.items[] | [(.metadata.name), (.spec.paused)]) | @tsv' | grep -v worker Example output MCP Paused --- ------ master false mcp-1 false mcp-2 false Check that all cluster Operators are available: USD oc get co Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE authentication 4.16.14 True False False 5d9h baremetal 4.16.14 True False False 5d9h cloud-controller-manager 4.16.14 True False False 5d10h cloud-credential 4.16.14 True False False 5d10h cluster-autoscaler 4.16.14 True False False 5d9h config-operator 4.16.14 True False False 5d9h console 4.16.14 True False False 5d9h control-plane-machine-set 4.16.14 True False False 5d9h csi-snapshot-controller 4.16.14 True False False 5d9h dns 4.16.14 True False False 5d9h etcd 4.16.14 True False False 5d9h image-registry 4.16.14 True False False 85m ingress 4.16.14 True False False 5d9h insights 4.16.14 True False False 5d9h kube-apiserver 4.16.14 True False False 5d9h kube-controller-manager 4.16.14 True False False 5d9h kube-scheduler 4.16.14 True False False 5d9h kube-storage-version-migrator 4.16.14 True False False 4h48m machine-api 4.16.14 True False False 5d9h machine-approver 4.16.14 True False False 5d9h machine-config 4.16.14 True False False 5d9h marketplace 4.16.14 True False False 5d9h monitoring 4.16.14 True False False 5d9h network 4.16.14 True False False 5d9h node-tuning 4.16.14 True False False 5d7h openshift-apiserver 4.16.14 True False False 5d9h openshift-controller-manager 4.16.14 True False False 5d9h openshift-samples 4.16.14 True False False 5h24m operator-lifecycle-manager 4.16.14 True False False 5d9h operator-lifecycle-manager-catalog 4.16.14 True False False 5d9h operator-lifecycle-manager-packageserver 4.16.14 True False False 5d9h service-ca 4.16.14 True False False 5d9h storage 4.16.14 True False False 5d9h All cluster Operators should report True in the AVAILABLE column. Check that all pods are healthy: USD oc get po -A | grep -E -iv 'complete|running' This should not return any pods. Note You might see a few pods still moving after the update. Watch this for a while to make sure all pods are cleared. 17.1.8. Completing the z-stream cluster update Follow these steps to perform the z-stream cluster update and monitor the update through to completion. Completing a z-stream update is more straightforward than a Control Plane Only or y-stream update. 17.1.8.1. Starting the cluster update When updating from one y-stream release to the , you must ensure that the intermediate z-stream releases are also compatible. Note You can verify that you are updating to a viable release by running the oc adm upgrade command. The oc adm upgrade command lists the compatible update releases. Procedure Start the update: USD oc adm upgrade --to=4.15.33 Important Control Plane Only update : Make sure you point to the interim <y+1> release path Y-stream update - Make sure you use the correct <y.z> release that follows the Kubernetes version skew policy . Z-stream update - Verify that there are no problems moving to that specific release Example output Requested update to 4.15.33 1 1 The Requested update value changes depending on your particular update. Additional resources Selecting the target release 17.1.8.2. Updating the worker nodes You upgrade the worker nodes after you have updated the control plane by unpausing the relevant mcp groups you created. 
Unpausing the mcp group starts the upgrade process for the worker nodes in that group. Each of the worker nodes in the cluster reboot to upgrade to the new EUS, y-stream or z-stream version as required. In the case of Control Plane Only upgrades note that when a worker node is updated it will only require one reboot and will jump <y+2>-release versions. This is a feature that was added to decrease the amount of time that it takes to upgrade large bare-metal clusters. Important This is a potential holding point. You can have a cluster version that is fully supported to run in production with the control plane that is updated to a new EUS release while the worker nodes are at a <y-2>-release. This allows large clusters to upgrade in steps across several maintenance windows. You can check how many nodes are managed in an mcp group. Run the following command to get the list of mcp groups: USD oc get mcp Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-c9a52144456dbff9c9af9c5a37d1b614 True False False 3 3 3 0 36d mcp-1 rendered-mcp-1-07fe50b9ad51fae43ed212e84e1dcc8e False False False 1 0 0 0 47h mcp-2 rendered-mcp-2-07fe50b9ad51fae43ed212e84e1dcc8e False False False 1 0 0 0 47h worker rendered-worker-f1ab7b9a768e1b0ac9290a18817f60f0 True False False 0 0 0 0 36d Note You decide how many mcp groups to upgrade at a time. This depends on how many CNF pods can be taken down at a time and how your pod disruption budget and anti-affinity settings are configured. Get the list of nodes in the cluster: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ctrl-plane-0 Ready control-plane,master 5d8h v1.29.8+f10c92d ctrl-plane-1 Ready control-plane,master 5d8h v1.29.8+f10c92d ctrl-plane-2 Ready control-plane,master 5d8h v1.29.8+f10c92d worker-0 Ready mcp-1,worker 5d8h v1.27.15+6147456 worker-1 Ready mcp-2,worker 5d8h v1.27.15+6147456 Confirm the MachineConfigPool groups that are paused: USD oc get mcp -o json | jq -r '["MCP","Paused"], ["---","------"], (.items[] | [(.metadata.name), (.spec.paused)]) | @tsv' | grep -v worker Example output MCP Paused --- ------ master false mcp-1 true mcp-2 true Note Each MachineConfigPool can be unpaused independently. Therefore, if a maintenance window runs out of time other MCPs do not need to be unpaused immediately. The cluster is supported to run with some worker nodes still at <y-2>-release version. Unpause the required mcp group to begin the upgrade: USD oc patch mcp/mcp-1 --type merge --patch '{"spec":{"paused":false}}' Example output machineconfigpool.machineconfiguration.openshift.io/mcp-1 patched Confirm that the required mcp group is unpaused: USD oc get mcp -o json | jq -r '["MCP","Paused"], ["---","------"], (.items[] | [(.metadata.name), (.spec.paused)]) | @tsv' | grep -v worker Example output MCP Paused --- ------ master false mcp-1 false mcp-2 true As each mcp group is upgraded, continue to unpause and upgrade the remaining nodes. USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ctrl-plane-0 Ready control-plane,master 5d8h v1.29.8+f10c92d ctrl-plane-1 Ready control-plane,master 5d8h v1.29.8+f10c92d ctrl-plane-2 Ready control-plane,master 5d8h v1.29.8+f10c92d worker-0 Ready mcp-1,worker 5d8h v1.29.8+f10c92d worker-1 NotReady,SchedulingDisabled mcp-2,worker 5d8h v1.27.15+6147456 17.1.8.3. 
Verifying the health of the newly updated cluster Run the following commands after updating the cluster to verify that the cluster is back up and running. Procedure Check the cluster version by running the following command: USD oc get clusterversion Example output NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.16.14 True False 4h38m Cluster version is 4.16.14 This should return the new cluster version and the PROGRESSING column should return False . Check that all nodes are ready: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ctrl-plane-0 Ready control-plane,master 5d9h v1.29.8+f10c92d ctrl-plane-1 Ready control-plane,master 5d9h v1.29.8+f10c92d ctrl-plane-2 Ready control-plane,master 5d9h v1.29.8+f10c92d worker-0 Ready mcp-1,worker 5d9h v1.29.8+f10c92d worker-1 Ready mcp-2,worker 5d9h v1.29.8+f10c92d All nodes in the cluster should be in a Ready status and running the same version. Check that there are no paused mcp resources in the cluster: USD oc get mcp -o json | jq -r '["MCP","Paused"], ["---","------"], (.items[] | [(.metadata.name), (.spec.paused)]) | @tsv' | grep -v worker Example output MCP Paused --- ------ master false mcp-1 false mcp-2 false Check that all cluster Operators are available: USD oc get co Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE authentication 4.16.14 True False False 5d9h baremetal 4.16.14 True False False 5d9h cloud-controller-manager 4.16.14 True False False 5d10h cloud-credential 4.16.14 True False False 5d10h cluster-autoscaler 4.16.14 True False False 5d9h config-operator 4.16.14 True False False 5d9h console 4.16.14 True False False 5d9h control-plane-machine-set 4.16.14 True False False 5d9h csi-snapshot-controller 4.16.14 True False False 5d9h dns 4.16.14 True False False 5d9h etcd 4.16.14 True False False 5d9h image-registry 4.16.14 True False False 85m ingress 4.16.14 True False False 5d9h insights 4.16.14 True False False 5d9h kube-apiserver 4.16.14 True False False 5d9h kube-controller-manager 4.16.14 True False False 5d9h kube-scheduler 4.16.14 True False False 5d9h kube-storage-version-migrator 4.16.14 True False False 4h48m machine-api 4.16.14 True False False 5d9h machine-approver 4.16.14 True False False 5d9h machine-config 4.16.14 True False False 5d9h marketplace 4.16.14 True False False 5d9h monitoring 4.16.14 True False False 5d9h network 4.16.14 True False False 5d9h node-tuning 4.16.14 True False False 5d7h openshift-apiserver 4.16.14 True False False 5d9h openshift-controller-manager 4.16.14 True False False 5d9h openshift-samples 4.16.14 True False False 5h24m operator-lifecycle-manager 4.16.14 True False False 5d9h operator-lifecycle-manager-catalog 4.16.14 True False False 5d9h operator-lifecycle-manager-packageserver 4.16.14 True False False 5d9h service-ca 4.16.14 True False False 5d9h storage 4.16.14 True False False 5d9h All cluster Operators should report True in the AVAILABLE column. Check that all pods are healthy: USD oc get po -A | grep -E -iv 'complete|running' This should not return any pods. Note You might see a few pods still moving after the update. Watch this for a while to make sure all pods are cleared. 17.2. Troubleshooting and maintaining telco core CNF clusters 17.2.1. Troubleshooting and maintaining telco core CNF clusters Troubleshooting and maintenance are weekly tasks that can be a challenge if you do not have the tools to reach your goal, whether you want to update a component or investigate an issue. 
Part of the challenge is knowing where and how to search for tools and answers. To maintain and troubleshoot a bare-metal environment where high-bandwidth network throughput is required, see the following procedures. Important This troubleshooting information is not a reference for configuring OpenShift Container Platform or developing Cloud-native Network Function (CNF) applications. For information about developing CNF applications for telco, see Red Hat Best Practices for Kubernetes . 17.2.1.1. Cloud-native Network Functions If you are starting to use OpenShift Container Platform for telecommunications Cloud-native Network Function (CNF) applications, learning about CNFs can help you understand the issues that you might encounter. To learn more about CNFs and their evolution, see VNF and CNF, what's the difference? . 17.2.1.2. Getting Support If you experience difficulty with a procedure, visit the Red Hat Customer Portal . From the Customer Portal, you can find help in various ways: Search or browse through the Red Hat Knowledgebase of articles and solutions about Red Hat products. Submit a support case to Red Hat Support. Access other product documentation. To identify issues with your deployment, you can use the debugging tool or check the health endpoint of your deployment. After you have debugged or obtained health information about your deployment, you can search the Red Hat Knowledgebase for a solution or file a support ticket. 17.2.1.2.1. About the Red Hat Knowledgebase The Red Hat Knowledgebase provides rich content aimed at helping you make the most of Red Hat's products and technologies. The Red Hat Knowledgebase consists of articles, product documentation, and videos outlining best practices on installing, configuring, and using Red Hat products. In addition, you can search for solutions to known issues, each providing concise root cause descriptions and remedial steps. 17.2.1.2.2. Searching the Red Hat Knowledgebase In the event of an OpenShift Container Platform issue, you can perform an initial search to determine if a solution already exists within the Red Hat Knowledgebase. Prerequisites You have a Red Hat Customer Portal account. Procedure Log in to the Red Hat Customer Portal . Click Search . In the search field, input keywords and strings relating to the problem, including: OpenShift Container Platform components (such as etcd ) Related procedure (such as installation ) Warnings, error messages, and other outputs related to explicit failures Click the Enter key. Optional: Select the OpenShift Container Platform product filter. Optional: Select the Documentation content type filter. 17.2.1.2.3. Submitting a support case Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). You have a Red Hat Customer Portal account. You have a Red Hat Standard or Premium subscription. Procedure Log in to the Customer Support page of the Red Hat Customer Portal. Click Get support . On the Cases tab of the Customer Support page: Optional: Change the pre-filled account and owner details if needed. Select the appropriate category for your issue, such as Bug or Defect , and click Continue . Enter the following information: In the Summary field, enter a concise but descriptive problem summary and further details about the symptoms being experienced, as well as your expectations. Select OpenShift Container Platform from the Product drop-down menu. Select 4.17 from the Version drop-down. 
Review the list of suggested Red Hat Knowledgebase solutions for a potential match against the problem that is being reported. If the suggested articles do not address the issue, click Continue . Review the updated list of suggested Red Hat Knowledgebase solutions for a potential match against the problem that is being reported. The list is refined as you provide more information during the case creation process. If the suggested articles do not address the issue, click Continue . Ensure that the account information presented is as expected, and if not, amend accordingly. Check that the autofilled OpenShift Container Platform Cluster ID is correct. If it is not, manually obtain your cluster ID. To manually obtain your cluster ID using the OpenShift Container Platform web console: Navigate to Home Overview . Find the value in the Cluster ID field of the Details section. Alternatively, it is possible to open a new support case through the OpenShift Container Platform web console and have your cluster ID autofilled. From the toolbar, navigate to (?) Help Open Support Case . The Cluster ID value is autofilled. To obtain your cluster ID using the OpenShift CLI ( oc ), run the following command: USD oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{"\n"}' Complete the following questions where prompted and then click Continue : What are you experiencing? What are you expecting to happen? Define the value or impact to you or the business. Where are you experiencing this behavior? What environment? When does this behavior occur? Frequency? Repeatedly? At certain times? Upload relevant diagnostic data files and click Continue . It is recommended to include data gathered using the oc adm must-gather command as a starting point, plus any issue specific data that is not collected by that command. Input relevant case management details and click Continue . Preview the case details and click Submit . 17.2.2. General troubleshooting When you encounter a problem, the first step is to find the specific area where the issue is happening. To narrow down the potential problematic areas, complete one or more tasks: Query your cluster Check your pod logs Debug a pod Review events 17.2.2.1. Querying your cluster Get information about your cluster so that you can more accurately find potential problems. 
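When you are chasing an intermittent problem, it can also help to capture the same queries as timestamped snapshots that you can compare across maintenance windows. The following is a minimal sketch; the snapshot directory is an arbitrary choice for this example:
# Save a timestamped snapshot of the cluster version, cluster Operators, nodes, and recent events.
SNAP_DIR=/tmp/cluster-snapshots   # illustrative location, adjust as needed
mkdir -p "${SNAP_DIR}"
TS=$(date +%Y%m%d-%H%M%S)
oc get clusterversion,clusteroperator,node -o wide > "${SNAP_DIR}/state-${TS}.txt"
oc get events -A --sort-by=.metadata.creationTimestamp > "${SNAP_DIR}/events-${TS}.txt"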
Procedure Switch into a project by running the following command: USD oc project <project_name> Query your cluster version, cluster Operator, and node within that namespace by running the following command: USD oc get clusterversion,clusteroperator,node Example output NAME VERSION AVAILABLE PROGRESSING SINCE STATUS clusterversion.config.openshift.io/version 4.16.11 True False 62d Cluster version is 4.16.11 NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE clusteroperator.config.openshift.io/authentication 4.16.11 True False False 62d clusteroperator.config.openshift.io/baremetal 4.16.11 True False False 62d clusteroperator.config.openshift.io/cloud-controller-manager 4.16.11 True False False 62d clusteroperator.config.openshift.io/cloud-credential 4.16.11 True False False 62d clusteroperator.config.openshift.io/cluster-autoscaler 4.16.11 True False False 62d clusteroperator.config.openshift.io/config-operator 4.16.11 True False False 62d clusteroperator.config.openshift.io/console 4.16.11 True False False 62d clusteroperator.config.openshift.io/control-plane-machine-set 4.16.11 True False False 62d clusteroperator.config.openshift.io/csi-snapshot-controller 4.16.11 True False False 62d clusteroperator.config.openshift.io/dns 4.16.11 True False False 62d clusteroperator.config.openshift.io/etcd 4.16.11 True False False 62d clusteroperator.config.openshift.io/image-registry 4.16.11 True False False 55d clusteroperator.config.openshift.io/ingress 4.16.11 True False False 62d clusteroperator.config.openshift.io/insights 4.16.11 True False False 62d clusteroperator.config.openshift.io/kube-apiserver 4.16.11 True False False 62d clusteroperator.config.openshift.io/kube-controller-manager 4.16.11 True False False 62d clusteroperator.config.openshift.io/kube-scheduler 4.16.11 True False False 62d clusteroperator.config.openshift.io/kube-storage-version-migrator 4.16.11 True False False 62d clusteroperator.config.openshift.io/machine-api 4.16.11 True False False 62d clusteroperator.config.openshift.io/machine-approver 4.16.11 True False False 62d clusteroperator.config.openshift.io/machine-config 4.16.11 True False False 62d clusteroperator.config.openshift.io/marketplace 4.16.11 True False False 62d clusteroperator.config.openshift.io/monitoring 4.16.11 True False False 62d clusteroperator.config.openshift.io/network 4.16.11 True False False 62d clusteroperator.config.openshift.io/node-tuning 4.16.11 True False False 62d clusteroperator.config.openshift.io/openshift-apiserver 4.16.11 True False False 62d clusteroperator.config.openshift.io/openshift-controller-manager 4.16.11 True False False 62d clusteroperator.config.openshift.io/openshift-samples 4.16.11 True False False 35d clusteroperator.config.openshift.io/operator-lifecycle-manager 4.16.11 True False False 62d clusteroperator.config.openshift.io/operator-lifecycle-manager-catalog 4.16.11 True False False 62d clusteroperator.config.openshift.io/operator-lifecycle-manager-packageserver 4.16.11 True False False 62d clusteroperator.config.openshift.io/service-ca 4.16.11 True False False 62d clusteroperator.config.openshift.io/storage 4.16.11 True False False 62d NAME STATUS ROLES AGE VERSION node/ctrl-plane-0 Ready control-plane,master,worker 62d v1.29.7 node/ctrl-plane-1 Ready control-plane,master,worker 62d v1.29.7 node/ctrl-plane-2 Ready control-plane,master,worker 62d v1.29.7 For more information, see "oc get" and "Reviewing pod status". Additional resources oc get Reviewing pod status 17.2.2.2. 
Checking pod logs Get logs from the pod so that you can review the logs for issues. Procedure List the pods by running the following command: USD oc get pod Example output NAME READY STATUS RESTARTS AGE busybox-1 1/1 Running 168 (34m ago) 7d busybox-2 1/1 Running 119 (9m20s ago) 4d23h busybox-3 1/1 Running 168 (43m ago) 7d busybox-4 1/1 Running 168 (43m ago) 7d Check pod log files by running the following command: USD oc logs -n <namespace> busybox-1 For more information, see "oc logs", "Logging", and "Inspecting pod and container logs". Additional resources oc logs Logging Inspecting pod and container logs 17.2.2.3. Describing a pod Describing a pod gives you information about that pod to help with troubleshooting. The Events section provides detailed information about the pod and the containers inside of it. Procedure Describe a pod by running the following command: USD oc describe pod -n <namespace> busybox-1 Example output Name: busybox-1 Namespace: busy Priority: 0 Service Account: default Node: worker-3/192.168.0.0 Start Time: Mon, 27 Nov 2023 14:41:25 -0500 Labels: app=busybox pod-template-hash=<hash> Annotations: k8s.ovn.org/pod-networks: ... Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Pulled 41m (x170 over 7d1h) kubelet Container image "quay.io/quay/busybox:latest" already present on machine Normal Created 41m (x170 over 7d1h) kubelet Created container busybox Normal Started 41m (x170 over 7d1h) kubelet Started container busybox For more information, see "oc describe". Additional resources oc describe 17.2.2.4. Reviewing events You can review the events in a given namespace to find potential issues. Procedure Check for events in your namespace by running the following command: USD oc get events -n <namespace> --sort-by=".metadata.creationTimestamp" 1 1 Adding the --sort-by=".metadata.creationTimestamp" flag places the most recent events at the end of the output. Optional: If the events within your specified namespace do not provide enough information, expand your query to all namespaces by running the following command: USD oc get events -A --sort-by=".metadata.creationTimestamp" 1 1 The --sort-by=".metadata.creationTimestamp" flag places the most recent events at the end of the output. To filter the results of all events from a cluster, you can use the grep command. For example, if you are looking for errors, the errors can appear in two different sections of the output: the TYPE or MESSAGE sections. With the grep command, you can search for keywords, such as error or failed . For example, search for a message that contains warning or error by running the following command: USD oc get events -A | grep -Ei "warning|error" Example output NAMESPACE LAST SEEN TYPE REASON OBJECT MESSAGE openshift 59s Warning FailedMount pod/openshift-1 MountVolume.SetUp failed for volume "v4-0-config-user-idp-0-file-data" : references non-existent secret key: test Optional: To clean up the events and see only recurring events, you can delete the events in the relevant namespace by running the following command: USD oc delete events -n <namespace> --all For more information, see "Watching cluster events". Additional resources Watching cluster events 17.2.2.5. Connecting to a pod You can directly connect to a currently running pod with the oc rsh command, which provides you with a shell on that pod. Warning In pods that run a low-latency application, latency issues can occur when you run the oc rsh command. 
Use the oc rsh command only if you cannot connect to the node by using the oc debug command.
Procedure
Connect to your pod by running the following command:
USD oc rsh -n <namespace> busybox-1
For more information, see "oc rsh" and "Accessing running pods".
Additional resources
oc rsh
Accessing running pods
17.2.2.6. Debugging a pod
In certain cases, you do not want to interact directly with a pod that is running in production. To avoid interfering with running traffic, you can use a secondary pod that is a copy of your original pod. The secondary pod uses the same components as the original pod but does not have running traffic.
Procedure
List the pods by running the following command:
USD oc get pod
Example output
NAME READY STATUS RESTARTS AGE busybox-1 1/1 Running 168 (34m ago) 7d busybox-2 1/1 Running 119 (9m20s ago) 4d23h busybox-3 1/1 Running 168 (43m ago) 7d busybox-4 1/1 Running 168 (43m ago) 7d
Debug a pod by running the following command:
USD oc debug -n <namespace> busybox-1
Example output
Starting pod/busybox-1-debug, command was: sleep 3600 Pod IP: 10.133.2.11 If you do not see a shell prompt, press Enter.
For more information, see "oc debug" and "Starting debug pods with root access".
Additional resources
oc debug
Starting debug pods with root access
17.2.2.7. Running a command on a pod
If you want to run a command or set of commands on a pod without directly logging into it, you can use the oc exec -it command. You can interact with the pod quickly to get process or output information from it. A common use case is to run the oc exec -it command inside a script so that you can run the same command on multiple pods in a replica set or deployment.
Warning
In pods that run a low-latency application, the oc exec command can cause latency issues.
Procedure
To run a command on a pod without logging into it, run the following command:
USD oc exec -it <pod> -- <command>
For more information, see "oc exec" and "Executing remote commands in containers".
Additional resources
oc exec
Executing remote commands in containers
17.2.3. Cluster maintenance
In telco networks, you must pay more attention to certain configurations due to the nature of bare-metal deployments. You can troubleshoot more effectively by completing these tasks:
Monitor for failed or failing hardware components
Periodically check the status of the cluster Operators
Note
For hardware monitoring, contact your hardware vendor to find the appropriate logging tool for your specific hardware.
17.2.3.1. Checking cluster Operators
Periodically check the status of your cluster Operators to find issues early.
Procedure
Check the status of the cluster Operators by running the following command:
USD oc get co
17.2.3.2. Watching for failed pods
To reduce troubleshooting time, regularly monitor for failed pods in your cluster.
Procedure
To watch for failed pods, run the following command:
USD oc get po -A | grep -Eiv 'complete|running'
17.2.4. Security
Implementing a robust cluster security profile is important for building resilient telco networks.
17.2.4.1. Authentication
Determine which identity providers are configured in your cluster. For more information about supported identity providers, see "Supported identity providers" in Authentication and authorization. After you know which providers are configured, you can inspect the openshift-authentication namespace to determine if there are potential issues.
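To confirm which identity providers are configured before you inspect the namespace, you can read the cluster OAuth resource. This is a hedged sketch that only lists the provider names and types defined under .spec.identityProviders:
# List the name and type of each configured identity provider.
oc get oauth cluster -o jsonpath='{range .spec.identityProviders[*]}{.name}{"\t"}{.type}{"\n"}{end}'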
Procedure Check the events in the openshift-authentication namespace by running the following command: USD oc get events -n openshift-authentication --sort-by='.metadata.creationTimestamp' Check the pods in the openshift-authentication namespace by running the following command: USD oc get pod -n openshift-authentication Optional: If you need more information, check the logs of one of the running pods by running the following command: USD oc logs -n openshift-authentication <pod_name> Additional resources Supported identity providers 17.2.5. Certificate maintenance Certificate maintenance is required for continuous cluster authentication. As a cluster administrator, you must manually renew certain certificates, while others are automatically renewed by the cluster. Learn about certificates in OpenShift Container Platform and how to maintain them by using the following resources: Which OpenShift certificates do rotate automatically and which do not in Openshift 4.x? Checking etcd certificate expiry in OpenShift 4 17.2.5.1. Certificates manually managed by the administrator The following certificates must be renewed by a cluster administrator: Proxy certificates User-provisioned certificates for the API server 17.2.5.1.1. Managing proxy certificates Proxy certificates allow users to specify one or more custom certificate authority (CA) certificates that are used by platform components when making egress connections. Note Certain CAs set expiration dates and you might need to renew these certificates every two years. If you did not originally set the requested certificates, you can determine the certificate expiration in several ways. Most Cloud-native Network Functions (CNFs) use certificates that are not specifically designed for browser-based connectivity. Therefore, you need to pull the certificate from the ConfigMap object of your deployment. Procedure To get the expiration date, run the following command against the certificate file: USD openssl x509 -enddate -noout -in <cert_file_name>.pem For more information about determining how and when to renew your proxy certificates, see "Proxy certificates" in Security and compliance . Additional resources Proxy certificates 17.2.5.1.2. User-provisioned API server certificates The API server is accessible by clients that are external to the cluster at api.<cluster_name>.<base_domain> . You might want clients to access the API server at a different hostname or without the need to distribute the cluster-managed certificate authority (CA) certificates to the clients. You must set a custom default certificate to be used by the API server when serving content. For more information, see "User-provided certificates for the API server" in Security and compliance Additional resources User-provisioned certificates for the API server 17.2.5.2. Certificates managed by the cluster You only need to check cluster-managed certificates if you detect an issue in the logs. The following certificates are automatically managed by the cluster: Service CA certificates Node certificates Bootstrap certificates etcd certificates OLM certificates Machine Config Operator certificates Monitoring and cluster logging Operator component certificates Control plane certificates Ingress certificates Additional resources Service CA certificates Node certificates Bootstrap certificates etcd certificates OLM certificates Machine Config Operator certificates Monitoring and cluster logging Operator component certificates Control plane certificates Ingress certificates 17.2.5.2.1. 
Certificates managed by etcd The etcd certificates are used for encrypted communication between etcd member peers as well as encrypted client traffic. The certificates are renewed automatically within the cluster provided that communication between all nodes and all services is current. Therefore, if your cluster might lose communication between components during a specific period of time, which is close to the end of the etcd certificate lifetime, it is recommended to renew the certificate in advance. For example, communication can be lost during an upgrade due to nodes rebooting at different times. You can manually renew etcd certificates by running the following command: USD for each in USD(oc get secret -n openshift-etcd | grep "kubernetes.io/tls" | grep -e \ "etcd-peer\|etcd-serving" | awk '{print USD1}'); do oc get secret USDeach -n openshift-etcd -o \ jsonpath="{.data.tls\.crt}" | base64 -d | openssl x509 -noout -enddate; done For more information about updating etcd certificates, see Checking etcd certificate expiry in OpenShift 4 . For more information about etcd certificates, see "etcd certificates" in Security and compliance . Additional resources etcd certificates 17.2.5.2.2. Node certificates Node certificates are self-signed certificates, which means that they are signed by the cluster and they originate from an internal certificate authority (CA) that is generated by the bootstrap process. After the cluster is installed, the cluster automatically renews the node certificates. For more information, see "Node certificates" in Security and compliance . Additional resources Node certificates 17.2.5.2.3. Service CA certificates The service-ca is an Operator that creates a self-signed certificate authority (CA) when an OpenShift Container Platform cluster is deployed. This allows user to add certificates to their deployments without manually creating them. Service CA certificates are self-signed certificates. For more information, see "Service CA certificates" in Security and compliance . Additional resources Service CA certificates 17.2.6. Machine Config Operator The Machine Config Operator provides useful information to cluster administrators and controls what is running directly on the bare-metal host. The Machine Config Operator differentiates between different groups of nodes in the cluster, allowing control plane nodes and worker nodes to run with different configurations. These groups of nodes run worker or application pods, which are called MachineConfigPool ( mcp ) groups. The same machine config is applied on all nodes or only on one MCP in the cluster. For more information about how and why to apply MCPs in a telco core cluster, see Applying MachineConfigPool labels to nodes before the update . For more information about the Machine Config Operator, see Machine Config Operator . 17.2.6.1. Purpose of the Machine Config Operator The Machine Config Operator (MCO) manages and applies configuration and updates of Red Hat Enterprise Linux CoreOS (RHCOS) and container runtime, including everything between the kernel and kubelet. Managing RHCOS is important since most telecommunications companies run on bare-metal hardware and use some sort of hardware accelerator or kernel modification. Applying machine configuration to RHCOS manually can cause problems because the MCO monitors each node and what is applied to it. You must consider these minor components and how the MCO can help you manage your clusters effectively. 
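For example, to confirm whether the MCO has finished rolling out a configuration change, you can compare the rendered machine config that each node is currently running with the one it should be running. This is a hedged sketch that relies on the standard MCO node annotations:
# Show the current and desired rendered MachineConfig and the MCO state for each node.
oc get nodes -o json | jq -r '
  .items[]
  | [.metadata.name,
     .metadata.annotations["machineconfiguration.openshift.io/currentConfig"],
     .metadata.annotations["machineconfiguration.openshift.io/desiredConfig"],
     .metadata.annotations["machineconfiguration.openshift.io/state"]]
  | @tsv'
A node whose current and desired values differ, or whose state is not Done, is still being reconciled by the MCO.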
Important You must use the MCO to perform all changes on worker or control plane nodes. Do not manually make changes to RHCOS or node files. 17.2.6.2. Applying several machine config files at the same time When you need to change the machine config for a group of nodes in the cluster, also known as machine config pools (MCPs), sometimes the changes must be applied with several different machine config files. The nodes need to restart for the machine config file to be applied. After each machine config file is applied to the cluster, all nodes restart that are affected by the machine config file. To prevent the nodes from restarting for each machine config file, you can apply all of the changes at the same time by pausing each MCP that is updated by the new machine config file. Procedure Pause the affected MCP by running the following command: USD oc patch mcp/<mcp_name> --type merge --patch '{"spec":{"paused":true}}' After you apply all machine config changes to the cluster, run the following command: USD oc patch mcp/<mcp_name> --type merge --patch '{"spec":{"paused":false}}' This allows the nodes in your MCP to reboot into the new configurations. 17.2.7. Bare-metal node maintenance You can connect to a node for general troubleshooting. However, in some cases, you need to perform troubleshooting or maintenance tasks on certain hardware components. This section discusses topics that you need to perform that hardware maintenance. 17.2.7.1. Connecting to a bare-metal node in your cluster You can connect to bare-metal cluster nodes for general maintenance tasks. Note Configuring the cluster node from the host operating system is not recommended or supported. To troubleshoot your nodes, you can do the following tasks: Retrieve logs from node Use debugging Use SSH to connect to the node Important Use SSH only if you cannot connect to the node with the oc debug command. Procedure Retrieve the logs from a node by running the following command: USD oc adm node-logs <node_name> -u crio Use debugging by running the following command: USD oc debug node/<node_name> Set /host as the root directory within the debug shell. The debug pod mounts the host's root file system in /host within the pod. By changing the root directory to /host , you can run binaries contained in the host's executable paths: # chroot /host Output You are now logged in as root on the node Optional: Use SSH to connect to the node by running the following command: USD ssh core@<node_name> 17.2.7.2. Moving applications to pods within the cluster For scheduled hardware maintenance, you need to consider how to move your application pods to other nodes within the cluster without affecting the pod workload. Procedure Mark the node as unschedulable by running the following command: USD oc adm cordon <node_name> When the node is unschedulable, no pods can be scheduled on the node. For more information, see "Working with nodes". Note When moving CNF applications, you might need to verify ahead of time that there are enough additional worker nodes in the cluster due to anti-affinity and pod disruption budget. Additional resources Working with nodes 17.2.7.3. DIMM memory replacement Dual in-line memory module (DIMM) problems sometimes only appear after a server reboots. You can check the log files for these problems. When you perform a standard reboot and the server does not start, you can see a message in the console that there is a faulty DIMM memory. 
In that case, you can acknowledge the faulty DIMM and continue rebooting if the remaining memory is sufficient. Then, you can schedule a maintenance window to replace the faulty DIMM. Sometimes, a message in the event logs indicates a bad memory module. In these cases, you can schedule the memory replacement before the server is rebooted.
Additional resources
OpenShift Container Platform storage overview
17.2.7.4. Disk replacement
If you do not have disk redundancy configured on your node through hardware or software redundant array of independent disks (RAID), you need to check the following:
Does the disk contain running pod images?
Does the disk contain persistent data for pods?
For more information, see "OpenShift Container Platform storage overview" in Storage.
17.2.7.5. Cluster network card replacement
When you replace a network card, the MAC address changes. The MAC address can be part of the DHCP or SR-IOV Operator configuration, router configuration, firewall rules, or application Cloud-native Network Function (CNF) configuration. Before you bring a node back online after replacing a network card, you must verify that these configurations are up to date.
Important
If you do not have specific procedures for MAC address changes within the network, contact your network administrator or network hardware vendor.
17.3. Observability
17.3.1. Observability in OpenShift Container Platform
OpenShift Container Platform generates a large amount of data, such as performance metrics and logs from both the platform and the workloads running on it. As an administrator, you can use various tools to collect and analyze all the available data. What follows is an outline of best practices for system engineers, architects, and administrators configuring the observability stack. Unless explicitly stated, the material in this document refers to both Edge and Core deployments.
17.3.1.1. Understanding the monitoring stack
The monitoring stack uses the following components:
Prometheus collects and analyzes metrics from OpenShift Container Platform components and from workloads, if configured to do so.
Alertmanager receives alerts from Prometheus and handles the routing, grouping, and silencing of alerts.
Thanos handles long-term storage of metrics.
Figure 17.2. OpenShift Container Platform monitoring architecture
Note
For a single-node OpenShift cluster, you should disable Alertmanager and Thanos because the cluster sends all metrics to the hub cluster for analysis and retention.
Additional resources
About OpenShift Container Platform monitoring
Core platform monitoring first steps
17.3.1.2. Key performance metrics
Depending on your system, there can be hundreds of available measurements. Here are some key metrics that you should pay attention to:
etcd response times
API response times
Pod restarts and scheduling
Resource usage
OVN health
Overall cluster operator health
A good rule to follow is that if you decide that a metric is important, there should be an alert for it.
Note
You can check the available metrics by running the following command:
USD oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -qsk http://localhost:9090/api/v1/metadata | jq '.data'
17.3.1.2.1. Example queries in PromQL
The following tables show some queries that you can explore in the metrics query browser using the OpenShift Container Platform console.
Note
The URL for the console is https://<OpenShift Console FQDN>/monitoring/query-browser.
You can get the OpenShift Console FQDN by running the following command:
USD oc get routes -n openshift-console console -o jsonpath='{.status.ingress[0].host}'
Table 17.1. Node memory & CPU usage
CPU % requests by node: sum by (node) (sum_over_time(kube_pod_container_resource_requests{resource="cpu"}[60m]))/sum by (node) (sum_over_time(kube_node_status_allocatable{resource="cpu"}[60m])) *100
Overall cluster CPU % utilization: sum by (managed_cluster) (sum_over_time(kube_pod_container_resource_requests{resource="cpu"}[60m]))/sum by (managed_cluster) (sum_over_time(kube_node_status_allocatable{resource="cpu"}[60m])) *100
Memory % requests by node: sum by (node) (sum_over_time(kube_pod_container_resource_requests{resource="memory"}[60m]))/sum by (node) (sum_over_time(kube_node_status_allocatable{resource="memory"}[60m])) *100
Overall cluster memory % utilization: (1-(sum by (managed_cluster)(avg_over_time(node_memory_MemAvailable_bytes[60m])))/sum by (managed_cluster)(avg_over_time(kube_node_status_allocatable{resource="memory"}[60m])))*100
Table 17.2. API latency by verb
GET: histogram_quantile(0.99, sum by (le,managed_cluster) (sum_over_time(apiserver_request_duration_seconds_bucket{apiserver=~"kube-apiserver|openshift-apiserver", verb="GET"}[60m])))
PATCH: histogram_quantile(0.99, sum by (le,managed_cluster) (sum_over_time(apiserver_request_duration_seconds_bucket{apiserver=~"kube-apiserver|openshift-apiserver", verb="PATCH"}[60m])))
POST: histogram_quantile(0.99, sum by (le,managed_cluster) (sum_over_time(apiserver_request_duration_seconds_bucket{apiserver=~"kube-apiserver|openshift-apiserver", verb="POST"}[60m])))
LIST: histogram_quantile(0.99, sum by (le,managed_cluster) (sum_over_time(apiserver_request_duration_seconds_bucket{apiserver=~"kube-apiserver|openshift-apiserver", verb="LIST"}[60m])))
PUT: histogram_quantile(0.99, sum by (le,managed_cluster) (sum_over_time(apiserver_request_duration_seconds_bucket{apiserver=~"kube-apiserver|openshift-apiserver", verb="PUT"}[60m])))
DELETE: histogram_quantile(0.99, sum by (le,managed_cluster) (sum_over_time(apiserver_request_duration_seconds_bucket{apiserver=~"kube-apiserver|openshift-apiserver", verb="DELETE"}[60m])))
Combined: histogram_quantile(0.99, sum by (le,managed_cluster) (sum_over_time(apiserver_request_duration_seconds_bucket{apiserver=~"(openshift-apiserver|kube-apiserver)", verb!="WATCH"}[60m])))
Table 17.3. etcd
fsync 99th percentile latency (per instance): histogram_quantile(0.99, rate(etcd_disk_wal_fsync_duration_seconds_bucket[2m]))
fsync 99th percentile latency (per cluster): sum by (managed_cluster) (histogram_quantile(0.99, rate(etcd_disk_wal_fsync_duration_seconds_bucket[60m])))
Leader elections: sum(rate(etcd_server_leader_changes_seen_total[1440m]))
Network latency: histogram_quantile(0.99, rate(etcd_network_peer_round_trip_time_seconds_bucket[5m]))
Table 17.4. Operator health
Degraded operators: sum by (managed_cluster, name) (avg_over_time(cluster_operator_conditions{condition="Degraded", name!="version"}[60m]))
Total degraded operators per cluster: sum by (managed_cluster) (avg_over_time(cluster_operator_conditions{condition="Degraded", name!="version"}[60m]))
17.3.1.2.2. Recommendations for storage of metrics
Out of the box, Prometheus does not back up saved metrics with persistent storage. If you restart the Prometheus pods, all metrics data are lost. You should configure the monitoring stack to use the back-end storage that is available on the platform.
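For example, when a suitable storage class exists on the cluster (the paragraphs that follow discuss which back end to choose), persistent storage for the platform Prometheus instances is configured through the cluster-monitoring-config ConfigMap. The storage class name, retention, and size below are assumptions for this sketch, not recommendations; if the ConfigMap already exists, merge these settings into it instead of applying over it:
# Give the platform Prometheus instances a persistent volume claim.
cat <<'EOF' | oc apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      retention: 15d
      volumeClaimTemplate:
        spec:
          storageClassName: local-sc   # assumption: a class provided by your storage layer
          resources:
            requests:
              storage: 100Gi           # assumption: size depends on retention and metric cardinality
EOF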
To meet the high IO demands of Prometheus, you should use local storage. For Telco core clusters, you can use the Local Storage Operator for persistent storage for Prometheus. Red Hat OpenShift Data Foundation (ODF), which deploys a Ceph cluster for block, file, and object storage, is also a suitable candidate for a Telco core cluster. To keep system resource requirements low on a RAN single-node OpenShift or far edge cluster, you should not provision backend storage for the monitoring stack. Such clusters forward all metrics to the hub cluster where you can provision a third-party monitoring platform. Additional resources Accessing metrics as an administrator Persistent storage using local volumes Cluster tuning reference CRs 17.3.1.3. Monitoring the edge Single-node OpenShift at the edge keeps the footprint of the platform components to a minimum. The following procedure is an example of how you can configure a single-node OpenShift node with a small monitoring footprint. Prerequisites For environments that use Red Hat Advanced Cluster Management (RHACM), you have enabled the Observability service. The hub cluster is running Red Hat OpenShift Data Foundation (ODF). Procedure Create a ConfigMap CR, and save it as monitoringConfigMap.yaml, as in the following example: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | alertmanagerMain: enabled: false telemeterClient: enabled: false prometheusK8s: retention: 24h On the single-node OpenShift, apply the ConfigMap CR by running the following command: USD oc apply -f monitoringConfigMap.yaml Create a Namespace CR, and save it as monitoringNamespace.yaml, as in the following example: apiVersion: v1 kind: Namespace metadata: name: open-cluster-management-observability On the hub cluster, apply the Namespace CR by running the following command: USD oc apply -f monitoringNamespace.yaml Create an ObjectBucketClaim CR, and save it as monitoringObjectBucketClaim.yaml, as in the following example: apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: multi-cloud-observability namespace: open-cluster-management-observability spec: storageClassName: openshift-storage.noobaa.io generateBucketName: acm-multi On the hub cluster, apply the ObjectBucketClaim CR by running the following command: USD oc apply -f monitoringObjectBucketClaim.yaml Create a Secret CR, and save it as monitoringSecret.yaml, as in the following example: apiVersion: v1 kind: Secret metadata: name: multiclusterhub-operator-pull-secret namespace: open-cluster-management-observability stringData: .dockerconfigjson: 'PULL_SECRET' On the hub cluster, apply the Secret CR by running the following command: USD oc apply -f monitoringSecret.yaml Get the keys for the NooBaa service and the backend bucket name from the hub cluster by running the following commands: USD NOOBAA_ACCESS_KEY=USD(oc get secret noobaa-admin -n openshift-storage -o json | jq -r '.data.AWS_ACCESS_KEY_ID|@base64d') USD NOOBAA_SECRET_KEY=USD(oc get secret noobaa-admin -n openshift-storage -o json | jq -r '.data.AWS_SECRET_ACCESS_KEY|@base64d') USD OBJECT_BUCKET=USD(oc get objectbucketclaim -n open-cluster-management-observability multi-cloud-observability -o json | jq -r .spec.bucketName) Create a Secret CR for bucket storage and save it as monitoringBucketSecret.yaml, as in the following example: apiVersion: v1 kind: Secret metadata: name: thanos-object-storage namespace:
open-cluster-management-observability type: Opaque stringData: thanos.yaml: | type: s3 config: bucket: USD{OBJECT_BUCKET} endpoint: s3.openshift-storage.svc insecure: true access_key: USD{NOOBAA_ACCESS_KEY} secret_key: USD{NOOBAA_SECRET_KEY} On the hub cluster, apply the Secret CR by running the following command: USD oc apply -f monitoringBucketSecret.yaml Create the MultiClusterObservability CR and save it as monitoringMultiClusterObservability.yaml , as in the following example: apiVersion: observability.open-cluster-management.io/v1beta2 kind: MultiClusterObservability metadata: name: observability spec: advanced: retentionConfig: blockDuration: 2h deleteDelay: 48h retentionInLocal: 24h retentionResolutionRaw: 3d enableDownsampling: false observabilityAddonSpec: enableMetrics: true interval: 300 storageConfig: alertmanagerStorageSize: 10Gi compactStorageSize: 100Gi metricObjectStorage: key: thanos.yaml name: thanos-object-storage receiveStorageSize: 25Gi ruleStorageSize: 10Gi storeStorageSize: 25Gi On the hub cluster, apply the MultiClusterObservability CR by running the following command: USD oc apply -f monitoringMultiClusterObservability.yaml Verification Check the routes and pods in the namespace to validate that the services have deployed on the hub cluster by running the following command: USD oc get routes,pods -n open-cluster-management-observability Example output NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD route.route.openshift.io/alertmanager alertmanager-open-cluster-management-observability.cloud.example.com /api/v2 alertmanager oauth-proxy reencrypt/Redirect None route.route.openshift.io/grafana grafana-open-cluster-management-observability.cloud.example.com grafana oauth-proxy reencrypt/Redirect None 1 route.route.openshift.io/observatorium-api observatorium-api-open-cluster-management-observability.cloud.example.com observability-observatorium-api public passthrough/None None route.route.openshift.io/rbac-query-proxy rbac-query-proxy-open-cluster-management-observability.cloud.example.com rbac-query-proxy https reencrypt/Redirect None NAME READY STATUS RESTARTS AGE pod/observability-alertmanager-0 3/3 Running 0 1d pod/observability-alertmanager-1 3/3 Running 0 1d pod/observability-alertmanager-2 3/3 Running 0 1d pod/observability-grafana-685b47bb47-dq4cw 3/3 Running 0 1d <...snip...> pod/observability-thanos-store-shard-0-0 1/1 Running 0 1d pod/observability-thanos-store-shard-1-0 1/1 Running 0 1d pod/observability-thanos-store-shard-2-0 1/1 Running 0 1d 1 A dashboard is accessible at the grafana route listed. You can use this to view metrics across all managed clusters. For more information on observability in Red Hat Advanced Cluster Management, see Observability . 17.3.1.4. Alerting OpenShift Container Platform includes a large number of alert rules, which can change from release to release. 17.3.1.4.1. Viewing default alerts Use the following procedure to review all of the alert rules in a cluster. Procedure To review all the alert rules in a cluster, you can run the following command: USD oc get cm -n openshift-monitoring prometheus-k8s-rulefiles-0 -o yaml Rules can include a description and provide a link to additional information and mitigation steps. For example, this is the rule for etcdHighFsyncDurations : - alert: etcdHighFsyncDurations annotations: description: 'etcd cluster "{{ USDlabels.job }}": 99th percentile fsync durations are {{ USDvalue }}s on etcd instance {{ USDlabels.instance }}.' 
runbook_url: https://github.com/openshift/runbooks/blob/master/alerts/cluster-etcd-operator/etcdHighFsyncDurations.md summary: etcd cluster 99th percentile fsync durations are too high. expr: | histogram_quantile(0.99, rate(etcd_disk_wal_fsync_duration_seconds_bucket{job=~".*etcd.*"}[5m])) > 1 for: 10m labels: severity: critical 17.3.1.4.2. Alert notifications You can view alerts in the OpenShift Container Platform console; however, an administrator should configure an external receiver to forward the alerts to. OpenShift Container Platform supports the following receiver types: PagerDuty: a third-party incident response platform Webhook: an arbitrary API endpoint that receives an alert via a POST request and can take any necessary action Email: sends an email to a designated address Slack: sends a notification to either a Slack channel or an individual user Additional resources Managing alerts 17.3.1.5. Workload monitoring By default, OpenShift Container Platform does not collect metrics for application workloads. You can configure a cluster to collect workload metrics. Prerequisites You have defined endpoints to gather workload metrics on the cluster. Procedure Create a ConfigMap CR and save it as monitoringConfigMap.yaml, as in the following example: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | enableUserWorkload: true 1 1 Set to true to enable workload monitoring. Apply the ConfigMap CR by running the following command: USD oc apply -f monitoringConfigMap.yaml Create a ServiceMonitor CR, and save it as monitoringServiceMonitor.yaml, as in the following example: apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: labels: app: ui name: myapp namespace: myns spec: endpoints: 1 - interval: 30s port: ui-http scheme: http path: /healthz 2 selector: matchLabels: app: ui 1 Use endpoints to define workload metrics. 2 Prometheus scrapes the path /metrics by default. You can define a custom path here. Apply the ServiceMonitor CR by running the following command: USD oc apply -f monitoringServiceMonitor.yaml Prometheus scrapes the path /metrics by default; however, you can define a custom path. It is up to the vendor of the application to expose this endpoint for scraping, with metrics that they deem relevant. 17.3.1.5.1. Creating a workload alert You can enable alerts for user workloads on a cluster. Procedure Create a ConfigMap CR, and save it as monitoringConfigMap.yaml, as in the following example: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | enableUserWorkload: true 1 # ... 1 Set to true to enable workload monitoring. Apply the ConfigMap CR by running the following command: USD oc apply -f monitoringConfigMap.yaml Create a YAML file for alerting rules, monitoringAlertRule.yaml, as in the following example: apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: myapp-alert namespace: myns spec: groups: - name: example rules: - alert: InternalErrorsAlert expr: flask_http_request_total{status="500"} > 0 # ... Apply the alert rule by running the following command: USD oc apply -f monitoringAlertRule.yaml
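As an optional verification sketch (the myns namespace comes from the examples above), you can confirm that the user workload monitoring components are running and that the alerting rule was created:
USD oc get pods -n openshift-user-workload-monitoring
USD oc get prometheusrule -n myns # myns is the example namespace used above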
Additional resources ServiceMonitor[monitoring.coreos.com/v1] Enabling monitoring for user-defined projects Managing alerting rules for user-defined projects 17.4. Security 17.4.1. Security basics Security is a critical component of telecommunications deployments on OpenShift Container Platform, particularly when running cloud-native network functions (CNFs). You can enhance security for high-bandwidth network deployments in telecommunications (telco) environments by following key security considerations. By implementing these standards and best practices, you can strengthen security in telco-specific use cases. 17.4.1.1. RBAC overview Role-based access control (RBAC) objects determine whether a user is allowed to perform a given action within a project. Cluster administrators can use cluster roles and bindings to control who has various access levels to OpenShift Container Platform itself and to all projects. Developers can use local roles and bindings to control who has access to their projects. Note that authorization is a separate step from authentication, which is more about determining the identity of who is taking the action. Authorization is managed using the following authorization objects: Rules Are sets of permitted actions on specific objects. For example, a rule can determine whether a user or service account can create pods. Each rule specifies an API resource, the resource within that API, and the allowed action. Roles Are collections of rules that define what actions users or groups can perform. You can associate or bind rules to multiple users or groups. A role can contain one or more rules that specify the actions and resources allowed for that role. Roles are categorized into the following types: Cluster roles: You can define cluster roles at the cluster level. They are not tied to a single namespace. They can apply across all namespaces or specific namespaces when you bind them to users, groups, or service accounts. Project roles: You can create project roles within a specific namespace, and they only apply to that namespace. You can assign permissions to specific users to create roles and role bindings within their namespace, ensuring they do not affect other namespaces. Bindings Are associations between users and/or groups with a role. You can create a role binding to connect the rules in a role to a specific user ID or group. This brings together the role and the user or group, defining what actions they can perform. Note You can bind more than one role to a user or group. For more information on RBAC, see "Using RBAC to define and apply permissions". Operational RBAC considerations To reduce operational overhead, it is important to manage access through groups rather than handling individual user IDs across multiple clusters. By managing groups at an organizational level, you can streamline access control and simplify administration across your organization. Additional resources Using RBAC to define and apply permissions 17.4.1.2. Service accounts overview A service account is an OpenShift Container Platform account that allows a component to directly access the API. Service accounts are API objects that exist within each project. Service accounts provide a flexible way to control API access without sharing a regular user's credentials. You can use service accounts to apply role-based access control (RBAC) to pods. By assigning service accounts to workloads, such as pods and deployments, you can grant additional permissions, such as pulling from different registries. This also allows you to assign lower privileges to service accounts, reducing the security footprint of the pods that run under them.
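For example, the following commands are an illustrative sketch only; the build-agent service account, myproject namespace, and myapp deployment are hypothetical names. They create a service account, bind the built-in edit role to it within the project, and attach it to a workload:
USD oc create serviceaccount build-agent -n myproject # hypothetical names
USD oc adm policy add-role-to-user edit -z build-agent -n myproject
USD oc set serviceaccount deployment/myapp build-agent -n myproject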
For more information about service accounts, see "Understanding and creating service accounts". Additional resources Understanding and creating service accounts 17.4.1.3. Identity provider configuration Configuring an identity provider is the first step in setting up users on the cluster. You can manage groups at the organizational level by using an identity provider. The identity provider can pull in specific user groups that are maintained at the organizational level, rather than the cluster level. This allows you to add and remove users from groups that follow your organization's established practices. Note You must set up a cron job to run frequently to pull any changes into the cluster. You can use an identity provider to manage access levels for specific groups within your organization. For example, you can perform the following actions to manage access levels: Assign the cluster-admin role to teams that require cluster-level privileges. Grant application administrators specific privileges to manage only their respective projects. Provide operational teams with view access across the cluster to enable monitoring without allowing modifications. For information about configuring an identity provider, see "Understanding identity provider configuration". Additional resources Understanding identity provider configuration 17.4.1.4. Replacing the kubeadmin user with a cluster-admin user The kubeadmin user with cluster-admin privileges is created on every cluster by default. To enhance cluster security, you can replace the kubeadmin user with a cluster-admin user and then disable or remove the kubeadmin user. Prerequisites You have created a user with cluster-admin privileges. You have installed the OpenShift CLI ( oc ). You have administrative access to a virtual vault for secure storage. Procedure Create an emergency cluster-admin user by using the htpasswd identity provider. For more information, see "About htpasswd authentication". Assign cluster-admin privileges to the new user by running the following command: USD oc adm policy add-cluster-role-to-user cluster-admin <emergency_user> Verify the emergency user access: Log in to the cluster using the new emergency user. Confirm that the user has cluster-admin privileges by running the following command: USD oc whoami Ensure the output shows the emergency user's ID. Store the password or authentication key for the emergency user securely in a virtual vault. Note Follow the best practices of your organization for securing sensitive credentials. Disable or remove the kubeadmin user to reduce security risks by running the following command: USD oc delete secrets kubeadmin -n kube-system Additional resources About htpasswd authentication 17.4.1.5. Security considerations for telco CNFs Telco workloads handle vast amounts of sensitive data and demand high reliability. A single security vulnerability can lead to broader cluster-wide compromises. With numerous components running on a single-node OpenShift cluster, each component must be secured to prevent any breach from escalating. Ensuring security across the entire infrastructure, including all components, is essential to maintaining the integrity of the telco network and avoiding vulnerabilities. The following key security features are essential for telco: Security Context Constraints (SCCs): Provide granular control over pod security in OpenShift clusters. Pod Security Admission (PSA): Kubernetes-native pod security controls. Encryption: Ensures data confidentiality in high-throughput network environments.
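As a quick, read-only sketch (the my-cnf namespace name is hypothetical), you can check which Pod Security Admission profile applies to a namespace by inspecting its labels; when they are set, they appear as pod-security.kubernetes.io/enforce, pod-security.kubernetes.io/audit, and pod-security.kubernetes.io/warn:
USD oc get namespace my-cnf --show-labels # my-cnf is a hypothetical namespace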
17.4.1.6. Advancement of pod security in Kubernetes and OpenShift Container Platform Kubernetes initially had limited pod security. When OpenShift Container Platform integrated Kubernetes, Red Hat added pod security through Security Context Constraints (SCCs). In Kubernetes version 1.3, PodSecurityPolicy (PSP) was introduced as a similar feature. PSP was deprecated in Kubernetes version 1.21 in favor of Pod Security Admission (PSA) and was removed in Kubernetes version 1.25. PSA also became available in OpenShift Container Platform version 4.11. While PSA improves pod security, it lacks some features provided by SCCs that are still necessary for telco use cases. Therefore, OpenShift Container Platform continues to support both PSA and SCCs. 17.4.1.7. Key areas for CNF deployment The cloud-native network function (CNF) deployment contains the following key areas: Core The first deployments of CNFs occurred in the core of the wireless network. Deploying CNFs in the core typically means racks of servers placed in central offices or data centers. These servers are connected to both the internet and the Radio Access Network (RAN), but they are often behind multiple security firewalls or sometimes disconnected from the internet altogether. This type of setup is called an offline or disconnected cluster. RAN After CNFs were successfully tested in the core network and found to be effective, they were considered for deployment in the Radio Access Network (RAN). Deploying CNFs in RAN requires a large number of servers (up to 100,000 in a large deployment). These servers are located near cellular towers and typically run as single-node OpenShift clusters, with the need for high scalability. 17.4.1.8. Telco-specific infrastructure Hardware requirements In telco networks, clusters are primarily built on bare-metal hardware. This means that the operating system, Red Hat Enterprise Linux CoreOS (RHCOS), is installed directly on the physical machines, without using virtual machines. This reduces network connectivity complexity, minimizes latency, and optimizes CPU usage for applications. Network requirements Telco networks require much higher bandwidth compared to standard IT networks. Telco networks commonly use dual-port 25 GbE connections or 100 GbE Network Interface Cards (NICs) to handle massive data throughput. Security is critical, requiring encrypted connections and secure endpoints to protect sensitive personal data. 17.4.1.9. Lifecycle management Upgrades are critical for security. When a vulnerability is discovered, it is patched in the latest z-stream release. The fix is then backported to each lower supported y-stream release until all supported versions are patched. Releases that are no longer supported do not receive patches. Therefore, it is important to upgrade OpenShift Container Platform clusters regularly to stay within a supported release and ensure they remain protected against vulnerabilities. For more information about lifecycle management and upgrades, see "Upgrading a telco core CNF clusters". Additional resources Upgrading a telco core CNF clusters 17.4.1.10. Evolution of Network Functions to CNFs Network Functions (NFs) began as Physical Network Functions (PNFs), which were purpose-built hardware devices operating independently. Over time, PNFs evolved into Virtual Network Functions (VNFs), which virtualized their capabilities while controlling resources like CPU, memory, storage, and network.
As technology advanced further, VNFs transitioned to cloud-native network functions (CNFs). CNFs run in lightweight, secure, and scalable containers. They enforce stringent restrictions, including non-root execution and minimal host interference, to enhance security and performance. PNFs had unrestricted root access to operate independently without interference. With the shift to VNFs, resource usage was controlled, but processes could still run as root within their virtual machines. In contrast, CNFs restrict root access and limit container capabilities to prevent interference with other containers or the host operating system. The main challenges in migrating to CNFs are as follows: Breaking down monolithic network functions into smaller, containerized processes. Adhering to cloud-native principles, such as non-root execution and isolation, while maintaining telco-grade performance and reliability. 17.4.2. Host security 17.4.2.1. Red Hat Enterprise Linux CoreOS (RHCOS) Red Hat Enterprise Linux CoreOS (RHCOS) is different from Red Hat Enterprise Linux (RHEL) in key areas. For more information, see "About RHCOS". From a telco perspective, a major distinction is the control of rpm-ostree, which is updated through the Machine Config Operator. RHCOS follows the same immutable design used for pods in OpenShift Container Platform. This ensures that the operating system remains consistent across the cluster. For information about RHCOS architecture, see "Red Hat Enterprise Linux CoreOS (RHCOS)". To manage hosts effectively while maintaining security, avoid direct access whenever possible. Instead, you can use the following methods for host management: Debug pod Direct SSH Console access Review the following RHCOS security mechanisms that are integral to maintaining host security: Linux namespaces Provide isolation for processes and resources. Each container keeps its processes and files within its own namespace. If a user escapes from the container namespace, they could gain access to the host operating system, potentially compromising security. Security-Enhanced Linux (SELinux) Enforces mandatory access controls to restrict access to files and directories by processes. It adds an extra layer of security by preventing unauthorized access to files if a process tries to break its confinement. SELinux follows the security policy of denying everything unless explicitly allowed. If a process attempts to modify or access a file without permission, SELinux denies access. For more information, see Introduction to SELinux. Linux capabilities Assign specific privileges to processes at a granular level, minimizing the need for full root permissions. For more information, see "Linux capabilities". Control groups (cgroups) Allocate and manage system resources, such as CPU and memory, for processes and containers, ensuring efficient usage. There are two versions of cgroups; as of OpenShift Container Platform 4.16, cgroup v2 is configured by default. CRI-O Serves as a lightweight container runtime that enforces security boundaries and manages container workloads. Additional resources About RHCOS Red Hat Enterprise Linux CoreOS (RHCOS) Linux capabilities 17.4.2.2. Command-line host access Direct access to a host must be restricted to avoid modifying the host or accessing pods that should not be accessed. For users who need direct access to a host, it is recommended to use an external authenticator, like SSSD with LDAP, to manage access.
This helps maintain consistency across the cluster through the Machine Config Operator. Important Do not configure direct access to the root ID on any OpenShift Container Platform cluster server. You can connect to a node in the cluster using the following methods: Using debug pod This is the recommended method to access a node. To debug or connect to a node, run the following command: USD oc debug node/<worker_node_name> After connecting to the node, run the following command to get access to the root file system: # chroot /host This gives you root access within a debug pod on the node. For more information, see "Starting debug pods with root access". Direct SSH Avoid using the root user. Instead, use the core user ID (or your own ID). To connect to the node using SSH, run the following command: USD ssh core@<worker_node_name> Important The core user ID is initially given sudo privileges within the cluster. If you cannot connect to a node using SSH, see How to connect to OpenShift Container Platform 4.x Cluster nodes using SSH bastion pod to add your SSH key to the core user. After connecting to the node using SSH, run the following command to get access to the root shell: USD sudo -i Console access Ensure that consoles are secure. Do not allow direct login with the root ID; instead, use individual IDs. Note Follow the best practices of your organization for securing console access. Additional resources Starting debug pods with root access 17.4.2.3. Linux capabilities Linux capabilities define the actions a process can perform on the host system. By default, pods are granted several capabilities unless security measures are applied. These default capabilities are as follows: CHOWN DAC_OVERRIDE FSETID FOWNER SETGID SETUID SETPCAP NET_BIND_SERVICE KILL You can modify which capabilities a pod can receive by configuring Security Context Constraints (SCCs). Important You must not assign the following capabilities to a pod: SYS_ADMIN: A powerful capability that grants elevated privileges. Allowing this capability can break security boundaries and pose a significant security risk. NET_ADMIN: Allows control over networking, like SR-IOV ports, but can be replaced with alternative solutions in modern setups. For more information about Linux capabilities, see Linux capabilities man page. 17.4.3. Security context constraints Similar to the way that RBAC resources control user access, administrators can use security context constraints (SCCs) to control permissions for pods. These permissions determine the actions that a pod can perform and what resources it can access. You can use SCCs to define a set of conditions that a pod must run with. Security context constraints allow an administrator to control the following security constraints: Whether a pod can run privileged containers with the allowPrivilegedContainer flag Whether a pod is constrained with the allowPrivilegeEscalation flag The capabilities that a container can request The use of host directories as volumes The SELinux context of the container The container user ID The use of host namespaces and networking The allocation of an FSGroup that owns the pod volumes The configuration of allowable supplemental groups Whether a container requires write access to its root file system The usage of volume types The configuration of allowable seccomp profiles Default SCCs are created during installation and when you install some Operators or other components.
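As a read-only illustration (the myapp-pod pod and myns namespace are hypothetical names), you can list the SCCs available on a cluster and check which SCC a running pod was admitted under by looking at its openshift.io/scc annotation:
USD oc get scc
USD oc describe pod myapp-pod -n myns | grep 'openshift.io/scc' # myapp-pod and myns are hypothetical names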
As a cluster administrator, you can also create your own SCCs by using the OpenShift CLI ( oc ). For information about default security context constraints, see Default security context constraints . Important Do not modify the default SCCs. Customizing the default SCCs can lead to issues when some of the platform pods deploy or OpenShift Container Platform is upgraded. Additionally, the default SCC values are reset to the defaults during some cluster upgrades, which discards all customizations to those SCCs. Instead of modifying the default SCCs, create and modify your own SCCs as needed. For detailed steps, see Creating security context constraints . You can use the following basic SCCs: restricted restricted-v2 The restricted-v2 SCC is the most restrictive SCC provided by a new installation and is used by default for authenticated users. It aligns with Pod Security Admission (PSA) restrictions and improves security, as the original restricted SCC is less restrictive. It also helps transition from the original SCCs to v2 across multiple releases. Eventually, the original SCCs get deprecated. Therefore, it is recommended to use the restricted-v2 SCC. You can examine the restricted-v2 SCC by running the following command: USD oc describe scc restricted-v2 Example output Name: restricted-v2 Priority: <none> Access: Users: <none> Groups: <none> Settings: Allow Privileged: false Allow Privilege Escalation: false Default Add Capabilities: <none> Required Drop Capabilities: ALL Allowed Capabilities: NET_BIND_SERVICE Allowed Seccomp Profiles: runtime/default Allowed Volume Types: configMap,downwardAPI,emptyDir,ephemeral,persistentVolumeClaim,projected,secret Allowed Flexvolumes: <all> Allowed Unsafe Sysctls: <none> Forbidden Sysctls: <none> Allow Host Network: false Allow Host Ports: false Allow Host PID: false Allow Host IPC: false Read Only Root Filesystem: false Run As User Strategy: MustRunAsRange UID: <none> UID Range Min: <none> UID Range Max: <none> SELinux Context Strategy: MustRunAs User: <none> Role: <none> Type: <none> Level: <none> FSGroup Strategy: MustRunAs Ranges: <none> Supplemental Groups Strategy: RunAsAny Ranges: <none> The restricted-v2 SCC explicitly denies everything except what it explicitly allows. The following settings define the allowed capabilities and security restrictions: Default add capabilities: Set to <none> . It means that no capabilities are added to a pod by default. Required drop capabilities: Set to ALL . This drops all the default Linux capabilities of a pod. Allowed capabilities: NET_BIND_SERVICE . A pod can request this capability, but it is not added by default. Allowed seccomp profiles: runtime/default . For more information, see Managing security context constraints . | [
"oc adm upgrade",
"Cluster version is 4.14.34 Upstream is unset, so the cluster will use an appropriate default. Channel: stable-4.14 (available channels: candidate-4.14, candidate-4.15, eus-4.14, eus-4.16, fast-4.14, fast-4.15, stable-4.14, stable-4.15) Recommended updates: VERSION IMAGE 4.14.37 quay.io/openshift-release-dev/ocp-release@sha256:14e6ba3975e6c73b659fa55af25084b20ab38a543772ca70e184b903db73092b 4.14.36 quay.io/openshift-release-dev/ocp-release@sha256:4bc4925e8028158e3f313aa83e59e181c94d88b4aa82a3b00202d6f354e8dfed 4.14.35 quay.io/openshift-release-dev/ocp-release@sha256:883088e3e6efa7443b0ac28cd7682c2fdbda889b576edad626769bf956ac0858",
"oc get clusterversion -o=jsonpath='{.items[*].spec}' | jq",
"{ \"channel\": \"stable-4.14\", \"clusterID\": \"01eb9a57-2bfb-4f50-9d37-dc04bd5bac75\" }",
"oc adm upgrade channel eus-4.16",
"oc get clusterversion -o=jsonpath='{.items[*].spec}' | jq",
"{ \"channel\": \"eus-4.16\", \"clusterID\": \"01eb9a57-2bfb-4f50-9d37-dc04bd5bac75\" }",
"oc adm upgrade channel fast-4.16",
"oc adm upgrade",
"Cluster version is 4.15.33 Upgradeable=False Reason: AdminAckRequired Message: Kubernetes 1.28 and therefore OpenShift 4.16 remove several APIs which require admin consideration. Please see the knowledge article https://access.redhat.com/articles/6958394 for details and instructions. Upstream is unset, so the cluster will use an appropriate default. Channel: fast-4.16 (available channels: candidate-4.15, candidate-4.16, eus-4.15, eus-4.16, fast-4.15, fast-4.16, stable-4.15, stable-4.16) Recommended updates: VERSION IMAGE 4.16.14 quay.io/openshift-release-dev/ocp-release@sha256:6618dd3c0f5 4.16.13 quay.io/openshift-release-dev/ocp-release@sha256:7a72abc3 4.16.12 quay.io/openshift-release-dev/ocp-release@sha256:1c8359fc2 4.16.11 quay.io/openshift-release-dev/ocp-release@sha256:bc9006febfe 4.16.10 quay.io/openshift-release-dev/ocp-release@sha256:dece7b61b1 4.15.36 quay.io/openshift-release-dev/ocp-release@sha256:c31a56d19 4.15.35 quay.io/openshift-release-dev/ocp-release@sha256:f21253 4.15.34 quay.io/openshift-release-dev/ocp-release@sha256:2dd69c5",
"oc adm upgrade channel stable-4.15",
"oc adm upgrade",
"Cluster version is 4.14.34 Upgradeable=False Reason: AdminAckRequired Message: Kubernetes 1.27 and therefore OpenShift 4.15 remove several APIs which require admin consideration. Please see the knowledge article https://access.redhat.com/articles/6958394 for details and instructions. Upstream is unset, so the cluster will use an appropriate default. Channel: stable-4.15 (available channels: candidate-4.14, candidate-4.15, eus-4.14, eus-4.15, fast-4.14, fast-4.15, stable-4.14, stable-4.15) Recommended updates: VERSION IMAGE 4.15.33 quay.io/openshift-release-dev/ocp-release@sha256:7142dd4b560 4.15.32 quay.io/openshift-release-dev/ocp-release@sha256:cda8ea5b13dc9 4.15.31 quay.io/openshift-release-dev/ocp-release@sha256:07cf61e67d3eeee 4.15.30 quay.io/openshift-release-dev/ocp-release@sha256:6618dd3c0f5 4.15.29 quay.io/openshift-release-dev/ocp-release@sha256:7a72abc3 4.15.28 quay.io/openshift-release-dev/ocp-release@sha256:1c8359fc2 4.15.27 quay.io/openshift-release-dev/ocp-release@sha256:bc9006febfe 4.15.26 quay.io/openshift-release-dev/ocp-release@sha256:dece7b61b1 4.14.38 quay.io/openshift-release-dev/ocp-release@sha256:c93914c62d7 4.14.37 quay.io/openshift-release-dev/ocp-release@sha256:c31a56d19 4.14.36 quay.io/openshift-release-dev/ocp-release@sha256:f21253 4.14.35 quay.io/openshift-release-dev/ocp-release@sha256:2dd69c5",
"oc get csv -A",
"NAMESPACE NAME DISPLAY VERSION REPLACES PHASE gitlab-operator-kubernetes.v0.17.2 GitLab 0.17.2 gitlab-operator-kubernetes.v0.17.1 Succeeded openshift-operator-lifecycle-manager packageserver Package Server 0.19.0 Succeeded",
"oc get mcp",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-bere83 True False False 3 3 3 0 25d worker rendered-worker-245c4f True False False 2 2 2 0 25d",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ctrl-plane-0 Ready control-plane,master 39d v1.27.15+6147456 ctrl-plane-1 Ready control-plane,master 39d v1.27.15+6147456 ctrl-plane-2 Ready control-plane,master 39d v1.27.15+6147456 worker-0 Ready worker 39d v1.27.15+6147456 worker-1 Ready worker 39d v1.27.15+6147456",
"oc label node worker-0 node-role.kubernetes.io/mcp-1=",
"oc label node worker-1 node-role.kubernetes.io/mcp-2=",
"NAME STATUS ROLES AGE VERSION ctrl-plane-0 Ready control-plane,master 39d v1.27.15+6147456 ctrl-plane-1 Ready control-plane,master 39d v1.27.15+6147456 ctrl-plane-2 Ready control-plane,master 39d v1.27.15+6147456 worker-0 Ready mcp-1,worker 39d v1.27.15+6147456 worker-1 Ready mcp-2,worker 39d v1.27.15+6147456",
"--- apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: mcp-2 spec: machineConfigSelector: matchExpressions: - { key: machineconfiguration.openshift.io/role, operator: In, values: [worker,mcp-2] } nodeSelector: matchLabels: node-role.kubernetes.io/mcp-2: \"\" --- apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: mcp-1 spec: machineConfigSelector: matchExpressions: - { key: machineconfiguration.openshift.io/role, operator: In, values: [worker,mcp-1] } nodeSelector: matchLabels: node-role.kubernetes.io/mcp-1: \"\"",
"oc apply -f mcps.yaml",
"machineconfigpool.machineconfiguration.openshift.io/mcp-2 created",
"oc get mcp",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-be3e83 True False False 3 3 3 0 25d mcp-1 rendered-mcp-1-2f4c4f False True True 1 0 0 0 10s mcp-2 rendered-mcp-2-2r4s1f False True True 1 0 0 0 10s worker rendered-worker-23fc4f False True True 0 0 0 2 25d",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-be3e83 True False False 3 3 3 0 25d mcp-1 rendered-mcp-1-2f4c4f True False False 1 1 1 0 7m33s mcp-2 rendered-mcp-2-2r4s1f True False False 1 1 1 0 51s worker rendered-worker-23fc4f True False False 0 0 0 0 25d",
"oc get pods -A | grep -E -vi 'complete|running'",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ctrl-plane-0 Ready control-plane,master 32d v1.27.15+6147456 ctrl-plane-1 Ready control-plane,master 32d v1.27.15+6147456 ctrl-plane-2 Ready control-plane,master 32d v1.27.15+6147456 worker-0 Ready mcp-1,worker 32d v1.27.15+6147456 worker-1 Ready mcp-2,worker 32d v1.27.15+6147456",
"oc get bmh -n openshift-machine-api",
"NAME STATE CONSUMER ONLINE ERROR AGE ctrl-plane-0 unmanaged cnf-58879-master-0 true 33d ctrl-plane-1 unmanaged cnf-58879-master-1 true 33d ctrl-plane-2 unmanaged cnf-58879-master-2 true 33d worker-0 unmanaged cnf-58879-worker-0-45879 true 33d worker-1 progressing cnf-58879-worker-0-dszsh false 1d 1",
"oc get co",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE authentication 4.14.34 True False False 17h baremetal 4.14.34 True False False 32d service-ca 4.14.34 True False False 32d storage 4.14.34 True False False 32d",
"oc patch mcp/mcp-1 --type merge --patch '{\"spec\":{\"paused\":true}}'",
"oc patch mcp/mcp-2 --type merge --patch '{\"spec\":{\"paused\":true}}'",
"oc get mcp -o json | jq -r '[\"MCP\",\"Paused\"], [\"---\",\"------\"], (.items[] | [(.metadata.name), (.spec.paused)]) | @tsv' | grep -v worker",
"MCP Paused --- ------ master false mcp-1 true mcp-2 true",
"oc debug --as-root node/<node_name>",
"sh-4.4# chroot /host",
"export HTTP_PROXY=http://<your_proxy.example.com>:8080",
"export HTTPS_PROXY=https://<your_proxy.example.com>:8080",
"export NO_PROXY=<example.com>",
"sh-4.4# /usr/local/bin/cluster-backup.sh /home/core/assets/backup",
"found latest kube-apiserver: /etc/kubernetes/static-pod-resources/kube-apiserver-pod-6 found latest kube-controller-manager: /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-7 found latest kube-scheduler: /etc/kubernetes/static-pod-resources/kube-scheduler-pod-6 found latest etcd: /etc/kubernetes/static-pod-resources/etcd-pod-3 ede95fe6b88b87ba86a03c15e669fb4aa5bf0991c180d3c6895ce72eaade54a1 etcdctl version: 3.4.14 API version: 3.4 {\"level\":\"info\",\"ts\":1624647639.0188997,\"caller\":\"snapshot/v3_snapshot.go:119\",\"msg\":\"created temporary db file\",\"path\":\"/home/core/assets/backup/snapshot_2021-06-25_190035.db.part\"} {\"level\":\"info\",\"ts\":\"2021-06-25T19:00:39.030Z\",\"caller\":\"clientv3/maintenance.go:200\",\"msg\":\"opened snapshot stream; downloading\"} {\"level\":\"info\",\"ts\":1624647639.0301006,\"caller\":\"snapshot/v3_snapshot.go:127\",\"msg\":\"fetching snapshot\",\"endpoint\":\"https://10.0.0.5:2379\"} {\"level\":\"info\",\"ts\":\"2021-06-25T19:00:40.215Z\",\"caller\":\"clientv3/maintenance.go:208\",\"msg\":\"completed snapshot read; closing\"} {\"level\":\"info\",\"ts\":1624647640.6032252,\"caller\":\"snapshot/v3_snapshot.go:142\",\"msg\":\"fetched snapshot\",\"endpoint\":\"https://10.0.0.5:2379\",\"size\":\"114 MB\",\"took\":1.584090459} {\"level\":\"info\",\"ts\":1624647640.6047094,\"caller\":\"snapshot/v3_snapshot.go:152\",\"msg\":\"saved\",\"path\":\"/home/core/assets/backup/snapshot_2021-06-25_190035.db\"} Snapshot saved at /home/core/assets/backup/snapshot_2021-06-25_190035.db {\"hash\":3866667823,\"revision\":31407,\"totalKey\":12828,\"totalSize\":114446336} snapshot db and kube resources are successfully saved to /home/core/assets/backup",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: etcd-backup-pvc namespace: openshift-etcd spec: accessModes: - ReadWriteOnce resources: requests: storage: 200Gi 1 volumeMode: Filesystem",
"oc apply -f etcd-backup-pvc.yaml",
"oc get pvc",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE etcd-backup-pvc Bound 51s",
"apiVersion: operator.openshift.io/v1alpha1 kind: EtcdBackup metadata: name: etcd-single-backup namespace: openshift-etcd spec: pvcName: etcd-backup-pvc 1",
"oc apply -f etcd-single-backup.yaml",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: etcd-backup-local-storage provisioner: kubernetes.io/no-provisioner volumeBindingMode: Immediate",
"oc apply -f etcd-backup-local-storage.yaml",
"apiVersion: v1 kind: PersistentVolume metadata: name: etcd-backup-pv-fs spec: capacity: storage: 100Gi 1 volumeMode: Filesystem accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Retain storageClassName: etcd-backup-local-storage local: path: /mnt nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - <example_master_node> 2",
"oc get pv",
"NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE etcd-backup-pv-fs 100Gi RWO Retain Available etcd-backup-local-storage 10s",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: etcd-backup-pvc namespace: openshift-etcd spec: accessModes: - ReadWriteOnce volumeMode: Filesystem resources: requests: storage: 10Gi 1",
"oc apply -f etcd-backup-pvc.yaml",
"apiVersion: operator.openshift.io/v1alpha1 kind: EtcdBackup metadata: name: etcd-single-backup namespace: openshift-etcd spec: pvcName: etcd-backup-pvc 1",
"oc apply -f etcd-single-backup.yaml",
"oc get co",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE authentication 4.14.34 True False False 4d22h baremetal 4.14.34 True False False 4d22h cloud-controller-manager 4.14.34 True False False 4d23h cloud-credential 4.14.34 True False False 4d23h cluster-autoscaler 4.14.34 True False False 4d22h config-operator 4.14.34 True False False 4d22h console 4.14.34 True False False 4d22h service-ca 4.14.34 True False False 4d22h storage 4.14.34 True False False 4d22h",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ctrl-plane-0 Ready control-plane,master 4d22h v1.27.15+6147456 ctrl-plane-1 Ready control-plane,master 4d22h v1.27.15+6147456 ctrl-plane-2 Ready control-plane,master 4d22h v1.27.15+6147456 worker-0 Ready mcp-1,worker 4d22h v1.27.15+6147456 worker-1 Ready mcp-2,worker 4d22h v1.27.15+6147456",
"oc get po -A | grep -E -iv 'running|complete'",
"oc -n openshift-config patch cm admin-acks --patch '{\"data\":{\"ack-<update_version_from>-kube-<kube_api_version>-api-removals-in-<update_version_to>\":\"true\"}}' --type=merge",
"oc get configmap admin-acks -n openshift-config -o json | jq .data",
"{ \"ack-4.14-kube-1.28-api-removals-in-4.15\": \"true\", \"ack-4.15-kube-1.29-api-removals-in-4.16\": \"true\" }",
"oc adm upgrade --to=4.15.33",
"Requested update to 4.15.33 1",
"watch \"oc get clusterversion; echo; oc get co | head -1; oc get co | grep 4.14; oc get co | grep 4.15; echo; oc get no; echo; oc get po -A | grep -E -iv 'running|complete'\"",
"NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.14.34 True True 4m6s Working towards 4.15.33: 111 of 873 done (12% complete), waiting on kube-apiserver NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE authentication 4.14.34 True False False 4d22h baremetal 4.14.34 True False False 4d23h cloud-controller-manager 4.14.34 True False False 4d23h cloud-credential 4.14.34 True False False 4d23h cluster-autoscaler 4.14.34 True False False 4d23h console 4.14.34 True False False 4d22h storage 4.14.34 True False False 4d23h config-operator 4.15.33 True False False 4d23h etcd 4.15.33 True False False 4d23h NAME STATUS ROLES AGE VERSION ctrl-plane-0 Ready control-plane,master 4d23h v1.27.15+6147456 ctrl-plane-1 Ready control-plane,master 4d23h v1.27.15+6147456 ctrl-plane-2 Ready control-plane,master 4d23h v1.27.15+6147456 worker-0 Ready mcp-1,worker 4d23h v1.27.15+6147456 worker-1 Ready mcp-2,worker 4d23h v1.27.15+6147456 NAMESPACE NAME READY STATUS RESTARTS AGE openshift-marketplace redhat-marketplace-rf86t 0/1 ContainerCreating 0 0s",
"NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.15.33 True False 28m Cluster version is 4.15.33 NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE authentication 4.15.33 True False False 5d baremetal 4.15.33 True False False 5d cloud-controller-manager 4.15.33 True False False 5d1h cloud-credential 4.15.33 True False False 5d1h cluster-autoscaler 4.15.33 True False False 5d config-operator 4.15.33 True False False 5d console 4.15.33 True False False 5d service-ca 4.15.33 True False False 5d storage 4.15.33 True False False 5d NAME STATUS ROLES AGE VERSION ctrl-plane-0 Ready control-plane,master 5d v1.28.13+2ca1a23 ctrl-plane-1 Ready control-plane,master 5d v1.28.13+2ca1a23 ctrl-plane-2 Ready control-plane,master 5d v1.28.13+2ca1a23 worker-0 Ready mcp-1,worker 5d v1.28.13+2ca1a23 worker-1 Ready mcp-2,worker 5d v1.28.13+2ca1a23",
"oc get installplan -A | grep -E 'APPROVED|false'",
"NAMESPACE NAME CSV APPROVAL APPROVED metallb-system install-nwjnh metallb-operator.v4.16.0-202409202304 Manual false openshift-nmstate install-5r7wr kubernetes-nmstate-operator.4.16.0-202409251605 Manual false",
"oc patch installplan -n metallb-system install-nwjnh --type merge --patch '{\"spec\":{\"approved\":true}}'",
"installplan.operators.coreos.com/install-nwjnh patched",
"oc get all -n metallb-system",
"NAME READY STATUS RESTARTS AGE pod/metallb-operator-controller-manager-69b5f884c-8bp22 0/1 ContainerCreating 0 4s pod/metallb-operator-controller-manager-77895bdb46-bqjdx 1/1 Running 0 4m1s pod/metallb-operator-webhook-server-5d9b968896-vnbhk 0/1 ContainerCreating 0 4s pod/metallb-operator-webhook-server-d76f9c6c8-57r4w 1/1 Running 0 4m1s NAME DESIRED CURRENT READY AGE replicaset.apps/metallb-operator-controller-manager-69b5f884c 1 1 0 4s replicaset.apps/metallb-operator-controller-manager-77895bdb46 1 1 1 4m1s replicaset.apps/metallb-operator-controller-manager-99b76f88 0 0 0 4m40s replicaset.apps/metallb-operator-webhook-server-5d9b968896 1 1 0 4s replicaset.apps/metallb-operator-webhook-server-6f7dbfdb88 0 0 0 4m40s replicaset.apps/metallb-operator-webhook-server-d76f9c6c8 1 1 1 4m1s",
"NAME READY STATUS RESTARTS AGE pod/metallb-operator-controller-manager-69b5f884c-8bp22 1/1 Running 0 25s pod/metallb-operator-webhook-server-5d9b968896-vnbhk 1/1 Running 0 25s NAME DESIRED CURRENT READY AGE replicaset.apps/metallb-operator-controller-manager-69b5f884c 1 1 1 25s replicaset.apps/metallb-operator-controller-manager-77895bdb46 0 0 0 4m22s replicaset.apps/metallb-operator-webhook-server-5d9b968896 1 1 1 25s replicaset.apps/metallb-operator-webhook-server-d76f9c6c8 0 0 0 4m22s",
"oc get installplan -A | grep -E 'APPROVED|false'",
"oc adm upgrade",
"Cluster version is 4.15.33 Upgradeable=False Reason: AdminAckRequired Message: Kubernetes 1.29 and therefore OpenShift 4.16 remove several APIs which require admin consideration. Please see the knowledge article https://access.redhat.com/articles/7031404 for details and instructions. Upstream is unset, so the cluster will use an appropriate default. Channel: eus-4.16 (available channels: candidate-4.15, candidate-4.16, eus-4.16, fast-4.15, fast-4.16, stable-4.15, stable-4.16) Recommended updates: VERSION IMAGE 4.16.14 quay.io/openshift-release-dev/ocp-release@sha256:0521a0f1acd2d1b77f76259cb9bae9c743c60c37d9903806a3372c1414253658 4.16.13 quay.io/openshift-release-dev/ocp-release@sha256:6078cb4ae197b5b0c526910363b8aff540343bfac62ecb1ead9e068d541da27b 4.15.34 quay.io/openshift-release-dev/ocp-release@sha256:f2e0c593f6ed81250c11d0bac94dbaf63656223477b7e8693a652f933056af6e",
"oc adm upgrade --include-not-recommended",
"Cluster version is 4.15.33 Upgradeable=False Reason: AdminAckRequired Message: Kubernetes 1.29 and therefore OpenShift 4.16 remove several APIs which require admin consideration. Please see the knowledge article https://access.redhat.com/articles/7031404 for details and instructions. Upstream is unset, so the cluster will use an appropriate default.Channel: eus-4.16 (available channels: candidate-4.15, candidate-4.16, eus-4.16, fast-4.15, fast-4.16, stable-4.15, stable-4.16) Recommended updates: VERSION IMAGE 4.16.14 quay.io/openshift-release-dev/ocp-release@sha256:0521a0f1acd2d1b77f76259cb9bae9c743c60c37d9903806a3372c1414253658 4.16.13 quay.io/openshift-release-dev/ocp-release@sha256:6078cb4ae197b5b0c526910363b8aff540343bfac62ecb1ead9e068d541da27b 4.15.34 quay.io/openshift-release-dev/ocp-release@sha256:f2e0c593f6ed81250c11d0bac94dbaf63656223477b7e8693a652f933056af6e Supported but not recommended updates: Version: 4.16.15 Image: quay.io/openshift-release-dev/ocp-release@sha256:671bc35e Recommended: Unknown Reason: EvaluationFailed Message: Exposure to AzureRegistryImagePreservation is unknown due to an evaluation failure: invalid PromQL result length must be one, but is 0 In Azure clusters, the in-cluster image registry may fail to preserve images on update. https://issues.redhat.com/browse/IR-461",
"oc -n openshift-config patch cm admin-acks --patch '{\"data\":{\"ack-4.15-kube-1.29-api-removals-in-4.16\":\"true\"}}' --type=merge",
"configmap/admin-acks patched",
"oc adm upgrade --to=4.16.14",
"Requested update to 4.16.14",
"oc adm upgrade --to=4.16.15",
"error: the update 4.16.15 is not one of the recommended updates, but is available as a conditional update. To accept the Recommended=Unknown risk and to proceed with update use --allow-not-recommended. Reason: EvaluationFailed Message: Exposure to AzureRegistryImagePreservation is unknown due to an evaluation failure: invalid PromQL result length must be one, but is 0 In Azure clusters, the in-cluster image registry may fail to preserve images on update. https://issues.redhat.com/browse/IR-461",
"oc adm upgrade --to=4.16.15 --allow-not-recommended",
"warning: with --allow-not-recommended you have accepted the risks with 4.14.11 and bypassed Recommended=Unknown EvaluationFailed: Exposure to AzureRegistryImagePreservation is unknown due to an evaluation failure: invalid PromQL result length must be one, but is 0 In Azure clusters, the in-cluster image registry may fail to preserve images on update. https://issues.redhat.com/browse/IR-461 Requested update to 4.16.15",
"watch \"oc get clusterversion; echo; oc get co | head -1; oc get co | grep 4.15; oc get co | grep 4.16; echo; oc get no; echo; oc get po -A | grep -E -iv 'running|complete'\"",
"NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.15.33 True True 10m Working towards 4.16.14: 132 of 903 done (14% complete), waiting on kube-controller-manager, kube-scheduler NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE authentication 4.15.33 True False False 5d3h baremetal 4.15.33 True False False 5d4h cloud-controller-manager 4.15.33 True False False 5d4h cloud-credential 4.15.33 True False False 5d4h cluster-autoscaler 4.15.33 True False False 5d4h console 4.15.33 True False False 5d3h config-operator 4.16.14 True False False 5d4h etcd 4.16.14 True False False 5d4h kube-apiserver 4.16.14 True True False 5d4h NodeInstallerProgressing: 1 node is at revision 15; 2 nodes are at revision 17 NAME STATUS ROLES AGE VERSION ctrl-plane-0 Ready control-plane,master 5d4h v1.28.13+2ca1a23 ctrl-plane-1 Ready control-plane,master 5d4h v1.28.13+2ca1a23 ctrl-plane-2 Ready control-plane,master 5d4h v1.28.13+2ca1a23 worker-0 Ready mcp-1,worker 5d4h v1.27.15+6147456 worker-1 Ready mcp-2,worker 5d4h v1.27.15+6147456 NAMESPACE NAME READY STATUS RESTARTS AGE openshift-kube-apiserver kube-apiserver-ctrl-plane-0 0/5 Pending 0 <invalid>",
"NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.16.14 True False 123m Cluster version is 4.16.14 NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE authentication 4.16.14 True False False 5d6h baremetal 4.16.14 True False False 5d7h cloud-controller-manager 4.16.14 True False False 5d7h cloud-credential 4.16.14 True False False 5d7h cluster-autoscaler 4.16.14 True False False 5d7h config-operator 4.16.14 True False False 5d7h console 4.16.14 True False False 5d6h # operator-lifecycle-manager-packageserver 4.16.14 True False False 5d7h service-ca 4.16.14 True False False 5d7h storage 4.16.14 True False False 5d7h NAME STATUS ROLES AGE VERSION ctrl-plane-0 Ready control-plane,master 5d7h v1.29.8+f10c92d ctrl-plane-1 Ready control-plane,master 5d7h v1.29.8+f10c92d ctrl-plane-2 Ready control-plane,master 5d7h v1.29.8+f10c92d worker-0 Ready mcp-1,worker 5d7h v1.27.15+6147456 worker-1 Ready mcp-2,worker 5d7h v1.27.15+6147456",
"watch \"oc get clusterversion; echo; oc get co | head -1; oc get co | grep 4.14; oc get co | grep 4.15; echo; oc get no; echo; oc get po -A | grep -E -iv 'running|complete'\"",
"oc get installplan -A | grep -E 'APPROVED|false'",
"oc patch installplan -n metallb-system install-nwjnh --type merge --patch '{\"spec\":{\"approved\":true}}'",
"oc get all -n metallb-system",
"oc get mcp",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-c9a52144456dbff9c9af9c5a37d1b614 True False False 3 3 3 0 36d mcp-1 rendered-mcp-1-07fe50b9ad51fae43ed212e84e1dcc8e False False False 1 0 0 0 47h mcp-2 rendered-mcp-2-07fe50b9ad51fae43ed212e84e1dcc8e False False False 1 0 0 0 47h worker rendered-worker-f1ab7b9a768e1b0ac9290a18817f60f0 True False False 0 0 0 0 36d",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ctrl-plane-0 Ready control-plane,master 5d8h v1.29.8+f10c92d ctrl-plane-1 Ready control-plane,master 5d8h v1.29.8+f10c92d ctrl-plane-2 Ready control-plane,master 5d8h v1.29.8+f10c92d worker-0 Ready mcp-1,worker 5d8h v1.27.15+6147456 worker-1 Ready mcp-2,worker 5d8h v1.27.15+6147456",
"oc get mcp -o json | jq -r '[\"MCP\",\"Paused\"], [\"---\",\"------\"], (.items[] | [(.metadata.name), (.spec.paused)]) | @tsv' | grep -v worker",
"MCP Paused --- ------ master false mcp-1 true mcp-2 true",
"oc patch mcp/mcp-1 --type merge --patch '{\"spec\":{\"paused\":false}}'",
"machineconfigpool.machineconfiguration.openshift.io/mcp-1 patched",
"oc get mcp -o json | jq -r '[\"MCP\",\"Paused\"], [\"---\",\"------\"], (.items[] | [(.metadata.name), (.spec.paused)]) | @tsv' | grep -v worker",
"MCP Paused --- ------ master false mcp-1 false mcp-2 true",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ctrl-plane-0 Ready control-plane,master 5d8h v1.29.8+f10c92d ctrl-plane-1 Ready control-plane,master 5d8h v1.29.8+f10c92d ctrl-plane-2 Ready control-plane,master 5d8h v1.29.8+f10c92d worker-0 Ready mcp-1,worker 5d8h v1.29.8+f10c92d worker-1 NotReady,SchedulingDisabled mcp-2,worker 5d8h v1.27.15+6147456",
"oc get clusterversion",
"NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.16.14 True False 4h38m Cluster version is 4.16.14",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ctrl-plane-0 Ready control-plane,master 5d9h v1.29.8+f10c92d ctrl-plane-1 Ready control-plane,master 5d9h v1.29.8+f10c92d ctrl-plane-2 Ready control-plane,master 5d9h v1.29.8+f10c92d worker-0 Ready mcp-1,worker 5d9h v1.29.8+f10c92d worker-1 Ready mcp-2,worker 5d9h v1.29.8+f10c92d",
"oc get mcp -o json | jq -r '[\"MCP\",\"Paused\"], [\"---\",\"------\"], (.items[] | [(.metadata.name), (.spec.paused)]) | @tsv' | grep -v worker",
"MCP Paused --- ------ master false mcp-1 false mcp-2 false",
"oc get co",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE authentication 4.16.14 True False False 5d9h baremetal 4.16.14 True False False 5d9h cloud-controller-manager 4.16.14 True False False 5d10h cloud-credential 4.16.14 True False False 5d10h cluster-autoscaler 4.16.14 True False False 5d9h config-operator 4.16.14 True False False 5d9h console 4.16.14 True False False 5d9h control-plane-machine-set 4.16.14 True False False 5d9h csi-snapshot-controller 4.16.14 True False False 5d9h dns 4.16.14 True False False 5d9h etcd 4.16.14 True False False 5d9h image-registry 4.16.14 True False False 85m ingress 4.16.14 True False False 5d9h insights 4.16.14 True False False 5d9h kube-apiserver 4.16.14 True False False 5d9h kube-controller-manager 4.16.14 True False False 5d9h kube-scheduler 4.16.14 True False False 5d9h kube-storage-version-migrator 4.16.14 True False False 4h48m machine-api 4.16.14 True False False 5d9h machine-approver 4.16.14 True False False 5d9h machine-config 4.16.14 True False False 5d9h marketplace 4.16.14 True False False 5d9h monitoring 4.16.14 True False False 5d9h network 4.16.14 True False False 5d9h node-tuning 4.16.14 True False False 5d7h openshift-apiserver 4.16.14 True False False 5d9h openshift-controller-manager 4.16.14 True False False 5d9h openshift-samples 4.16.14 True False False 5h24m operator-lifecycle-manager 4.16.14 True False False 5d9h operator-lifecycle-manager-catalog 4.16.14 True False False 5d9h operator-lifecycle-manager-packageserver 4.16.14 True False False 5d9h service-ca 4.16.14 True False False 5d9h storage 4.16.14 True False False 5d9h",
"oc get po -A | grep -E -iv 'complete|running'",
"oc -n openshift-config patch cm admin-acks --patch '{\"data\":{\"ack-<update_version_from>-kube-<kube_api_version>-api-removals-in-<update_version_to>\":\"true\"}}' --type=merge",
"oc get configmap admin-acks -n openshift-config -o json | jq .data",
"{ \"ack-4.14-kube-1.28-api-removals-in-4.15\": \"true\", \"ack-4.15-kube-1.29-api-removals-in-4.16\": \"true\" }",
"oc adm upgrade --to=4.15.33",
"Requested update to 4.15.33 1",
"watch \"oc get clusterversion; echo; oc get co | head -1; oc get co | grep 4.14; oc get co | grep 4.15; echo; oc get no; echo; oc get po -A | grep -E -iv 'running|complete'\"",
"NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.14.34 True True 4m6s Working towards 4.15.33: 111 of 873 done (12% complete), waiting on kube-apiserver NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE authentication 4.14.34 True False False 4d22h baremetal 4.14.34 True False False 4d23h cloud-controller-manager 4.14.34 True False False 4d23h cloud-credential 4.14.34 True False False 4d23h cluster-autoscaler 4.14.34 True False False 4d23h console 4.14.34 True False False 4d22h storage 4.14.34 True False False 4d23h config-operator 4.15.33 True False False 4d23h etcd 4.15.33 True False False 4d23h NAME STATUS ROLES AGE VERSION ctrl-plane-0 Ready control-plane,master 4d23h v1.27.15+6147456 ctrl-plane-1 Ready control-plane,master 4d23h v1.27.15+6147456 ctrl-plane-2 Ready control-plane,master 4d23h v1.27.15+6147456 worker-0 Ready mcp-1,worker 4d23h v1.27.15+6147456 worker-1 Ready mcp-2,worker 4d23h v1.27.15+6147456 NAMESPACE NAME READY STATUS RESTARTS AGE openshift-marketplace redhat-marketplace-rf86t 0/1 ContainerCreating 0 0s",
"NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.15.33 True False 28m Cluster version is 4.15.33 NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE authentication 4.15.33 True False False 5d baremetal 4.15.33 True False False 5d cloud-controller-manager 4.15.33 True False False 5d1h cloud-credential 4.15.33 True False False 5d1h cluster-autoscaler 4.15.33 True False False 5d config-operator 4.15.33 True False False 5d console 4.15.33 True False False 5d service-ca 4.15.33 True False False 5d storage 4.15.33 True False False 5d NAME STATUS ROLES AGE VERSION ctrl-plane-0 Ready control-plane,master 5d v1.28.13+2ca1a23 ctrl-plane-1 Ready control-plane,master 5d v1.28.13+2ca1a23 ctrl-plane-2 Ready control-plane,master 5d v1.28.13+2ca1a23 worker-0 Ready mcp-1,worker 5d v1.28.13+2ca1a23 worker-1 Ready mcp-2,worker 5d v1.28.13+2ca1a23",
"oc get installplan -A | grep -E 'APPROVED|false'",
"NAMESPACE NAME CSV APPROVAL APPROVED metallb-system install-nwjnh metallb-operator.v4.16.0-202409202304 Manual false openshift-nmstate install-5r7wr kubernetes-nmstate-operator.4.16.0-202409251605 Manual false",
"oc patch installplan -n metallb-system install-nwjnh --type merge --patch '{\"spec\":{\"approved\":true}}'",
"installplan.operators.coreos.com/install-nwjnh patched",
"oc get all -n metallb-system",
"NAME READY STATUS RESTARTS AGE pod/metallb-operator-controller-manager-69b5f884c-8bp22 0/1 ContainerCreating 0 4s pod/metallb-operator-controller-manager-77895bdb46-bqjdx 1/1 Running 0 4m1s pod/metallb-operator-webhook-server-5d9b968896-vnbhk 0/1 ContainerCreating 0 4s pod/metallb-operator-webhook-server-d76f9c6c8-57r4w 1/1 Running 0 4m1s NAME DESIRED CURRENT READY AGE replicaset.apps/metallb-operator-controller-manager-69b5f884c 1 1 0 4s replicaset.apps/metallb-operator-controller-manager-77895bdb46 1 1 1 4m1s replicaset.apps/metallb-operator-controller-manager-99b76f88 0 0 0 4m40s replicaset.apps/metallb-operator-webhook-server-5d9b968896 1 1 0 4s replicaset.apps/metallb-operator-webhook-server-6f7dbfdb88 0 0 0 4m40s replicaset.apps/metallb-operator-webhook-server-d76f9c6c8 1 1 1 4m1s",
"NAME READY STATUS RESTARTS AGE pod/metallb-operator-controller-manager-69b5f884c-8bp22 1/1 Running 0 25s pod/metallb-operator-webhook-server-5d9b968896-vnbhk 1/1 Running 0 25s NAME DESIRED CURRENT READY AGE replicaset.apps/metallb-operator-controller-manager-69b5f884c 1 1 1 25s replicaset.apps/metallb-operator-controller-manager-77895bdb46 0 0 0 4m22s replicaset.apps/metallb-operator-webhook-server-5d9b968896 1 1 1 25s replicaset.apps/metallb-operator-webhook-server-d76f9c6c8 0 0 0 4m22s",
"oc get installplan -A | grep -E 'APPROVED|false'",
"oc get mcp",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-c9a52144456dbff9c9af9c5a37d1b614 True False False 3 3 3 0 36d mcp-1 rendered-mcp-1-07fe50b9ad51fae43ed212e84e1dcc8e False False False 1 0 0 0 47h mcp-2 rendered-mcp-2-07fe50b9ad51fae43ed212e84e1dcc8e False False False 1 0 0 0 47h worker rendered-worker-f1ab7b9a768e1b0ac9290a18817f60f0 True False False 0 0 0 0 36d",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ctrl-plane-0 Ready control-plane,master 5d8h v1.29.8+f10c92d ctrl-plane-1 Ready control-plane,master 5d8h v1.29.8+f10c92d ctrl-plane-2 Ready control-plane,master 5d8h v1.29.8+f10c92d worker-0 Ready mcp-1,worker 5d8h v1.27.15+6147456 worker-1 Ready mcp-2,worker 5d8h v1.27.15+6147456",
"oc get mcp -o json | jq -r '[\"MCP\",\"Paused\"], [\"---\",\"------\"], (.items[] | [(.metadata.name), (.spec.paused)]) | @tsv' | grep -v worker",
"MCP Paused --- ------ master false mcp-1 true mcp-2 true",
"oc patch mcp/mcp-1 --type merge --patch '{\"spec\":{\"paused\":false}}'",
"machineconfigpool.machineconfiguration.openshift.io/mcp-1 patched",
"oc get mcp -o json | jq -r '[\"MCP\",\"Paused\"], [\"---\",\"------\"], (.items[] | [(.metadata.name), (.spec.paused)]) | @tsv' | grep -v worker",
"MCP Paused --- ------ master false mcp-1 false mcp-2 true",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ctrl-plane-0 Ready control-plane,master 5d8h v1.29.8+f10c92d ctrl-plane-1 Ready control-plane,master 5d8h v1.29.8+f10c92d ctrl-plane-2 Ready control-plane,master 5d8h v1.29.8+f10c92d worker-0 Ready mcp-1,worker 5d8h v1.29.8+f10c92d worker-1 NotReady,SchedulingDisabled mcp-2,worker 5d8h v1.27.15+6147456",
"oc get clusterversion",
"NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.16.14 True False 4h38m Cluster version is 4.16.14",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ctrl-plane-0 Ready control-plane,master 5d9h v1.29.8+f10c92d ctrl-plane-1 Ready control-plane,master 5d9h v1.29.8+f10c92d ctrl-plane-2 Ready control-plane,master 5d9h v1.29.8+f10c92d worker-0 Ready mcp-1,worker 5d9h v1.29.8+f10c92d worker-1 Ready mcp-2,worker 5d9h v1.29.8+f10c92d",
"oc get mcp -o json | jq -r '[\"MCP\",\"Paused\"], [\"---\",\"------\"], (.items[] | [(.metadata.name), (.spec.paused)]) | @tsv' | grep -v worker",
"MCP Paused --- ------ master false mcp-1 false mcp-2 false",
"oc get co",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE authentication 4.16.14 True False False 5d9h baremetal 4.16.14 True False False 5d9h cloud-controller-manager 4.16.14 True False False 5d10h cloud-credential 4.16.14 True False False 5d10h cluster-autoscaler 4.16.14 True False False 5d9h config-operator 4.16.14 True False False 5d9h console 4.16.14 True False False 5d9h control-plane-machine-set 4.16.14 True False False 5d9h csi-snapshot-controller 4.16.14 True False False 5d9h dns 4.16.14 True False False 5d9h etcd 4.16.14 True False False 5d9h image-registry 4.16.14 True False False 85m ingress 4.16.14 True False False 5d9h insights 4.16.14 True False False 5d9h kube-apiserver 4.16.14 True False False 5d9h kube-controller-manager 4.16.14 True False False 5d9h kube-scheduler 4.16.14 True False False 5d9h kube-storage-version-migrator 4.16.14 True False False 4h48m machine-api 4.16.14 True False False 5d9h machine-approver 4.16.14 True False False 5d9h machine-config 4.16.14 True False False 5d9h marketplace 4.16.14 True False False 5d9h monitoring 4.16.14 True False False 5d9h network 4.16.14 True False False 5d9h node-tuning 4.16.14 True False False 5d7h openshift-apiserver 4.16.14 True False False 5d9h openshift-controller-manager 4.16.14 True False False 5d9h openshift-samples 4.16.14 True False False 5h24m operator-lifecycle-manager 4.16.14 True False False 5d9h operator-lifecycle-manager-catalog 4.16.14 True False False 5d9h operator-lifecycle-manager-packageserver 4.16.14 True False False 5d9h service-ca 4.16.14 True False False 5d9h storage 4.16.14 True False False 5d9h",
"oc get po -A | grep -E -iv 'complete|running'",
"oc adm upgrade --to=4.15.33",
"Requested update to 4.15.33 1",
"oc get mcp",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-c9a52144456dbff9c9af9c5a37d1b614 True False False 3 3 3 0 36d mcp-1 rendered-mcp-1-07fe50b9ad51fae43ed212e84e1dcc8e False False False 1 0 0 0 47h mcp-2 rendered-mcp-2-07fe50b9ad51fae43ed212e84e1dcc8e False False False 1 0 0 0 47h worker rendered-worker-f1ab7b9a768e1b0ac9290a18817f60f0 True False False 0 0 0 0 36d",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ctrl-plane-0 Ready control-plane,master 5d8h v1.29.8+f10c92d ctrl-plane-1 Ready control-plane,master 5d8h v1.29.8+f10c92d ctrl-plane-2 Ready control-plane,master 5d8h v1.29.8+f10c92d worker-0 Ready mcp-1,worker 5d8h v1.27.15+6147456 worker-1 Ready mcp-2,worker 5d8h v1.27.15+6147456",
"oc get mcp -o json | jq -r '[\"MCP\",\"Paused\"], [\"---\",\"------\"], (.items[] | [(.metadata.name), (.spec.paused)]) | @tsv' | grep -v worker",
"MCP Paused --- ------ master false mcp-1 true mcp-2 true",
"oc patch mcp/mcp-1 --type merge --patch '{\"spec\":{\"paused\":false}}'",
"machineconfigpool.machineconfiguration.openshift.io/mcp-1 patched",
"oc get mcp -o json | jq -r '[\"MCP\",\"Paused\"], [\"---\",\"------\"], (.items[] | [(.metadata.name), (.spec.paused)]) | @tsv' | grep -v worker",
"MCP Paused --- ------ master false mcp-1 false mcp-2 true",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ctrl-plane-0 Ready control-plane,master 5d8h v1.29.8+f10c92d ctrl-plane-1 Ready control-plane,master 5d8h v1.29.8+f10c92d ctrl-plane-2 Ready control-plane,master 5d8h v1.29.8+f10c92d worker-0 Ready mcp-1,worker 5d8h v1.29.8+f10c92d worker-1 NotReady,SchedulingDisabled mcp-2,worker 5d8h v1.27.15+6147456",
"oc get clusterversion",
"NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.16.14 True False 4h38m Cluster version is 4.16.14",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ctrl-plane-0 Ready control-plane,master 5d9h v1.29.8+f10c92d ctrl-plane-1 Ready control-plane,master 5d9h v1.29.8+f10c92d ctrl-plane-2 Ready control-plane,master 5d9h v1.29.8+f10c92d worker-0 Ready mcp-1,worker 5d9h v1.29.8+f10c92d worker-1 Ready mcp-2,worker 5d9h v1.29.8+f10c92d",
"oc get mcp -o json | jq -r '[\"MCP\",\"Paused\"], [\"---\",\"------\"], (.items[] | [(.metadata.name), (.spec.paused)]) | @tsv' | grep -v worker",
"MCP Paused --- ------ master false mcp-1 false mcp-2 false",
"oc get co",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE authentication 4.16.14 True False False 5d9h baremetal 4.16.14 True False False 5d9h cloud-controller-manager 4.16.14 True False False 5d10h cloud-credential 4.16.14 True False False 5d10h cluster-autoscaler 4.16.14 True False False 5d9h config-operator 4.16.14 True False False 5d9h console 4.16.14 True False False 5d9h control-plane-machine-set 4.16.14 True False False 5d9h csi-snapshot-controller 4.16.14 True False False 5d9h dns 4.16.14 True False False 5d9h etcd 4.16.14 True False False 5d9h image-registry 4.16.14 True False False 85m ingress 4.16.14 True False False 5d9h insights 4.16.14 True False False 5d9h kube-apiserver 4.16.14 True False False 5d9h kube-controller-manager 4.16.14 True False False 5d9h kube-scheduler 4.16.14 True False False 5d9h kube-storage-version-migrator 4.16.14 True False False 4h48m machine-api 4.16.14 True False False 5d9h machine-approver 4.16.14 True False False 5d9h machine-config 4.16.14 True False False 5d9h marketplace 4.16.14 True False False 5d9h monitoring 4.16.14 True False False 5d9h network 4.16.14 True False False 5d9h node-tuning 4.16.14 True False False 5d7h openshift-apiserver 4.16.14 True False False 5d9h openshift-controller-manager 4.16.14 True False False 5d9h openshift-samples 4.16.14 True False False 5h24m operator-lifecycle-manager 4.16.14 True False False 5d9h operator-lifecycle-manager-catalog 4.16.14 True False False 5d9h operator-lifecycle-manager-packageserver 4.16.14 True False False 5d9h service-ca 4.16.14 True False False 5d9h storage 4.16.14 True False False 5d9h",
"oc get po -A | grep -E -iv 'complete|running'",
"oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{\"\\n\"}'",
"oc project <project_name>",
"oc get clusterversion,clusteroperator,node",
"NAME VERSION AVAILABLE PROGRESSING SINCE STATUS clusterversion.config.openshift.io/version 4.16.11 True False 62d Cluster version is 4.16.11 NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE clusteroperator.config.openshift.io/authentication 4.16.11 True False False 62d clusteroperator.config.openshift.io/baremetal 4.16.11 True False False 62d clusteroperator.config.openshift.io/cloud-controller-manager 4.16.11 True False False 62d clusteroperator.config.openshift.io/cloud-credential 4.16.11 True False False 62d clusteroperator.config.openshift.io/cluster-autoscaler 4.16.11 True False False 62d clusteroperator.config.openshift.io/config-operator 4.16.11 True False False 62d clusteroperator.config.openshift.io/console 4.16.11 True False False 62d clusteroperator.config.openshift.io/control-plane-machine-set 4.16.11 True False False 62d clusteroperator.config.openshift.io/csi-snapshot-controller 4.16.11 True False False 62d clusteroperator.config.openshift.io/dns 4.16.11 True False False 62d clusteroperator.config.openshift.io/etcd 4.16.11 True False False 62d clusteroperator.config.openshift.io/image-registry 4.16.11 True False False 55d clusteroperator.config.openshift.io/ingress 4.16.11 True False False 62d clusteroperator.config.openshift.io/insights 4.16.11 True False False 62d clusteroperator.config.openshift.io/kube-apiserver 4.16.11 True False False 62d clusteroperator.config.openshift.io/kube-controller-manager 4.16.11 True False False 62d clusteroperator.config.openshift.io/kube-scheduler 4.16.11 True False False 62d clusteroperator.config.openshift.io/kube-storage-version-migrator 4.16.11 True False False 62d clusteroperator.config.openshift.io/machine-api 4.16.11 True False False 62d clusteroperator.config.openshift.io/machine-approver 4.16.11 True False False 62d clusteroperator.config.openshift.io/machine-config 4.16.11 True False False 62d clusteroperator.config.openshift.io/marketplace 4.16.11 True False False 62d clusteroperator.config.openshift.io/monitoring 4.16.11 True False False 62d clusteroperator.config.openshift.io/network 4.16.11 True False False 62d clusteroperator.config.openshift.io/node-tuning 4.16.11 True False False 62d clusteroperator.config.openshift.io/openshift-apiserver 4.16.11 True False False 62d clusteroperator.config.openshift.io/openshift-controller-manager 4.16.11 True False False 62d clusteroperator.config.openshift.io/openshift-samples 4.16.11 True False False 35d clusteroperator.config.openshift.io/operator-lifecycle-manager 4.16.11 True False False 62d clusteroperator.config.openshift.io/operator-lifecycle-manager-catalog 4.16.11 True False False 62d clusteroperator.config.openshift.io/operator-lifecycle-manager-packageserver 4.16.11 True False False 62d clusteroperator.config.openshift.io/service-ca 4.16.11 True False False 62d clusteroperator.config.openshift.io/storage 4.16.11 True False False 62d NAME STATUS ROLES AGE VERSION node/ctrl-plane-0 Ready control-plane,master,worker 62d v1.29.7 node/ctrl-plane-1 Ready control-plane,master,worker 62d v1.29.7 node/ctrl-plane-2 Ready control-plane,master,worker 62d v1.29.7",
"oc get pod",
"NAME READY STATUS RESTARTS AGE busybox-1 1/1 Running 168 (34m ago) 7d busybox-2 1/1 Running 119 (9m20s ago) 4d23h busybox-3 1/1 Running 168 (43m ago) 7d busybox-4 1/1 Running 168 (43m ago) 7d",
"oc logs -n <namespace> busybox-1",
"oc describe pod -n <namespace> busybox-1",
"Name: busybox-1 Namespace: busy Priority: 0 Service Account: default Node: worker-3/192.168.0.0 Start Time: Mon, 27 Nov 2023 14:41:25 -0500 Labels: app=busybox pod-template-hash=<hash> Annotations: k8s.ovn.org/pod-networks: ... Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Pulled 41m (x170 over 7d1h) kubelet Container image \"quay.io/quay/busybox:latest\" already present on machine Normal Created 41m (x170 over 7d1h) kubelet Created container busybox Normal Started 41m (x170 over 7d1h) kubelet Started container busybox",
"oc get events -n <namespace> --sort-by=\".metadata.creationTimestamp\" 1",
"oc get events -A --sort-by=\".metadata.creationTimestamp\" 1",
"oc get events -A | grep -Ei \"warning|error\"",
"NAMESPACE LAST SEEN TYPE REASON OBJECT MESSAGE openshift 59s Warning FailedMount pod/openshift-1 MountVolume.SetUp failed for volume \"v4-0-config-user-idp-0-file-data\" : references non-existent secret key: test",
"oc delete events -n <namespace> --all",
"oc rsh -n <namespace> busybox-1",
"oc get pod",
"NAME READY STATUS RESTARTS AGE busybox-1 1/1 Running 168 (34m ago) 7d busybox-2 1/1 Running 119 (9m20s ago) 4d23h busybox-3 1/1 Running 168 (43m ago) 7d busybox-4 1/1 Running 168 (43m ago) 7d",
"oc debug -n <namespace> busybox-1",
"Starting pod/busybox-1-debug, command was: sleep 3600 Pod IP: 10.133.2.11",
"oc exec -it <pod> -- <command>",
"oc get co",
"oc get po -A | grep -Eiv 'complete|running'",
"oc get events -n openshift-authentication --sort-by='.metadata.creationTimestamp'",
"oc get pod -n openshift-authentication",
"oc logs -n openshift-authentication <pod_name>",
"openssl x509 -enddate -noout -in <cert_file_name>.pem",
"for each in USD(oc get secret -n openshift-etcd | grep \"kubernetes.io/tls\" | grep -e \"etcd-peer\\|etcd-serving\" | awk '{print USD1}'); do oc get secret USDeach -n openshift-etcd -o jsonpath=\"{.data.tls\\.crt}\" | base64 -d | openssl x509 -noout -enddate; done",
"oc patch mcp/<mcp_name> --type merge --patch '{\"spec\":{\"paused\":true}}'",
"oc patch mcp/<mcp_name> --type merge --patch '{\"spec\":{\"paused\":false}}'",
"oc adm node-logs <node_name> -u crio",
"oc debug node/<node_name>",
"chroot /host",
"You are now logged in as root on the node",
"ssh core@<node_name>",
"oc adm cordon <node_name>",
"oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -qsk http://localhost:9090/api/v1/metadata | jq '.data",
"oc get routes -n openshift-console console -o jsonpath='{.status.ingress[0].host}'",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | alertmanagerMain: enabled: false telemeterClient: enabled: false prometheusK8s: retention: 24h",
"oc apply -f monitoringConfigMap.yaml",
"apiVersion: v1 kind: Namespace metadata: name: open-cluster-management-observability",
"oc apply -f monitoringNamespace.yaml",
"apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: multi-cloud-observability namespace: open-cluster-management-observability spec: storageClassName: openshift-storage.noobaa.io generateBucketName: acm-multi",
"oc apply -f monitoringObjectBucketClaim.yaml",
"apiVersion: v1 kind: Secret metadata: name: multiclusterhub-operator-pull-secret namespace: open-cluster-management-observability stringData: .dockerconfigjson: 'PULL_SECRET'",
"oc apply -f monitoringSecret.yaml",
"NOOBAA_ACCESS_KEY=USD(oc get secret noobaa-admin -n openshift-storage -o json | jq -r '.data.AWS_ACCESS_KEY_ID|@base64d')",
"NOOBAA_SECRET_KEY=USD(oc get secret noobaa-admin -n openshift-storage -o json | jq -r '.data.AWS_SECRET_ACCESS_KEY|@base64d')",
"OBJECT_BUCKET=USD(oc get objectbucketclaim -n open-cluster-management-observability multi-cloud-observability -o json | jq -r .spec.bucketName)",
"apiVersion: v1 kind: Secret metadata: name: thanos-object-storage namespace: open-cluster-management-observability type: Opaque stringData: thanos.yaml: | type: s3 config: bucket: USD{OBJECT_BUCKET} endpoint: s3.openshift-storage.svc insecure: true access_key: USD{NOOBAA_ACCESS_KEY} secret_key: USD{NOOBAA_SECRET_KEY}",
"oc apply -f monitoringBucketSecret.yaml",
"apiVersion: observability.open-cluster-management.io/v1beta2 kind: MultiClusterObservability metadata: name: observability spec: advanced: retentionConfig: blockDuration: 2h deleteDelay: 48h retentionInLocal: 24h retentionResolutionRaw: 3d enableDownsampling: false observabilityAddonSpec: enableMetrics: true interval: 300 storageConfig: alertmanagerStorageSize: 10Gi compactStorageSize: 100Gi metricObjectStorage: key: thanos.yaml name: thanos-object-storage receiveStorageSize: 25Gi ruleStorageSize: 10Gi storeStorageSize: 25Gi",
"oc apply -f monitoringMultiClusterObservability.yaml",
"oc get routes,pods -n open-cluster-management-observability",
"NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD route.route.openshift.io/alertmanager alertmanager-open-cluster-management-observability.cloud.example.com /api/v2 alertmanager oauth-proxy reencrypt/Redirect None route.route.openshift.io/grafana grafana-open-cluster-management-observability.cloud.example.com grafana oauth-proxy reencrypt/Redirect None 1 route.route.openshift.io/observatorium-api observatorium-api-open-cluster-management-observability.cloud.example.com observability-observatorium-api public passthrough/None None route.route.openshift.io/rbac-query-proxy rbac-query-proxy-open-cluster-management-observability.cloud.example.com rbac-query-proxy https reencrypt/Redirect None NAME READY STATUS RESTARTS AGE pod/observability-alertmanager-0 3/3 Running 0 1d pod/observability-alertmanager-1 3/3 Running 0 1d pod/observability-alertmanager-2 3/3 Running 0 1d pod/observability-grafana-685b47bb47-dq4cw 3/3 Running 0 1d <...snip...> pod/observability-thanos-store-shard-0-0 1/1 Running 0 1d pod/observability-thanos-store-shard-1-0 1/1 Running 0 1d pod/observability-thanos-store-shard-2-0 1/1 Running 0 1d",
"oc get cm -n openshift-monitoring prometheus-k8s-rulefiles-0 -o yaml",
"- alert: etcdHighFsyncDurations annotations: description: 'etcd cluster \"{{ USDlabels.job }}\": 99th percentile fsync durations are {{ USDvalue }}s on etcd instance {{ USDlabels.instance }}.' runbook_url: https://github.com/openshift/runbooks/blob/master/alerts/cluster-etcd-operator/etcdHighFsyncDurations.md summary: etcd cluster 99th percentile fsync durations are too high. expr: | histogram_quantile(0.99, rate(etcd_disk_wal_fsync_duration_seconds_bucket{job=~\".*etcd.*\"}[5m])) > 1 for: 10m labels: severity: critical",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | enableUserWorkload: true 1",
"oc apply -f monitoringConfigMap.yaml",
"apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: labels: app: ui name: myapp namespace: myns spec: endpoints: 1 - interval: 30s port: ui-http scheme: http path: /healthz 2 selector: matchLabels: app: ui",
"oc apply -f monitoringServiceMonitor.yaml",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | enableUserWorkload: true 1",
"oc apply -f monitoringConfigMap.yaml",
"apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: myapp-alert namespace: myns spec: groups: - name: example rules: - alert: InternalErrorsAlert expr: flask_http_request_total{status=\"500\"} > 0",
"oc apply -f monitoringAlertRule.yaml",
"oc adm policy add-cluster-role-to-user cluster-admin <emergency_user>",
"oc whoami",
"oc delete secrets kubeadmin -n kube-system",
"oc debug node/<worker_node_name>",
"chroot /host",
"ssh core@<worker_node_name>",
"sudo -i",
"oc describe scc restricted-v2",
"Name: restricted-v2 Priority: <none> Access: Users: <none> Groups: <none> Settings: Allow Privileged: false Allow Privilege Escalation: false Default Add Capabilities: <none> Required Drop Capabilities: ALL Allowed Capabilities: NET_BIND_SERVICE Allowed Seccomp Profiles: runtime/default Allowed Volume Types: configMap,downwardAPI,emptyDir,ephemeral,persistentVolumeClaim,projected,secret Allowed Flexvolumes: <all> Allowed Unsafe Sysctls: <none> Forbidden Sysctls: <none> Allow Host Network: false Allow Host Ports: false Allow Host PID: false Allow Host IPC: false Read Only Root Filesystem: false Run As User Strategy: MustRunAsRange UID: <none> UID Range Min: <none> UID Range Max: <none> SELinux Context Strategy: MustRunAs User: <none> Role: <none> Type: <none> Level: <none> FSGroup Strategy: MustRunAs Ranges: <none> Supplemental Groups Strategy: RunAsAny Ranges: <none>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/edge_computing/day-2-operations-for-telco-core-cnf-clusters |
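Editorial sketch, not part of the source document: the command listing above follows a pause-update-unpause pattern for the worker MachineConfigPools. In outline, using the mcp-1 pool and the target version that appear in the examples above:

oc patch mcp/mcp-1 --type merge --patch '{"spec":{"paused":true}}'    # hold worker node updates during the control-plane update
oc adm upgrade --to=4.15.33                                           # request the control-plane update
oc patch mcp/mcp-1 --type merge --patch '{"spec":{"paused":false}}'   # unpause the pool so the worker nodes update

Each unpause is followed by oc get nodes and oc get mcp checks, as shown in the listing above, to confirm that the worker nodes reach the Ready state on the new version.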
Chapter 68. ListenerAddress schema reference | Chapter 68. ListenerAddress schema reference Used in: ListenerStatus Property Property type Description host string The DNS name or IP address of the Kafka bootstrap service. port integer The port of the Kafka bootstrap service. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-listeneraddress-reference |
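For illustration only (this snippet is not part of the reference above; the bootstrap host name and port value are hypothetical), a ListenerAddress appears inside a listener status roughly as follows:

status:
  listeners:
    - addresses:
        - host: my-cluster-kafka-bootstrap.example.com
          port: 9094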
Chapter 3. BareMetalHost [metal3.io/v1alpha1] | Chapter 3. BareMetalHost [metal3.io/v1alpha1] Description BareMetalHost is the Schema for the baremetalhosts API Type object 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object BareMetalHostSpec defines the desired state of BareMetalHost status object BareMetalHostStatus defines the observed state of BareMetalHost 3.1.1. .spec Description BareMetalHostSpec defines the desired state of BareMetalHost Type object Required online Property Type Description automatedCleaningMode string When set to disabled, automated cleaning will be avoided during provisioning and deprovisioning. bmc object How do we connect to the BMC? bootMACAddress string Which MAC address will PXE boot? This is optional for some types, but required for libvirt VMs driven by vbmc. bootMode string Select the method of initializing the hardware during boot. Defaults to UEFI. consumerRef object ConsumerRef can be used to store information about something that is using a host. When it is not empty, the host is considered "in use". customDeploy object A custom deploy procedure. description string Description is a human-entered text used to help identify the host externallyProvisioned boolean ExternallyProvisioned means something else is managing the image running on the host and the operator should only manage the power status and hardware inventory inspection. If the Image field is filled in, this field is ignored. firmware object BIOS configuration for bare metal server hardwareProfile string What is the name of the hardware profile for this host? It should only be necessary to set this when inspection cannot automatically determine the profile. image object Image holds the details of the image to be provisioned. metaData object MetaData holds the reference to the Secret containing host metadata (e.g. meta_data.json) which is passed to the Config Drive. networkData object NetworkData holds the reference to the Secret containing network configuration (e.g content of network_data.json) which is passed to the Config Drive. online boolean Should the server be online? preprovisioningNetworkDataName string PreprovisioningNetworkDataName is the name of the Secret in the local namespace containing network configuration (e.g content of network_data.json) which is passed to the preprovisioning image, and to the Config Drive if not overridden by specifying NetworkData. raid object RAID configuration for bare metal server rootDeviceHints object Provide guidance about how to choose the device for the image being provisioned. taints array Taints is the full, authoritative list of taints to apply to the corresponding Machine. This list will overwrite any modifications made to the Machine on an ongoing basis. 
taints[] object The node this Taint is attached to has the "effect" on any pod that does not tolerate the Taint. userData object UserData holds the reference to the Secret containing the user data to be passed to the host before it boots. 3.1.2. .spec.bmc Description How do we connect to the BMC? Type object Required address credentialsName Property Type Description address string Address holds the URL for accessing the controller on the network. credentialsName string The name of the secret containing the BMC credentials (requires keys "username" and "password"). disableCertificateVerification boolean DisableCertificateVerification disables verification of server certificates when using HTTPS to connect to the BMC. This is required when the server certificate is self-signed, but is insecure because it allows a man-in-the-middle to intercept the connection. 3.1.3. .spec.consumerRef Description ConsumerRef can be used to store information about something that is using a host. When it is not empty, the host is considered "in use". Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 3.1.4. .spec.customDeploy Description A custom deploy procedure. Type object Required method Property Type Description method string Custom deploy method name. This name is specific to the deploy ramdisk used. If you don't have a custom deploy ramdisk, you shouldn't use CustomDeploy. 3.1.5. .spec.firmware Description BIOS configuration for bare metal server Type object Property Type Description simultaneousMultithreadingEnabled boolean Allows a single physical processor core to appear as several logical processors. This supports following options: true, false. sriovEnabled boolean SR-IOV support enables a hypervisor to create virtual instances of a PCI-express device, potentially increasing performance. This supports following options: true, false. virtualizationEnabled boolean Supports the virtualization of platform hardware. This supports following options: true, false. 3.1.6. .spec.image Description Image holds the details of the image to be provisioned. 
Type object Required url Property Type Description checksum string Checksum is the checksum for the image. checksumType string ChecksumType is the checksum algorithm for the image. e.g md5, sha256, sha512 format string DiskFormat contains the format of the image (raw, qcow2, ... ). Needs to be set to raw for raw images streaming. Note live-iso means an iso referenced by the url will be live-booted and not deployed to disk, and in this case the checksum options are not required and if specified will be ignored. url string URL is a location of an image to deploy. 3.1.7. .spec.metaData Description MetaData holds the reference to the Secret containing host metadata (e.g. meta_data.json) which is passed to the Config Drive. Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 3.1.8. .spec.networkData Description NetworkData holds the reference to the Secret containing network configuration (e.g content of network_data.json) which is passed to the Config Drive. Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 3.1.9. .spec.raid Description RAID configuration for bare metal server Type object Property Type Description hardwareRAIDVolumes `` The list of logical disks for hardware RAID, if rootDeviceHints isn't used, first volume is root volume. You can set the value of this field to [] to clear all the hardware RAID configurations. softwareRAIDVolumes `` The list of logical disks for software RAID, if rootDeviceHints isn't used, first volume is root volume. If HardwareRAIDVolumes is set this item will be invalid. The number of created Software RAID devices must be 1 or 2. If there is only one Software RAID device, it has to be a RAID-1. If there are two, the first one has to be a RAID-1, while the RAID level for the second one can be 0, 1, or 1+0. As the first RAID device will be the deployment device, enforcing a RAID-1 reduces the risk of ending up with a non-booting node in case of a disk failure. Software RAID will always be deleted. 3.1.10. .spec.rootDeviceHints Description Provide guidance about how to choose the device for the image being provisioned. Type object Property Type Description deviceName string A Linux device name like "/dev/vda". The hint must match the actual value exactly. hctl string A SCSI bus address like 0:0:0:0. The hint must match the actual value exactly. minSizeGigabytes integer The minimum size of the device in Gigabytes. model string A vendor-specific device identifier. The hint can be a substring of the actual value. rotational boolean True if the device should use spinning media, false otherwise. serialNumber string Device serial number. The hint must match the actual value exactly. vendor string The name of the vendor or manufacturer of the device. The hint can be a substring of the actual value. wwn string Unique storage identifier. The hint must match the actual value exactly. wwnVendorExtension string Unique vendor storage identifier. The hint must match the actual value exactly. wwnWithExtension string Unique storage identifier with the vendor extension appended. The hint must match the actual value exactly. 3.1.11. .spec.taints Description Taints is the full, authoritative list of taints to apply to the corresponding Machine. 
This list will overwrite any modifications made to the Machine on an ongoing basis. Type array 3.1.12. .spec.taints[] Description The node this Taint is attached to has the "effect" on any pod that does not tolerate the Taint. Type object Required effect key Property Type Description effect string Required. The effect of the taint on pods that do not tolerate the taint. Valid effects are NoSchedule, PreferNoSchedule and NoExecute. key string Required. The taint key to be applied to a node. timeAdded string TimeAdded represents the time at which the taint was added. It is only written for NoExecute taints. value string The taint value corresponding to the taint key. 3.1.13. .spec.userData Description UserData holds the reference to the Secret containing the user data to be passed to the host before it boots. Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 3.1.14. .status Description BareMetalHostStatus defines the observed state of BareMetalHost Type object Required errorCount errorMessage hardwareProfile operationalStatus poweredOn provisioning Property Type Description errorCount integer ErrorCount records how many times the host has encoutered an error since the last successful operation errorMessage string the last error message reported by the provisioning subsystem errorType string ErrorType indicates the type of failure encountered when the OperationalStatus is OperationalStatusError goodCredentials object the last credentials we were able to validate as working hardware object The hardware discovered to exist on the host. hardwareProfile string The name of the profile matching the hardware details. lastUpdated string LastUpdated identifies when this status was last observed. operationHistory object OperationHistory holds information about operations performed on this host. operationalStatus string OperationalStatus holds the status of the host poweredOn boolean indicator for whether or not the host is powered on provisioning object Information tracked by the provisioner. triedCredentials object the last credentials we sent to the provisioning backend 3.1.15. .status.goodCredentials Description the last credentials we were able to validate as working Type object Property Type Description credentials object SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace credentialsVersion string 3.1.16. .status.goodCredentials.credentials Description SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 3.1.17. .status.hardware Description The hardware discovered to exist on the host. Type object Property Type Description cpu object CPU describes one processor on the host. firmware object Firmware describes the firmware on the host. hostname string nics array nics[] object NIC describes one network interface on the host. ramMebibytes integer storage array storage[] object Storage describes one storage device (disk, SSD, etc.) on the host. systemVendor object HardwareSystemVendor stores details about the whole hardware system. 3.1.18. .status.hardware.cpu Description CPU describes one processor on the host. 
Type object Property Type Description arch string clockMegahertz number ClockSpeed is a clock speed in MHz count integer flags array (string) model string 3.1.19. .status.hardware.firmware Description Firmware describes the firmware on the host. Type object Property Type Description bios object The BIOS for this firmware 3.1.20. .status.hardware.firmware.bios Description The BIOS for this firmware Type object Property Type Description date string The release/build date for this BIOS vendor string The vendor name for this BIOS version string The version of the BIOS 3.1.21. .status.hardware.nics Description Type array 3.1.22. .status.hardware.nics[] Description NIC describes one network interface on the host. Type object Property Type Description ip string The IP address of the interface. This will be an IPv4 or IPv6 address if one is present. If both IPv4 and IPv6 addresses are present in a dual-stack environment, two nics will be output, one with each IP. mac string The device MAC address model string The vendor and product IDs of the NIC, e.g. "0x8086 0x1572" name string The name of the network interface, e.g. "en0" pxe boolean Whether the NIC is PXE Bootable speedGbps integer The speed of the device in Gigabits per second vlanId integer The untagged VLAN ID vlans array The VLANs available vlans[] object VLAN represents the name and ID of a VLAN 3.1.23. .status.hardware.nics[].vlans Description The VLANs available Type array 3.1.24. .status.hardware.nics[].vlans[] Description VLAN represents the name and ID of a VLAN Type object Property Type Description id integer VLANID is a 12-bit 802.1Q VLAN identifier name string 3.1.25. .status.hardware.storage Description Type array 3.1.26. .status.hardware.storage[] Description Storage describes one storage device (disk, SSD, etc.) on the host. Type object Property Type Description hctl string The SCSI location of the device model string Hardware model name string The Linux device name of the disk, e.g. "/dev/sda". Note that this may not be stable across reboots. rotational boolean Whether this disk represents rotational storage. This field is not recommended for usage, please prefer using 'Type' field instead, this field will be deprecated eventually. serialNumber string The serial number of the device sizeBytes integer The size of the disk in Bytes type string Device type, one of: HDD, SSD, NVME. vendor string The name of the vendor of the device wwn string The WWN of the device wwnVendorExtension string The WWN Vendor extension of the device wwnWithExtension string The WWN with the extension 3.1.27. .status.hardware.systemVendor Description HardwareSystemVendor stores details about the whole hardware system. Type object Property Type Description manufacturer string productName string serialNumber string 3.1.28. .status.operationHistory Description OperationHistory holds information about operations performed on this host. Type object Property Type Description deprovision object OperationMetric contains metadata about an operation (inspection, provisioning, etc.) used for tracking metrics. inspect object OperationMetric contains metadata about an operation (inspection, provisioning, etc.) used for tracking metrics. provision object OperationMetric contains metadata about an operation (inspection, provisioning, etc.) used for tracking metrics. register object OperationMetric contains metadata about an operation (inspection, provisioning, etc.) used for tracking metrics. 3.1.29. 
.status.operationHistory.deprovision Description OperationMetric contains metadata about an operation (inspection, provisioning, etc.) used for tracking metrics. Type object Property Type Description end `` start `` 3.1.30. .status.operationHistory.inspect Description OperationMetric contains metadata about an operation (inspection, provisioning, etc.) used for tracking metrics. Type object Property Type Description end `` start `` 3.1.31. .status.operationHistory.provision Description OperationMetric contains metadata about an operation (inspection, provisioning, etc.) used for tracking metrics. Type object Property Type Description end `` start `` 3.1.32. .status.operationHistory.register Description OperationMetric contains metadata about an operation (inspection, provisioning, etc.) used for tracking metrics. Type object Property Type Description end `` start `` 3.1.33. .status.provisioning Description Information tracked by the provisioner. Type object Required ID state Property Type Description ID string The machine's UUID from the underlying provisioning tool bootMode string BootMode indicates the boot mode used to provision the node customDeploy object Custom deploy procedure applied to the host. firmware object The Bios set by the user image object Image holds the details of the last image successfully provisioned to the host. raid object The Raid set by the user rootDeviceHints object The RootDevicehints set by the user state string An indiciator for what the provisioner is doing with the host. 3.1.34. .status.provisioning.customDeploy Description Custom deploy procedure applied to the host. Type object Required method Property Type Description method string Custom deploy method name. This name is specific to the deploy ramdisk used. If you don't have a custom deploy ramdisk, you shouldn't use CustomDeploy. 3.1.35. .status.provisioning.firmware Description The Bios set by the user Type object Property Type Description simultaneousMultithreadingEnabled boolean Allows a single physical processor core to appear as several logical processors. This supports following options: true, false. sriovEnabled boolean SR-IOV support enables a hypervisor to create virtual instances of a PCI-express device, potentially increasing performance. This supports following options: true, false. virtualizationEnabled boolean Supports the virtualization of platform hardware. This supports following options: true, false. 3.1.36. .status.provisioning.image Description Image holds the details of the last image successfully provisioned to the host. Type object Required url Property Type Description checksum string Checksum is the checksum for the image. checksumType string ChecksumType is the checksum algorithm for the image. e.g md5, sha256, sha512 format string DiskFormat contains the format of the image (raw, qcow2, ... ). Needs to be set to raw for raw images streaming. Note live-iso means an iso referenced by the url will be live-booted and not deployed to disk, and in this case the checksum options are not required and if specified will be ignored. url string URL is a location of an image to deploy. 3.1.37. .status.provisioning.raid Description The Raid set by the user Type object Property Type Description hardwareRAIDVolumes `` The list of logical disks for hardware RAID, if rootDeviceHints isn't used, first volume is root volume. You can set the value of this field to [] to clear all the hardware RAID configurations. 
softwareRAIDVolumes `` The list of logical disks for software RAID, if rootDeviceHints isn't used, first volume is root volume. If HardwareRAIDVolumes is set this item will be invalid. The number of created Software RAID devices must be 1 or 2. If there is only one Software RAID device, it has to be a RAID-1. If there are two, the first one has to be a RAID-1, while the RAID level for the second one can be 0, 1, or 1+0. As the first RAID device will be the deployment device, enforcing a RAID-1 reduces the risk of ending up with a non-booting node in case of a disk failure. Software RAID will always be deleted. 3.1.38. .status.provisioning.rootDeviceHints Description The RootDevicehints set by the user Type object Property Type Description deviceName string A Linux device name like "/dev/vda". The hint must match the actual value exactly. hctl string A SCSI bus address like 0:0:0:0. The hint must match the actual value exactly. minSizeGigabytes integer The minimum size of the device in Gigabytes. model string A vendor-specific device identifier. The hint can be a substring of the actual value. rotational boolean True if the device should use spinning media, false otherwise. serialNumber string Device serial number. The hint must match the actual value exactly. vendor string The name of the vendor or manufacturer of the device. The hint can be a substring of the actual value. wwn string Unique storage identifier. The hint must match the actual value exactly. wwnVendorExtension string Unique vendor storage identifier. The hint must match the actual value exactly. wwnWithExtension string Unique storage identifier with the vendor extension appended. The hint must match the actual value exactly. 3.1.39. .status.triedCredentials Description the last credentials we sent to the provisioning backend Type object Property Type Description credentials object SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace credentialsVersion string 3.1.40. .status.triedCredentials.credentials Description SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 3.2. API endpoints The following API endpoints are available: /apis/metal3.io/v1alpha1/baremetalhosts GET : list objects of kind BareMetalHost /apis/metal3.io/v1alpha1/namespaces/{namespace}/baremetalhosts DELETE : delete collection of BareMetalHost GET : list objects of kind BareMetalHost POST : create a BareMetalHost /apis/metal3.io/v1alpha1/namespaces/{namespace}/baremetalhosts/{name} DELETE : delete a BareMetalHost GET : read the specified BareMetalHost PATCH : partially update the specified BareMetalHost PUT : replace the specified BareMetalHost /apis/metal3.io/v1alpha1/namespaces/{namespace}/baremetalhosts/{name}/status GET : read status of the specified BareMetalHost PATCH : partially update status of the specified BareMetalHost PUT : replace status of the specified BareMetalHost 3.2.1. /apis/metal3.io/v1alpha1/baremetalhosts Table 3.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. 
Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. 
It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind BareMetalHost Table 3.2. HTTP responses HTTP code Reponse body 200 - OK BareMetalHostList schema 401 - Unauthorized Empty 3.2.2. /apis/metal3.io/v1alpha1/namespaces/{namespace}/baremetalhosts Table 3.3. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 3.4. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of BareMetalHost Table 3.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. 
Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 3.6. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind BareMetalHost Table 3.7. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. 
fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 3.8. HTTP responses HTTP code Reponse body 200 - OK BareMetalHostList schema 401 - Unauthorized Empty HTTP method POST Description create a BareMetalHost Table 3.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.10. Body parameters Parameter Type Description body BareMetalHost schema Table 3.11. HTTP responses HTTP code Reponse body 200 - OK BareMetalHost schema 201 - Created BareMetalHost schema 202 - Accepted BareMetalHost schema 401 - Unauthorized Empty 3.2.3. /apis/metal3.io/v1alpha1/namespaces/{namespace}/baremetalhosts/{name} Table 3.12. Global path parameters Parameter Type Description name string name of the BareMetalHost namespace string object name and auth scope, such as for teams and projects Table 3.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a BareMetalHost Table 3.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 3.15. Body parameters Parameter Type Description body DeleteOptions schema Table 3.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified BareMetalHost Table 3.17. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. 
See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 3.18. HTTP responses HTTP code Reponse body 200 - OK BareMetalHost schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified BareMetalHost Table 3.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.20. Body parameters Parameter Type Description body Patch schema Table 3.21. HTTP responses HTTP code Reponse body 200 - OK BareMetalHost schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified BareMetalHost Table 3.22. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.23. Body parameters Parameter Type Description body BareMetalHost schema Table 3.24. HTTP responses HTTP code Reponse body 200 - OK BareMetalHost schema 201 - Created BareMetalHost schema 401 - Unauthorized Empty 3.2.4. /apis/metal3.io/v1alpha1/namespaces/{namespace}/baremetalhosts/{name}/status Table 3.25. Global path parameters Parameter Type Description name string name of the BareMetalHost namespace string object name and auth scope, such as for teams and projects Table 3.26. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified BareMetalHost Table 3.27. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 3.28. HTTP responses HTTP code Reponse body 200 - OK BareMetalHost schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified BareMetalHost Table 3.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.30. 
Body parameters Parameter Type Description body Patch schema Table 3.31. HTTP responses HTTP code Response body 200 - OK BareMetalHost schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified BareMetalHost Table 3.32. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.33. Body parameters Parameter Type Description body BareMetalHost schema Table 3.34. HTTP responses HTTP code Response body 200 - OK BareMetalHost schema 201 - Created BareMetalHost schema 401 - Unauthorized Empty
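The BareMetalHost endpoints above follow standard Kubernetes list, create, update, and delete semantics, so they can be driven by any generic Kubernetes client rather than hand-written HTTP requests. The following sketch is not part of the original reference: it uses the Go client-go dynamic client to issue the list call described in Table 3.2, and the kubeconfig path and the openshift-machine-api namespace are illustrative assumptions only.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load client configuration from a kubeconfig file (path is an assumption).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Matches /apis/metal3.io/v1alpha1/namespaces/{namespace}/baremetalhosts.
	gvr := schema.GroupVersionResource{
		Group:    "metal3.io",
		Version:  "v1alpha1",
		Resource: "baremetalhosts",
	}

	// GET: list objects of kind BareMetalHost, using the limit query
	// parameter documented above to page through large result sets.
	hosts, err := client.Resource(gvr).Namespace("openshift-machine-api").
		List(context.TODO(), metav1.ListOptions{Limit: 50})
	if err != nil {
		panic(err)
	}
	for _, h := range hosts.Items {
		fmt.Println(h.GetName())
	}
}

The DELETE, POST, PATCH, and PUT verbs documented in Tables 3.5 through 3.34 map onto the Delete, Create, Patch, and Update methods of the same dynamic resource interface.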
Chapter 8. ImageStream [image.openshift.io/v1] Description An ImageStream stores a mapping of tags to images, metadata overrides that are applied when images are tagged in a stream, and an optional reference to a container image repository on a registry. Users typically update the spec.tags field to point to external images which are imported from container registries using credentials in your namespace with the pull secret type, or to existing image stream tags and images which are immediately accessible for tagging or pulling. The history of images applied to a tag is visible in the status.tags field and any user who can view an image stream is allowed to tag that image into their own image streams. Access to pull images from the integrated registry is granted by having the "get imagestreams/layers" permission on a given image stream. Users may remove a tag by deleting the imagestreamtag resource, which causes both spec and status for that tag to be removed. Image stream history is retained until an administrator runs the prune operation, which removes references that are no longer in use. To preserve a historical image, ensure there is a tag in spec pointing to that image by its digest. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 8.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta spec object ImageStreamSpec represents options for ImageStreams. status object ImageStreamStatus contains information about the state of this image stream. 8.1.1. .spec Description ImageStreamSpec represents options for ImageStreams. Type object Property Type Description dockerImageRepository string dockerImageRepository is optional, if specified this stream is backed by a container repository on this server Deprecated: This field is deprecated as of v3.7 and will be removed in a future release. Specify the source for the tags to be imported in each tag via the spec.tags.from reference instead. lookupPolicy object ImageLookupPolicy describes how an image stream can be used to override the image references used by pods, builds, and other resources in a namespace. tags array tags map arbitrary string values to specific image locators tags[] object TagReference specifies optional annotations for images using this tag and an optional reference to an ImageStreamTag, ImageStreamImage, or DockerImage this tag should track. 8.1.2. .spec.lookupPolicy Description ImageLookupPolicy describes how an image stream can be used to override the image references used by pods, builds, and other resources in a namespace.
Type object Required local Property Type Description local boolean local will change the docker short image references (like "mysql" or "php:latest") on objects in this namespace to the image ID whenever they match this image stream, instead of reaching out to a remote registry. The name will be fully qualified to an image ID if found. The tag's referencePolicy is taken into account on the replaced value. Only works within the current namespace. 8.1.3. .spec.tags Description tags map arbitrary string values to specific image locators Type array 8.1.4. .spec.tags[] Description TagReference specifies optional annotations for images using this tag and an optional reference to an ImageStreamTag, ImageStreamImage, or DockerImage this tag should track. Type object Required name Property Type Description annotations object (string) Optional; if specified, annotations that are applied to images retrieved via ImageStreamTags. from ObjectReference Optional; if specified, a reference to another image that this tag should point to. Valid values are ImageStreamTag, ImageStreamImage, and DockerImage. ImageStreamTag references can only reference a tag within this same ImageStream. generation integer Generation is a counter that tracks mutations to the spec tag (user intent). When a tag reference is changed the generation is set to match the current stream generation (which is incremented every time spec is changed). Other processes in the system like the image importer observe that the generation of spec tag is newer than the generation recorded in the status and use that as a trigger to import the newest remote tag. To trigger a new import, clients may set this value to zero which will reset the generation to the latest stream generation. Legacy clients will send this value as nil which will be merged with the current tag generation. importPolicy object TagImportPolicy controls how images related to this tag will be imported. name string Name of the tag reference boolean Reference states if the tag will be imported. Default value is false, which means the tag will be imported. referencePolicy object TagReferencePolicy describes how pull-specs for images in this image stream tag are generated when image change triggers in deployment configs or builds are resolved. This allows the image stream author to control how images are accessed. 8.1.5. .spec.tags[].importPolicy Description TagImportPolicy controls how images related to this tag will be imported. Type object Property Type Description importMode string ImportMode describes how to import an image manifest. insecure boolean Insecure is true if the server may bypass certificate verification or connect directly over HTTP during image import. scheduled boolean Scheduled indicates to the server that this tag should be periodically checked to ensure it is up to date, and imported 8.1.6. .spec.tags[].referencePolicy Description TagReferencePolicy describes how pull-specs for images in this image stream tag are generated when image change triggers in deployment configs or builds are resolved. This allows the image stream author to control how images are accessed. Type object Required type Property Type Description type string Type determines how the image pull spec should be transformed when the image stream tag is used in deployment config triggers or new builds. The default value is Source , indicating the original location of the image should be used (if imported). 
The user may also specify Local , indicating that the pull spec should point to the integrated container image registry and leverage the registry's ability to proxy the pull to an upstream registry. Local allows the credentials used to pull this image to be managed from the image stream's namespace, so others on the platform can access a remote image but have no access to the remote secret. It also allows the image layers to be mirrored into the local registry which the images can still be pulled even if the upstream registry is unavailable. 8.1.7. .status Description ImageStreamStatus contains information about the state of this image stream. Type object Required dockerImageRepository Property Type Description dockerImageRepository string DockerImageRepository represents the effective location this stream may be accessed at. May be empty until the server determines where the repository is located publicDockerImageRepository string PublicDockerImageRepository represents the public location from where the image can be pulled outside the cluster. This field may be empty if the administrator has not exposed the integrated registry externally. tags array Tags are a historical record of images associated with each tag. The first entry in the TagEvent array is the currently tagged image. tags[] object NamedTagEventList relates a tag to its image history. 8.1.8. .status.tags Description Tags are a historical record of images associated with each tag. The first entry in the TagEvent array is the currently tagged image. Type array 8.1.9. .status.tags[] Description NamedTagEventList relates a tag to its image history. Type object Required tag items Property Type Description conditions array Conditions is an array of conditions that apply to the tag event list. conditions[] object TagEventCondition contains condition information for a tag event. items array Standard object's metadata. items[] object TagEvent is used by ImageStreamStatus to keep a historical record of images associated with a tag. tag string Tag is the tag for which the history is recorded 8.1.10. .status.tags[].conditions Description Conditions is an array of conditions that apply to the tag event list. Type array 8.1.11. .status.tags[].conditions[] Description TagEventCondition contains condition information for a tag event. Type object Required type status generation Property Type Description generation integer Generation is the spec tag generation that this status corresponds to lastTransitionTime Time LastTransitionTIme is the time the condition transitioned from one status to another. message string Message is a human readable description of the details about last transition, complementing reason. reason string Reason is a brief machine readable explanation for the condition's last transition. status string Status of the condition, one of True, False, Unknown. type string Type of tag event condition, currently only ImportSuccess 8.1.12. .status.tags[].items Description Standard object's metadata. Type array 8.1.13. .status.tags[].items[] Description TagEvent is used by ImageStreamStatus to keep a historical record of images associated with a tag. Type object Required created dockerImageReference image generation Property Type Description created Time Created holds the time the TagEvent was created dockerImageReference string DockerImageReference is the string that can be used to pull this image generation integer Generation is the spec tag generation that resulted in this tag being updated image string Image is the image 8.2. 
API endpoints
The following API endpoints are available:
/apis/image.openshift.io/v1/imagestreams
GET : list or watch objects of kind ImageStream
/apis/image.openshift.io/v1/watch/imagestreams
GET : watch individual changes to a list of ImageStream. deprecated: use the 'watch' parameter with a list operation instead.
/apis/image.openshift.io/v1/namespaces/{namespace}/imagestreams
DELETE : delete collection of ImageStream
GET : list or watch objects of kind ImageStream
POST : create an ImageStream
/apis/image.openshift.io/v1/watch/namespaces/{namespace}/imagestreams
GET : watch individual changes to a list of ImageStream. deprecated: use the 'watch' parameter with a list operation instead.
/apis/image.openshift.io/v1/namespaces/{namespace}/imagestreams/{name}
DELETE : delete an ImageStream
GET : read the specified ImageStream
PATCH : partially update the specified ImageStream
PUT : replace the specified ImageStream
/apis/image.openshift.io/v1/watch/namespaces/{namespace}/imagestreams/{name}
GET : watch changes to an object of kind ImageStream. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter.
/apis/image.openshift.io/v1/namespaces/{namespace}/imagestreams/{name}/status
GET : read status of the specified ImageStream
PATCH : partially update status of the specified ImageStream
PUT : replace status of the specified ImageStream
8.2.1. /apis/image.openshift.io/v1/imagestreams Table 8.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid, whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart its list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error; the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call.
If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list or watch objects of kind ImageStream Table 8.2. HTTP responses HTTP code Reponse body 200 - OK ImageStreamList schema 401 - Unauthorized Empty 8.2.2. /apis/image.openshift.io/v1/watch/imagestreams Table 8.3. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of ImageStream. deprecated: use the 'watch' parameter with a list operation instead. Table 8.4. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 8.2.3. /apis/image.openshift.io/v1/namespaces/{namespace}/imagestreams Table 8.5. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 8.6. 
Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of ImageStream Table 8.7. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. 
If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 8.8. Body parameters Parameter Type Description body DeleteOptions schema Table 8.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind ImageStream Table 8.10. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. 
Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 8.11. HTTP responses HTTP code Reponse body 200 - OK ImageStreamList schema 401 - Unauthorized Empty HTTP method POST Description create an ImageStream Table 8.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.13. Body parameters Parameter Type Description body ImageStream schema Table 8.14. HTTP responses HTTP code Reponse body 200 - OK ImageStream schema 201 - Created ImageStream schema 202 - Accepted ImageStream schema 401 - Unauthorized Empty 8.2.4. /apis/image.openshift.io/v1/watch/namespaces/{namespace}/imagestreams Table 8.15. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 8.16. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. 
If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of ImageStream. deprecated: use the 'watch' parameter with a list operation instead. Table 8.17. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 8.2.5. /apis/image.openshift.io/v1/namespaces/{namespace}/imagestreams/{name} Table 8.18. Global path parameters Parameter Type Description name string name of the ImageStream namespace string object name and auth scope, such as for teams and projects Table 8.19. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an ImageStream Table 8.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. 
orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 8.21. Body parameters Parameter Type Description body DeleteOptions schema Table 8.22. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ImageStream Table 8.23. HTTP responses HTTP code Reponse body 200 - OK ImageStream schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ImageStream Table 8.24. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 8.25. Body parameters Parameter Type Description body Patch schema Table 8.26. 
HTTP responses HTTP code Reponse body 200 - OK ImageStream schema 201 - Created ImageStream schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ImageStream Table 8.27. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.28. Body parameters Parameter Type Description body ImageStream schema Table 8.29. HTTP responses HTTP code Reponse body 200 - OK ImageStream schema 201 - Created ImageStream schema 401 - Unauthorized Empty 8.2.6. /apis/image.openshift.io/v1/watch/namespaces/{namespace}/imagestreams/{name} Table 8.30. Global path parameters Parameter Type Description name string name of the ImageStream namespace string object name and auth scope, such as for teams and projects Table 8.31. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset. resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind ImageStream. Deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 8.32. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty 8.2.7. /apis/image.openshift.io/v1/namespaces/{namespace}/imagestreams/{name}/status Table 8.33. 
Global path parameters Parameter Type Description name string name of the ImageStream namespace string object name and auth scope, such as for teams and projects Table 8.34. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified ImageStream Table 8.35. HTTP responses HTTP code Response body 200 - OK ImageStream schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified ImageStream Table 8.36. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be 128 characters or less, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means the user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 8.37. Body parameters Parameter Type Description body Patch schema Table 8.38. HTTP responses HTTP code Response body 200 - OK ImageStream schema 201 - Created ImageStream schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified ImageStream Table 8.39. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be 128 characters or less, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . 
fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.40. Body parameters Parameter Type Description body ImageStream schema Table 8.41. HTTP responses HTTP code Response body 200 - OK ImageStream schema 201 - Created ImageStream schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/image_apis/imagestream-image-openshift-io-v1 |
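As a usage sketch for the limit and continue query parameters documented above, the following commands page through ImageStreams two at a time with oc get --raw. The demo namespace, the page size, and the <continue-token> placeholder are illustrative only; the token must be copied from the metadata.continue field of the previous response, and an empty continue field means no further pages remain.
oc get --raw "/apis/image.openshift.io/v1/namespaces/demo/imagestreams?limit=2"
oc get --raw "/apis/image.openshift.io/v1/namespaces/demo/imagestreams?limit=2&continue=<continue-token>"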
11.4. Durations | 11.4. Durations Durations are used to calculate a value for end when one is not supplied to in_range operations. They contain the same fields as date_spec objects but without the limitations (i.e., you can have a duration of 19 months). As with date_spec objects, any field that is not supplied is ignored. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/_durations |
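As a brief sketch of how a duration is typically used with an in_range expression: the following pcs rule makes a location constraint active from a start date, with the end calculated by adding the duration to that start. The Webserver resource name and the start date are hypothetical, the 19-month duration mirrors the example in the text, and the rule grammar is the one described in the surrounding rule-expression sections of this chapter.
pcs constraint location Webserver rule score=INFINITY date in_range 2024-01-01 to duration months=19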
Chapter 18. Network policy | Chapter 18. Network policy 18.1. About network policy As a developer, you can define network policies that restrict traffic to pods in your cluster. 18.1.1. About network policy In a cluster using a network plugin that supports Kubernetes network policy, network isolation is controlled entirely by NetworkPolicy objects. In OpenShift Container Platform 4.12, OpenShift SDN supports using network policy in its default network isolation mode. Warning Network policy does not apply to the host network namespace. Pods with host networking enabled are unaffected by network policy rules. However, pods connecting to the host-networked pods might be affected by the network policy rules. Network policies cannot block traffic from localhost or from their resident nodes. By default, all pods in a project are accessible from other pods and network endpoints. To isolate one or more pods in a project, you can create NetworkPolicy objects in that project to indicate the allowed incoming connections. Project administrators can create and delete NetworkPolicy objects within their own project. If a pod is matched by selectors in one or more NetworkPolicy objects, then the pod will accept only connections that are allowed by at least one of those NetworkPolicy objects. A pod that is not selected by any NetworkPolicy objects is fully accessible. A network policy applies to only the TCP, UDP, ICMP, and SCTP protocols. Other protocols are not affected. The following example NetworkPolicy objects demonstrate supporting different scenarios: Deny all traffic: To make a project deny by default, add a NetworkPolicy object that matches all pods but accepts no traffic: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default spec: podSelector: {} ingress: [] Only allow connections from the OpenShift Container Platform Ingress Controller: To make a project allow only connections from the OpenShift Container Platform Ingress Controller, add the following NetworkPolicy object. apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: ingress podSelector: {} policyTypes: - Ingress Only accept connections from pods within a project: Important To allow ingress connections from hostNetwork pods in the same namespace, you need to apply the allow-from-hostnetwork policy together with the allow-same-namespace policy. 
To make pods accept connections from other pods in the same project, but reject all other connections from pods in other projects, add the following NetworkPolicy object: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: {} ingress: - from: - podSelector: {} Only allow HTTP and HTTPS traffic based on pod labels: To enable only HTTP and HTTPS access to the pods with a specific label ( role=frontend in following example), add a NetworkPolicy object similar to the following: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-http-and-https spec: podSelector: matchLabels: role: frontend ingress: - ports: - protocol: TCP port: 80 - protocol: TCP port: 443 Accept connections by using both namespace and pod selectors: To match network traffic by combining namespace and pod selectors, you can use a NetworkPolicy object similar to the following: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-pod-and-namespace-both spec: podSelector: matchLabels: name: test-pods ingress: - from: - namespaceSelector: matchLabels: project: project_name podSelector: matchLabels: name: test-pods NetworkPolicy objects are additive, which means you can combine multiple NetworkPolicy objects together to satisfy complex network requirements. For example, for the NetworkPolicy objects defined in samples, you can define both allow-same-namespace and allow-http-and-https policies within the same project. Thus allowing the pods with the label role=frontend , to accept any connection allowed by each policy. That is, connections on any port from pods in the same namespace, and connections on ports 80 and 443 from pods in any namespace. 18.1.1.1. Using the allow-from-router network policy Use the following NetworkPolicy to allow external traffic regardless of the router configuration: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-router spec: ingress: - from: - namespaceSelector: matchLabels: policy-group.network.openshift.io/ingress: "" 1 podSelector: {} policyTypes: - Ingress 1 policy-group.network.openshift.io/ingress:"" label supports both OpenShift-SDN and OVN-Kubernetes. 18.1.1.2. Using the allow-from-hostnetwork network policy Add the following allow-from-hostnetwork NetworkPolicy object to direct traffic from the host network pods. apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-hostnetwork spec: ingress: - from: - namespaceSelector: matchLabels: policy-group.network.openshift.io/host-network: "" podSelector: {} policyTypes: - Ingress 18.1.2. Optimizations for network policy with OpenShift SDN Use a network policy to isolate pods that are differentiated from one another by labels within a namespace. It is inefficient to apply NetworkPolicy objects to large numbers of individual pods in a single namespace. Pod labels do not exist at the IP address level, so a network policy generates a separate Open vSwitch (OVS) flow rule for every possible link between every pod selected with a podSelector . For example, if the spec podSelector and the ingress podSelector within a NetworkPolicy object each match 200 pods, then 40,000 (200*200) OVS flow rules are generated. This might slow down a node. When designing your network policy, refer to the following guidelines: Reduce the number of OVS flow rules by using namespaces to contain groups of pods that need to be isolated. 
NetworkPolicy objects that select a whole namespace, by using the namespaceSelector or an empty podSelector , generate only a single OVS flow rule that matches the VXLAN virtual network ID (VNID) of the namespace. Keep the pods that do not need to be isolated in their original namespace, and move the pods that require isolation into one or more different namespaces. Create additional targeted cross-namespace network policies to allow the specific traffic that you do want to allow from the isolated pods. 18.1.3. Optimizations for network policy with OVN-Kubernetes network plugin When designing your network policy, refer to the following guidelines: For network policies with the same spec.podSelector spec, it is more efficient to use one network policy with multiple ingress or egress rules, than multiple network policies with subsets of ingress or egress rules. Every ingress or egress rule based on the podSelector or namespaceSelector spec generates the number of OVS flows proportional to number of pods selected by network policy + number of pods selected by ingress or egress rule . Therefore, it is preferable to use the podSelector or namespaceSelector spec that can select as many pods as you need in one rule, instead of creating individual rules for every pod. For example, the following policy contains two rules: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: test-network-policy spec: podSelector: {} ingress: - from: - podSelector: matchLabels: role: frontend - from: - podSelector: matchLabels: role: backend The following policy expresses those same two rules as one: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: test-network-policy spec: podSelector: {} ingress: - from: - podSelector: matchExpressions: - {key: role, operator: In, values: [frontend, backend]} The same guideline applies to the spec.podSelector spec. If you have the same ingress or egress rules for different network policies, it might be more efficient to create one network policy with a common spec.podSelector spec. For example, the following two policies have different rules: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: policy1 spec: podSelector: matchLabels: role: db ingress: - from: - podSelector: matchLabels: role: frontend --- apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: policy2 spec: podSelector: matchLabels: role: client ingress: - from: - podSelector: matchLabels: role: frontend The following network policy expresses those same two rules as one: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: policy3 spec: podSelector: matchExpressions: - {key: role, operator: In, values: [db, client]} ingress: - from: - podSelector: matchLabels: role: frontend You can apply this optimization when only multiple selectors are expressed as one. In cases where selectors are based on different labels, it may not be possible to apply this optimization. In those cases, consider applying some new labels for network policy optimization specifically. 18.1.4. steps Creating a network policy Optional: Defining a default network policy 18.1.5. Additional resources Projects and namespaces Configuring multitenant network policy NetworkPolicy API 18.2. Creating a network policy As a user with the admin role, you can create a network policy for a namespace. 18.2.1. 
Example NetworkPolicy object The following annotates an example NetworkPolicy object: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-27107 1 spec: podSelector: 2 matchLabels: app: mongodb ingress: - from: - podSelector: 3 matchLabels: app: app ports: 4 - protocol: TCP port: 27017 1 The name of the NetworkPolicy object. 2 A selector that describes the pods to which the policy applies. The policy object can only select pods in the project that defines the NetworkPolicy object. 3 A selector that matches the pods from which the policy object allows ingress traffic. The selector matches pods in the same namespace as the NetworkPolicy. 4 A list of one or more destination ports on which to accept traffic. 18.2.2. Creating a network policy using the CLI To define granular rules describing ingress or egress network traffic allowed for namespaces in your cluster, you can create a network policy. Note If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster. Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OpenShift SDN network provider with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with admin privileges. You are working in the namespace that the network policy applies to. Procedure Create a policy rule: Create a <policy_name>.yaml file: USD touch <policy_name>.yaml where: <policy_name> Specifies the network policy file name. Define a network policy in the file that you just created, such as in the following examples: Deny ingress from all pods in all namespaces This is a fundamental policy, blocking all cross-pod networking other than cross-pod traffic allowed by the configuration of other Network Policies. kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default spec: podSelector: ingress: [] Allow ingress from all pods in the same namespace kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: ingress: - from: - podSelector: {} Allow ingress traffic to one pod from a particular namespace This policy allows traffic to pods labelled pod-a from pods running in namespace-y . kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-traffic-pod spec: podSelector: matchLabels: pod: pod-a policyTypes: - Ingress ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: namespace-y To create the network policy object, enter the following command: USD oc apply -f <policy_name>.yaml -n <namespace> where: <policy_name> Specifies the network policy file name. <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Example output networkpolicy.networking.k8s.io/deny-by-default created Note If you log in to the web console with cluster-admin privileges, you have a choice of creating a network policy in any namespace in the cluster directly in YAML or from a form in the web console. 18.2.3. Creating a default deny all network policy This is a fundamental policy, blocking all cross-pod networking other than network traffic allowed by the configuration of other deployed network policies. This procedure enforces a default deny-by-default policy. Note If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster. 
Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OpenShift SDN network provider with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with admin privileges. You are working in the namespace that the network policy applies to. Procedure Create the following YAML that defines a deny-by-default policy to deny ingress from all pods in all namespaces. Save the YAML in the deny-by-default.yaml file: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default namespace: default 1 spec: podSelector: {} 2 ingress: [] 3 1 namespace: default deploys this policy to the default namespace. 2 podSelector: is empty, this means it matches all the pods. Therefore, the policy applies to all pods in the default namespace. 3 There are no ingress rules specified. This causes incoming traffic to be dropped to all pods. Apply the policy by entering the following command: USD oc apply -f deny-by-default.yaml Example output networkpolicy.networking.k8s.io/deny-by-default created 18.2.4. Creating a network policy to allow traffic from external clients With the deny-by-default policy in place you can proceed to configure a policy that allows traffic from external clients to a pod with the label app=web . Note If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster. Follow this procedure to configure a policy that allows external service from the public Internet directly or by using a Load Balancer to access the pod. Traffic is only allowed to a pod with the label app=web . Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OpenShift SDN network provider with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with admin privileges. You are working in the namespace that the network policy applies to. Procedure Create a policy that allows traffic from the public Internet directly or by using a load balancer to access the pod. Save the YAML in the web-allow-external.yaml file: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: web-allow-external namespace: default spec: policyTypes: - Ingress podSelector: matchLabels: app: web ingress: - {} Apply the policy by entering the following command: USD oc apply -f web-allow-external.yaml Example output networkpolicy.networking.k8s.io/web-allow-external created This policy allows traffic from all resources, including external traffic as illustrated in the following diagram: 18.2.5. Creating a network policy allowing traffic to an application from all namespaces Note If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster. Follow this procedure to configure a policy that allows traffic from all pods in all namespaces to a particular application. Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OpenShift SDN network provider with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with admin privileges. You are working in the namespace that the network policy applies to. 
Procedure Create a policy that allows traffic from all pods in all namespaces to a particular application. Save the YAML in the web-allow-all-namespaces.yaml file: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: web-allow-all-namespaces namespace: default spec: podSelector: matchLabels: app: web 1 policyTypes: - Ingress ingress: - from: - namespaceSelector: {} 2 1 Applies the policy only to app:web pods in default namespace. 2 Selects all pods in all namespaces. Note By default, if you omit specifying a namespaceSelector it does not select any namespaces, which means the policy allows traffic only from the namespace the network policy is deployed to. Apply the policy by entering the following command: USD oc apply -f web-allow-all-namespaces.yaml Example output networkpolicy.networking.k8s.io/web-allow-all-namespaces created Verification Start a web service in the default namespace by entering the following command: USD oc run web --namespace=default --image=nginx --labels="app=web" --expose --port=80 Run the following command to deploy an alpine image in the secondary namespace and to start a shell: USD oc run test-USDRANDOM --namespace=secondary --rm -i -t --image=alpine -- sh Run the following command in the shell and observe that the request is allowed: # wget -qO- --timeout=2 http://web.default Expected output <!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> html { color-scheme: light dark; } body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href="http://nginx.org/">nginx.org</a>.<br/> Commercial support is available at <a href="http://nginx.com/">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html> 18.2.6. Creating a network policy allowing traffic to an application from a namespace Note If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster. Follow this procedure to configure a policy that allows traffic to a pod with the label app=web from a particular namespace. You might want to do this to: Restrict traffic to a production database only to namespaces where production workloads are deployed. Enable monitoring tools deployed to a particular namespace to scrape metrics from the current namespace. Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OpenShift SDN network provider with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with admin privileges. You are working in the namespace that the network policy applies to. Procedure Create a policy that allows traffic from all pods in a particular namespaces with a label purpose=production . Save the YAML in the web-allow-prod.yaml file: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: web-allow-prod namespace: default spec: podSelector: matchLabels: app: web 1 policyTypes: - Ingress ingress: - from: - namespaceSelector: matchLabels: purpose: production 2 1 Applies the policy only to app:web pods in the default namespace. 2 Restricts traffic to only pods in namespaces that have the label purpose=production . 
Apply the policy by entering the following command: USD oc apply -f web-allow-prod.yaml Example output networkpolicy.networking.k8s.io/web-allow-prod created Verification Start a web service in the default namespace by entering the following command: USD oc run web --namespace=default --image=nginx --labels="app=web" --expose --port=80 Run the following command to create the prod namespace: USD oc create namespace prod Run the following command to label the prod namespace: USD oc label namespace/prod purpose=production Run the following command to create the dev namespace: USD oc create namespace dev Run the following command to label the dev namespace: USD oc label namespace/dev purpose=testing Run the following command to deploy an alpine image in the dev namespace and to start a shell: USD oc run test-USDRANDOM --namespace=dev --rm -i -t --image=alpine -- sh Run the following command in the shell and observe that the request is blocked: # wget -qO- --timeout=2 http://web.default Expected output wget: download timed out Run the following command to deploy an alpine image in the prod namespace and start a shell: USD oc run test-USDRANDOM --namespace=prod --rm -i -t --image=alpine -- sh Run the following command in the shell and observe that the request is allowed: # wget -qO- --timeout=2 http://web.default Expected output <!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> html { color-scheme: light dark; } body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href="http://nginx.org/">nginx.org</a>.<br/> Commercial support is available at <a href="http://nginx.com/">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html> 18.2.7. Additional resources Accessing the web console Logging for egress firewall and network policy rules 18.3. Viewing a network policy As a user with the admin role, you can view a network policy for a namespace. 18.3.1. Example NetworkPolicy object The following annotates an example NetworkPolicy object: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-27107 1 spec: podSelector: 2 matchLabels: app: mongodb ingress: - from: - podSelector: 3 matchLabels: app: app ports: 4 - protocol: TCP port: 27017 1 The name of the NetworkPolicy object. 2 A selector that describes the pods to which the policy applies. The policy object can only select pods in the project that defines the NetworkPolicy object. 3 A selector that matches the pods from which the policy object allows ingress traffic. The selector matches pods in the same namespace as the NetworkPolicy. 4 A list of one or more destination ports on which to accept traffic. 18.3.2. Viewing network policies using the CLI You can examine the network policies in a namespace. Note If you log in with a user with the cluster-admin role, then you can view any network policy in the cluster. Prerequisites You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with admin privileges. You are working in the namespace where the network policy exists. 
Procedure List network policies in a namespace: To view network policy objects defined in a namespace, enter the following command: USD oc get networkpolicy Optional: To examine a specific network policy, enter the following command: USD oc describe networkpolicy <policy_name> -n <namespace> where: <policy_name> Specifies the name of the network policy to inspect. <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. For example: USD oc describe networkpolicy allow-same-namespace Output for oc describe command Name: allow-same-namespace Namespace: ns1 Created on: 2021-05-24 22:28:56 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: PodSelector: <none> Not affecting egress traffic Policy Types: Ingress Note If you log in to the web console with cluster-admin privileges, you have a choice of viewing a network policy in any namespace in the cluster directly in YAML or from a form in the web console. 18.4. Editing a network policy As a user with the admin role, you can edit an existing network policy for a namespace. 18.4.1. Editing a network policy You can edit a network policy in a namespace. Note If you log in with a user with the cluster-admin role, then you can edit a network policy in any namespace in the cluster. Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OpenShift SDN network provider with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with admin privileges. You are working in the namespace where the network policy exists. Procedure Optional: To list the network policy objects in a namespace, enter the following command: USD oc get networkpolicy where: <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Edit the network policy object. If you saved the network policy definition in a file, edit the file and make any necessary changes, and then enter the following command. USD oc apply -n <namespace> -f <policy_file>.yaml where: <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. <policy_file> Specifies the name of the file containing the network policy. If you need to update the network policy object directly, enter the following command: USD oc edit networkpolicy <policy_name> -n <namespace> where: <policy_name> Specifies the name of the network policy. <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Confirm that the network policy object is updated. USD oc describe networkpolicy <policy_name> -n <namespace> where: <policy_name> Specifies the name of the network policy. <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Note If you log in to the web console with cluster-admin privileges, you have a choice of editing a network policy in any namespace in the cluster directly in YAML or from the policy in the web console through the Actions menu. 18.4.2. 
Example NetworkPolicy object The following annotates an example NetworkPolicy object: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-27107 1 spec: podSelector: 2 matchLabels: app: mongodb ingress: - from: - podSelector: 3 matchLabels: app: app ports: 4 - protocol: TCP port: 27017 1 The name of the NetworkPolicy object. 2 A selector that describes the pods to which the policy applies. The policy object can only select pods in the project that defines the NetworkPolicy object. 3 A selector that matches the pods from which the policy object allows ingress traffic. The selector matches pods in the same namespace as the NetworkPolicy. 4 A list of one or more destination ports on which to accept traffic. 18.4.3. Additional resources Creating a network policy 18.5. Deleting a network policy As a user with the admin role, you can delete a network policy from a namespace. 18.5.1. Deleting a network policy using the CLI You can delete a network policy in a namespace. Note If you log in with a user with the cluster-admin role, then you can delete any network policy in the cluster. Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OpenShift SDN network provider with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with admin privileges. You are working in the namespace where the network policy exists. Procedure To delete a network policy object, enter the following command: USD oc delete networkpolicy <policy_name> -n <namespace> where: <policy_name> Specifies the name of the network policy. <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Example output networkpolicy.networking.k8s.io/default-deny deleted Note If you log in to the web console with cluster-admin privileges, you have a choice of deleting a network policy in any namespace in the cluster directly in YAML or from the policy in the web console through the Actions menu. 18.6. Defining a default network policy for projects As a cluster administrator, you can modify the new project template to automatically include network policies when you create a new project. If you do not yet have a customized template for new projects, you must first create one. 18.6.1. Modifying the template for new projects As a cluster administrator, you can modify the default project template so that new projects are created using your custom requirements. To create your own custom project template: Procedure Log in as a user with cluster-admin privileges. Generate the default project template: USD oc adm create-bootstrap-project-template -o yaml > template.yaml Use a text editor to modify the generated template.yaml file by adding objects or modifying existing objects. The project template must be created in the openshift-config namespace. Load your modified template: USD oc create -f template.yaml -n openshift-config Edit the project configuration resource using the web console or CLI. Using the web console: Navigate to the Administration Cluster Settings page. Click Configuration to view all configuration resources. Find the entry for Project and click Edit YAML . Using the CLI: Edit the project.config.openshift.io/cluster resource: USD oc edit project.config.openshift.io/cluster Update the spec section to include the projectRequestTemplate and name parameters, and set the name of your uploaded project template. 
The default name is project-request . Project configuration resource with custom project template apiVersion: config.openshift.io/v1 kind: Project metadata: # ... spec: projectRequestTemplate: name: <template_name> # ... After you save your changes, create a new project to verify that your changes were successfully applied. 18.6.2. Adding network policies to the new project template As a cluster administrator, you can add network policies to the default template for new projects. OpenShift Container Platform will automatically create all the NetworkPolicy objects specified in the template in the project. Prerequisites Your cluster uses a default CNI network provider that supports NetworkPolicy objects, such as the OpenShift SDN network provider with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You must log in to the cluster with a user with cluster-admin privileges. You must have created a custom default project template for new projects. Procedure Edit the default template for a new project by running the following command: USD oc edit template <project_template> -n openshift-config Replace <project_template> with the name of the default template that you configured for your cluster. The default template name is project-request . In the template, add each NetworkPolicy object as an element to the objects parameter. The objects parameter accepts a collection of one or more objects. In the following example, the objects parameter collection includes several NetworkPolicy objects. objects: - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-same-namespace spec: podSelector: {} ingress: - from: - podSelector: {} - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: ingress podSelector: {} policyTypes: - Ingress - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-kube-apiserver-operator spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-kube-apiserver-operator podSelector: matchLabels: app: kube-apiserver-operator policyTypes: - Ingress ... Optional: Create a new project to confirm that your network policy objects are created successfully by running the following commands: Create a new project: USD oc new-project <project> 1 1 Replace <project> with the name for the project you are creating. Confirm that the network policy objects in the new project template exist in the new project: USD oc get networkpolicy NAME POD-SELECTOR AGE allow-from-openshift-ingress <none> 7s allow-from-same-namespace <none> 7s 18.7. Configuring multitenant isolation with network policy As a cluster administrator, you can configure your network policies to provide multitenant network isolation. Note If you are using the OpenShift SDN network plugin, configuring network policies as described in this section provides network isolation similar to multitenant mode but with network policy mode set. 18.7.1. Configuring multitenant isolation by using network policy You can configure your project to isolate it from pods and services in other project namespaces. Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OpenShift SDN network provider with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). 
You are logged in to the cluster with a user with admin privileges. Procedure Create the following NetworkPolicy objects: A policy named allow-from-openshift-ingress . USD cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: policy-group.network.openshift.io/ingress: "" podSelector: {} policyTypes: - Ingress EOF Note policy-group.network.openshift.io/ingress: "" is the preferred namespace selector label for OpenShift SDN. You can use the network.openshift.io/policy-group: ingress namespace selector label, but this is a legacy label. A policy named allow-from-openshift-monitoring : USD cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-monitoring spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: monitoring podSelector: {} policyTypes: - Ingress EOF A policy named allow-same-namespace : USD cat << EOF| oc create -f - kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: ingress: - from: - podSelector: {} EOF A policy named allow-from-kube-apiserver-operator : USD cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-kube-apiserver-operator spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-kube-apiserver-operator podSelector: matchLabels: app: kube-apiserver-operator policyTypes: - Ingress EOF For more details, see New kube-apiserver-operator webhook controller validating health of webhook . Optional: To confirm that the network policies exist in your current project, enter the following command: USD oc describe networkpolicy Example output Name: allow-from-openshift-ingress Namespace: example1 Created on: 2020-06-09 00:28:17 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: NamespaceSelector: network.openshift.io/policy-group: ingress Not affecting egress traffic Policy Types: Ingress Name: allow-from-openshift-monitoring Namespace: example1 Created on: 2020-06-09 00:29:57 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: NamespaceSelector: network.openshift.io/policy-group: monitoring Not affecting egress traffic Policy Types: Ingress 18.7.2. steps Defining a default network policy 18.7.3. Additional resources OpenShift SDN network isolation modes | [
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default spec: podSelector: {} ingress: []",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: ingress podSelector: {} policyTypes: - Ingress",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: {} ingress: - from: - podSelector: {}",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-http-and-https spec: podSelector: matchLabels: role: frontend ingress: - ports: - protocol: TCP port: 80 - protocol: TCP port: 443",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-pod-and-namespace-both spec: podSelector: matchLabels: name: test-pods ingress: - from: - namespaceSelector: matchLabels: project: project_name podSelector: matchLabels: name: test-pods",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-router spec: ingress: - from: - namespaceSelector: matchLabels: policy-group.network.openshift.io/ingress: \"\" 1 podSelector: {} policyTypes: - Ingress",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-hostnetwork spec: ingress: - from: - namespaceSelector: matchLabels: policy-group.network.openshift.io/host-network: \"\" podSelector: {} policyTypes: - Ingress",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: test-network-policy spec: podSelector: {} ingress: - from: - podSelector: matchLabels: role: frontend - from: - podSelector: matchLabels: role: backend",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: test-network-policy spec: podSelector: {} ingress: - from: - podSelector: matchExpressions: - {key: role, operator: In, values: [frontend, backend]}",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: policy1 spec: podSelector: matchLabels: role: db ingress: - from: - podSelector: matchLabels: role: frontend --- apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: policy2 spec: podSelector: matchLabels: role: client ingress: - from: - podSelector: matchLabels: role: frontend",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: policy3 spec: podSelector: matchExpressions: - {key: role, operator: In, values: [db, client]} ingress: - from: - podSelector: matchLabels: role: frontend",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-27107 1 spec: podSelector: 2 matchLabels: app: mongodb ingress: - from: - podSelector: 3 matchLabels: app: app ports: 4 - protocol: TCP port: 27017",
"touch <policy_name>.yaml",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default spec: podSelector: ingress: []",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: ingress: - from: - podSelector: {}",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-traffic-pod spec: podSelector: matchLabels: pod: pod-a policyTypes: - Ingress ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: namespace-y",
"oc apply -f <policy_name>.yaml -n <namespace>",
"networkpolicy.networking.k8s.io/deny-by-default created",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default namespace: default 1 spec: podSelector: {} 2 ingress: [] 3",
"oc apply -f deny-by-default.yaml",
"networkpolicy.networking.k8s.io/deny-by-default created",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: web-allow-external namespace: default spec: policyTypes: - Ingress podSelector: matchLabels: app: web ingress: - {}",
"oc apply -f web-allow-external.yaml",
"networkpolicy.networking.k8s.io/web-allow-external created",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: web-allow-all-namespaces namespace: default spec: podSelector: matchLabels: app: web 1 policyTypes: - Ingress ingress: - from: - namespaceSelector: {} 2",
"oc apply -f web-allow-all-namespaces.yaml",
"networkpolicy.networking.k8s.io/web-allow-all-namespaces created",
"oc run web --namespace=default --image=nginx --labels=\"app=web\" --expose --port=80",
"oc run test-USDRANDOM --namespace=secondary --rm -i -t --image=alpine -- sh",
"wget -qO- --timeout=2 http://web.default",
"<!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> html { color-scheme: light dark; } body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href=\"http://nginx.org/\">nginx.org</a>.<br/> Commercial support is available at <a href=\"http://nginx.com/\">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html>",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: web-allow-prod namespace: default spec: podSelector: matchLabels: app: web 1 policyTypes: - Ingress ingress: - from: - namespaceSelector: matchLabels: purpose: production 2",
"oc apply -f web-allow-prod.yaml",
"networkpolicy.networking.k8s.io/web-allow-prod created",
"oc run web --namespace=default --image=nginx --labels=\"app=web\" --expose --port=80",
"oc create namespace prod",
"oc label namespace/prod purpose=production",
"oc create namespace dev",
"oc label namespace/dev purpose=testing",
"oc run test-USDRANDOM --namespace=dev --rm -i -t --image=alpine -- sh",
"wget -qO- --timeout=2 http://web.default",
"wget: download timed out",
"oc run test-USDRANDOM --namespace=prod --rm -i -t --image=alpine -- sh",
"wget -qO- --timeout=2 http://web.default",
"<!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> html { color-scheme: light dark; } body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href=\"http://nginx.org/\">nginx.org</a>.<br/> Commercial support is available at <a href=\"http://nginx.com/\">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html>",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-27107 1 spec: podSelector: 2 matchLabels: app: mongodb ingress: - from: - podSelector: 3 matchLabels: app: app ports: 4 - protocol: TCP port: 27017",
"oc get networkpolicy",
"oc describe networkpolicy <policy_name> -n <namespace>",
"oc describe networkpolicy allow-same-namespace",
"Name: allow-same-namespace Namespace: ns1 Created on: 2021-05-24 22:28:56 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: PodSelector: <none> Not affecting egress traffic Policy Types: Ingress",
"oc get networkpolicy",
"oc apply -n <namespace> -f <policy_file>.yaml",
"oc edit networkpolicy <policy_name> -n <namespace>",
"oc describe networkpolicy <policy_name> -n <namespace>",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-27107 1 spec: podSelector: 2 matchLabels: app: mongodb ingress: - from: - podSelector: 3 matchLabels: app: app ports: 4 - protocol: TCP port: 27017",
"oc delete networkpolicy <policy_name> -n <namespace>",
"networkpolicy.networking.k8s.io/default-deny deleted",
"oc adm create-bootstrap-project-template -o yaml > template.yaml",
"oc create -f template.yaml -n openshift-config",
"oc edit project.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestTemplate: name: <template_name>",
"oc edit template <project_template> -n openshift-config",
"objects: - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-same-namespace spec: podSelector: {} ingress: - from: - podSelector: {} - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: ingress podSelector: {} policyTypes: - Ingress - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-kube-apiserver-operator spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-kube-apiserver-operator podSelector: matchLabels: app: kube-apiserver-operator policyTypes: - Ingress",
"oc new-project <project> 1",
"oc get networkpolicy NAME POD-SELECTOR AGE allow-from-openshift-ingress <none> 7s allow-from-same-namespace <none> 7s",
"cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: policy-group.network.openshift.io/ingress: \"\" podSelector: {} policyTypes: - Ingress EOF",
"cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-monitoring spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: monitoring podSelector: {} policyTypes: - Ingress EOF",
"cat << EOF| oc create -f - kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: ingress: - from: - podSelector: {} EOF",
"cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-kube-apiserver-operator spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-kube-apiserver-operator podSelector: matchLabels: app: kube-apiserver-operator policyTypes: - Ingress EOF",
"oc describe networkpolicy",
"Name: allow-from-openshift-ingress Namespace: example1 Created on: 2020-06-09 00:28:17 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: NamespaceSelector: network.openshift.io/policy-group: ingress Not affecting egress traffic Policy Types: Ingress Name: allow-from-openshift-monitoring Namespace: example1 Created on: 2020-06-09 00:29:57 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: NamespaceSelector: network.openshift.io/policy-group: monitoring Not affecting egress traffic Policy Types: Ingress"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/networking/network-policy |
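If the final connection test does not succeed as shown above, a quick check is to confirm that the labels the web-allow-prod policy matches on actually exist. The following is a minimal troubleshooting sketch that reuses the names from the example (the default namespace, the app=web pod label, and the purpose=production namespace label); substitute your own names as needed.

oc describe networkpolicy web-allow-prod -n default
oc get namespaces -l purpose=production --show-labels
oc get pods -n default -l app=web --show-labels

If the namespace query returns no results, the namespaceSelector in the policy has nothing to match, and traffic from that namespace remains blocked.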
3.5.3. Disabling ACPI Completely in the grub.conf File | 3.5.3. Disabling ACPI Completely in the grub.conf File The preferred method of disabling ACPI Soft-Off is with chkconfig management ( Section 3.5.2, "Disabling ACPI Soft-Off with chkconfig Management" ). If the preferred method is not effective for your cluster, you can disable ACPI Soft-Off with the BIOS power management ( Section 3.5.1, "Disabling ACPI Soft-Off with the BIOS" ). If neither of those methods is effective for your cluster, you can disable ACPI completely by appending acpi=off to the kernel boot command line in the grub.conf file. Important This method completely disables ACPI; some computers do not boot correctly if ACPI is completely disabled. Use this method only if the other methods are not effective for your cluster. You can disable ACPI completely by editing the grub.conf file of each cluster node as follows: Open /boot/grub/grub.conf with a text editor. Append acpi=off to the kernel boot command line in /boot/grub/grub.conf (see Example 3.2, "Kernel Boot Command Line with acpi=off Appended to It" ). Reboot the node. When the cluster is configured and running, verify that the node turns off immediately when fenced. Note You can fence the node with the fence_node command or Conga . Example 3.2. Kernel Boot Command Line with acpi=off Appended to It In this example, acpi=off has been appended to the kernel boot command line - the line starting with "kernel /vmlinuz-2.6.32-193.el6.x86_64". | [
"grub.conf generated by anaconda # Note that you do not have to rerun grub after making changes to this file NOTICE: You have a /boot partition. This means that all kernel and initrd paths are relative to /boot/, eg. root (hd0,0) kernel /vmlinuz-version ro root=/dev/mapper/vg_doc01-lv_root initrd /initrd-[generic-]version.img #boot=/dev/hda default=0 timeout=5 serial --unit=0 --speed=115200 terminal --timeout=5 serial console title Red Hat Enterprise Linux Server (2.6.32-193.el6.x86_64) root (hd0,0) kernel /vmlinuz-2.6.32-193.el6.x86_64 ro root=/dev/mapper/vg_doc01-lv_root console=ttyS0,115200n8 acpi=off initrd /initramfs-2.6.32-131.0.15.el6.x86_64.img"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s2-acpi-disable-boot-CA |
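As a follow-up to the fencing check described above, you can trigger a fence operation manually from another node in the cluster. This is a minimal sketch; node01.example.com is a placeholder node name, and the command assumes the cluster fencing configuration is already in place.

fence_node node01.example.com

If ACPI is disabled correctly, the fenced node powers off immediately rather than attempting a clean shutdown.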
Chapter 55. Predicate Filter Action | Chapter 55. Predicate Filter Action Filter based on a JsonPath Expression 55.1. Configuration Options The following table summarizes the configuration options available for the predicate-filter-action Kamelet: Property Name Description Type Default Example expression * Expression The JsonPath Expression to evaluate, without the external parentheses. Because this is a filter, the expression is applied as a negation: if the foo field in the example is equal to John, the message goes ahead; otherwise, it is filtered out. string "@.foo =~ /.*John/" Note Fields marked with an asterisk (*) are mandatory. 55.2. Dependencies At runtime, the predicate-filter-action Kamelet relies upon the presence of the following dependencies: camel:core camel:kamelet camel:jsonpath 55.3. Usage This section describes how you can use the predicate-filter-action . 55.3.1. Knative Action You can use the predicate-filter-action Kamelet as an intermediate step in a Knative binding. predicate-filter-action-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: predicate-filter-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: "Hello" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: predicate-filter-action properties: expression: "@.foo =~ /.*John/" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel 55.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 55.3.1.2. Procedure for using the cluster CLI Save the predicate-filter-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the action by using the following command: oc apply -f predicate-filter-action-binding.yaml 55.3.1.3. Procedure for using the Kamel CLI Configure and run the action by using the following command: kamel bind timer-source?message=Hello --step predicate-filter-action -p "step-0.expression=@.foo =~ /.*John/" channel:mychannel This command creates the KameletBinding in the current namespace on the cluster. 55.3.2. Kafka Action You can use the predicate-filter-action Kamelet as an intermediate step in a Kafka binding. predicate-filter-action-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: predicate-filter-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: "Hello" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: predicate-filter-action properties: expression: "@.foo =~ /.*John/" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic 55.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 55.3.2.2. Procedure for using the cluster CLI Save the predicate-filter-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the action by using the following command: oc apply -f predicate-filter-action-binding.yaml 55.3.2.3.
Procedure for using the Kamel CLI Configure and run the action by using the following command: kamel bind timer-source?message=Hello --step predicate-filter-action -p "step-0.expression=@.foo =~ /.*John/" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic This command creates the KameletBinding in the current namespace on the cluster. 55.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/predicate-filter-action.kamelet.yaml | [
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: predicate-filter-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: \"Hello\" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: predicate-filter-action properties: expression: \"@.foo =~ /.*John/\" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel",
"apply -f predicate-filter-action-binding.yaml",
"kamel bind timer-source?message=Hello --step predicate-filter-action -p \"[email protected] =~ /.*John/\" channel:mychannel",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: predicate-filter-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: \"Hello\" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: predicate-filter-action properties: expression: \"@.foo =~ /.*John/\" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic",
"apply -f predicate-filter-action-binding.yaml",
"kamel bind timer-source?message=Hello --step predicate-filter-action -p \"[email protected] =~ /.*John/\" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.7/html/kamelets_reference/predicate-filter-action |
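To make the filter behaviour concrete, consider two hypothetical message bodies evaluated against the example expression "@.foo =~ /.*John/": a body such as {"foo": "John"} matches the expression, so the message continues to the channel or topic, while a body such as {"foo": "Marty"} does not match and is filtered out. The payloads are invented for illustration only.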
Chapter 4. Important changes to OpenShift Jenkins images | Chapter 4. Important changes to OpenShift Jenkins images OpenShift Container Platform 4.11 moves the OpenShift Jenkins and OpenShift Agent Base images to the ocp-tools-4 repository at registry.redhat.io . It also removes the OpenShift Jenkins Maven and NodeJS Agent images from its payload: OpenShift Container Platform 4.11 moves the OpenShift Jenkins and OpenShift Agent Base images to the ocp-tools-4 repository at registry.redhat.io so that Red Hat can produce and update the images outside the OpenShift Container Platform lifecycle. Previously, these images were in the OpenShift Container Platform install payload and the openshift4 repository at registry.redhat.io . OpenShift Container Platform 4.10 deprecated the OpenShift Jenkins Maven and NodeJS Agent images. OpenShift Container Platform 4.11 removes these images from its payload. Red Hat no longer produces these images, and they are not available from the ocp-tools-4 repository at registry.redhat.io . Red Hat maintains the 4.10 and earlier versions of these images for any significant bug fixes or security CVEs, following the OpenShift Container Platform lifecycle policy . These changes support the OpenShift Container Platform 4.10 recommendation to use multiple container Pod Templates with the Jenkins Kubernetes Plugin . 4.1. Relocation of OpenShift Jenkins images OpenShift Container Platform 4.11 makes significant changes to the location and availability of specific OpenShift Jenkins images. Additionally, you can configure when and how to update these images. What stays the same with the OpenShift Jenkins images? The Cluster Samples Operator manages the ImageStream and Template objects for operating the OpenShift Jenkins images. By default, the Jenkins DeploymentConfig object from the Jenkins pod template triggers a redeployment when the Jenkins image changes. By default, this image is referenced by the jenkins:2 image stream tag of Jenkins image stream in the openshift namespace in the ImageStream YAML file in the Samples Operator payload. If you upgrade from OpenShift Container Platform 4.10 and earlier to 4.11, the deprecated maven and nodejs pod templates are still in the default image configuration. If you upgrade from OpenShift Container Platform 4.10 and earlier to 4.11, the jenkins-agent-maven and jenkins-agent-nodejs image streams still exist in your cluster. To maintain these image streams, see the following section, "What happens with the jenkins-agent-maven and jenkins-agent-nodejs image streams in the openshift namespace?" What changes in the support matrix of the OpenShift Jenkins image? Each new image in the ocp-tools-4 repository in the registry.redhat.io registry supports multiple versions of OpenShift Container Platform. When Red Hat updates one of these new images, it is simultaneously available for all versions. This availability is ideal when Red Hat updates an image in response to a security advisory. Initially, this change applies to OpenShift Container Platform 4.11 and later. It is planned that this change will eventually apply to OpenShift Container Platform 4.9 and later. Previously, each Jenkins image supported only one version of OpenShift Container Platform and Red Hat might update those images sequentially over time. What additions are there with the OpenShift Jenkins and Jenkins Agent Base ImageStream and ImageStreamTag objects? 
By moving from an in-payload image stream to an image stream that references non-payload images, OpenShift Container Platform can define additional image stream tags. Red Hat has created a series of new image stream tags to go along with the existing "value": "jenkins:2" and "value": "image-registry.openshift-image-registry.svc:5000/openshift/jenkins-agent-base-rhel8:latest" image stream tags present in OpenShift Container Platform 4.10 and earlier. These new image stream tags address some requests to improve how the Jenkins-related image streams are maintained. About the new image stream tags: ocp-upgrade-redeploy To update your Jenkins image when you upgrade OpenShift Container Platform, use this image stream tag in your Jenkins deployment configuration. This image stream tag corresponds to the existing 2 image stream tag of the jenkins image stream and the latest image stream tag of the jenkins-agent-base-rhel8 image stream. It employs an image tag specific to only one SHA or image digest. When the ocp-tools-4 image changes, such as for Jenkins security advisories, Red Hat Engineering updates the Cluster Samples Operator payload. user-maintained-upgrade-redeploy To manually redeploy Jenkins after you upgrade OpenShift Container Platform, use this image stream tag in your Jenkins deployment configuration. This image stream tag uses the least specific image version indicator available. When you redeploy Jenkins, run the following command: USD oc import-image jenkins:user-maintained-upgrade-redeploy -n openshift . When you issue this command, the OpenShift Container Platform ImageStream controller accesses the registry.redhat.io image registry and stores any updated images in the OpenShift image registry's slot for that Jenkins ImageStreamTag object. Otherwise, if you do not run this command, your Jenkins deployment configuration does not trigger a redeployment. scheduled-upgrade-redeploy To automatically redeploy the latest version of the Jenkins image when it is released, use this image stream tag in your Jenkins deployment configuration. This image stream tag uses the periodic importing of image stream tags feature of the OpenShift Container Platform image stream controller, which checks for changes in the backing image. If the image changes, for example, due to a recent Jenkins security advisory, OpenShift Container Platform triggers a redeployment of your Jenkins deployment configuration. See "Configuring periodic importing of image stream tags" in the following "Additional resources." What happens with the jenkins-agent-maven and jenkins-agent-nodejs image streams in the openshift namespace? The OpenShift Jenkins Maven and NodeJS Agent images for OpenShift Container Platform were deprecated in 4.10, and are removed from the OpenShift Container Platform install payload in 4.11. They do not have alternatives defined in the ocp-tools-4 repository. However, you can work around this by using the sidecar pattern described in the "Jenkins agent" topic mentioned in the following "Additional resources" section. However, the Cluster Samples Operator does not delete the jenkins-agent-maven and jenkins-agent-nodejs image streams created by prior releases, which point to the tags of the respective OpenShift Container Platform payload images on registry.redhat.io . Therefore, you can pull updates to these images by running the following commands: USD oc import-image jenkins-agent-nodejs -n openshift USD oc import-image jenkins-agent-maven -n openshift 4.2. 
Customizing the Jenkins image stream tag To override the default upgrade behavior and control how the Jenkins image is upgraded, you set the image stream tag value that your Jenkins deployment configurations use. The default upgrade behavior is the behavior that existed when the Jenkins image was part of the install payload. The image stream tag names, 2 and ocp-upgrade-redeploy , in the jenkins-rhel.json image stream file use SHA-specific image references. Therefore, when those tags are updated with a new SHA, the OpenShift Container Platform image change controller automatically redeploys the Jenkins deployment configuration from the associated templates, such as jenkins-ephemeral.json or jenkins-persistent.json . For new deployments, to override that default value, you change the value of the JENKINS_IMAGE_STREAM_TAG in the jenkins-ephemeral.json Jenkins template. For example, replace the 2 in "value": "jenkins:2" with one of the following image stream tags: ocp-upgrade-redeploy , the default value, updates your Jenkins image when you upgrade OpenShift Container Platform. user-maintained-upgrade-redeploy requires you to manually redeploy Jenkins by running USD oc import-image jenkins:user-maintained-upgrade-redeploy -n openshift after upgrading OpenShift Container Platform. scheduled-upgrade-redeploy periodically checks the given <image>:<tag> combination for changes and upgrades the image when it changes. The image change controller pulls the changed image and redeploys the Jenkins deployment configuration provisioned by the templates. For more information about this scheduled import policy, see the "Adding tags to image streams" in the following "Additional resources." Note To override the current upgrade value for existing deployments, change the values of the environment variables that correspond to those template parameters. Prerequisites You are running OpenShift Jenkins on OpenShift Container Platform 4.14. You know the namespace where OpenShift Jenkins is deployed. Procedure Set the image stream tag value, replacing <namespace> with namespace where OpenShift Jenkins is deployed and <image_stream_tag> with an image stream tag: Example USD oc patch dc jenkins -p '{"spec":{"triggers":[{"type":"ImageChange","imageChangeParams":{"automatic":true,"containerNames":["jenkins"],"from":{"kind":"ImageStreamTag","namespace":"<namespace>","name":"jenkins:<image_stream_tag>"}}}]}}' Tip Alternatively, to edit the Jenkins deployment configuration YAML, enter USD oc edit dc/jenkins -n <namespace> and update the value: 'jenkins:<image_stream_tag>' line. 4.3. Additional resources Adding tags to image streams Configuring periodic importing of image stream tags Jenkins agent Certified jenkins images Certified jenkins-agent-base images Certified jenkins-agent-maven images Certified jenkins-agent-nodejs images | [
"oc import-image jenkins-agent-nodejs -n openshift",
"oc import-image jenkins-agent-maven -n openshift",
"oc patch dc jenkins -p '{\"spec\":{\"triggers\":[{\"type\":\"ImageChange\",\"imageChangeParams\":{\"automatic\":true,\"containerNames\":[\"jenkins\"],\"from\":{\"kind\":\"ImageStreamTag\",\"namespace\":\"<namespace>\",\"name\":\"jenkins:<image_stream_tag>\"}}}]}}'"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/jenkins/important-changes-to-openshift-jenkins-images |
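As a concrete illustration of the image stream tags described above, the following sketch switches an existing Jenkins deployment to the user-maintained-upgrade-redeploy tag and then manually pulls the refreshed image after a cluster upgrade. Both commands are taken from this chapter; <namespace> is the same placeholder used in the patch example.

oc patch dc jenkins -p '{"spec":{"triggers":[{"type":"ImageChange","imageChangeParams":{"automatic":true,"containerNames":["jenkins"],"from":{"kind":"ImageStreamTag","namespace":"<namespace>","name":"jenkins:user-maintained-upgrade-redeploy"}}}]}}'
oc import-image jenkins:user-maintained-upgrade-redeploy -n openshift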
Chapter 1. Support policy | Chapter 1. Support policy Red Hat will support select major versions of Red Hat build of OpenJDK in its products. For consistency, these are the same versions that Oracle designates as long-term support (LTS) for the Oracle JDK. A major version of Red Hat build of OpenJDK will be supported for a minimum of six years from the time that version is first introduced. For more information, see the OpenJDK Life Cycle and Support Policy . Note RHEL 6 reached the end of life in November 2020. Because of this, RHEL 6 is no longer a supported configuration for Red Hat build of OpenJDK. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/release_notes_for_red_hat_build_of_openjdk_8.0.442/openjdk8-support-policy
Chapter 3. Configuring multi-architecture compute machines on an OpenShift cluster | Chapter 3. Configuring multi-architecture compute machines on an OpenShift cluster 3.1. About clusters with multi-architecture compute machines An OpenShift Container Platform cluster with multi-architecture compute machines is a cluster that supports compute machines with different architectures. Note When there are nodes with multiple architectures in your cluster, the architecture of your image must be consistent with the architecture of the node. You need to ensure that the pod is assigned to the node with the appropriate architecture and that it matches the image architecture. For more information on assigning pods to nodes, see Assigning pods to nodes . Important The Cluster Samples Operator is not supported on clusters with multi-architecture compute machines. Your cluster can be created without this capability. For more information, see Cluster capabilities . For information on migrating your single-architecture cluster to a cluster that supports multi-architecture compute machines, see Migrating to a cluster with multi-architecture compute machines . 3.1.1. Configuring your cluster with multi-architecture compute machines To create a cluster with multi-architecture compute machines with different installation options and platforms, you can use the documentation in the following table: Table 3.1. Cluster with multi-architecture compute machine installation options Documentation section Platform User-provisioned installation Installer-provisioned installation Control Plane Compute node Creating a cluster with multi-architecture compute machines on Azure Microsoft Azure [✓] aarch64 or x86_64 aarch64 , x86_64 Creating a cluster with multi-architecture compute machines on AWS Amazon Web Services (AWS) [✓] aarch64 or x86_64 aarch64 , x86_64 Creating a cluster with multi-architecture compute machines on GCP Google Cloud Platform (GCP) [✓] aarch64 or x86_64 aarch64 , x86_64 Creating a cluster with multi-architecture compute machines on bare metal, IBM Power, or IBM Z Bare metal [✓] aarch64 or x86_64 aarch64 , x86_64 IBM Power [✓] x86_64 or ppc64le x86_64 , ppc64le IBM Z [✓] x86_64 or s390x x86_64 , s390x Creating a cluster with multi-architecture compute machines on IBM Z(R) and IBM(R) LinuxONE with z/VM IBM Z(R) and IBM(R) LinuxONE [✓] x86_64 x86_64 , s390x Creating a cluster with multi-architecture compute machines on IBM Z(R) and IBM(R) LinuxONE with RHEL KVM IBM Z(R) and IBM(R) LinuxONE [✓] x86_64 x86_64 , s390x Creating a cluster with multi-architecture compute machines on IBM Power(R) IBM Power(R) [✓] x86_64 x86_64 , ppc64le Important Autoscaling from zero is currently not supported on Google Cloud Platform (GCP). 3.2. Creating a cluster with multi-architecture compute machine on Azure To deploy an Azure cluster with multi-architecture compute machines, you must first create a single-architecture Azure installer-provisioned cluster that uses the multi-architecture installer binary. For more information on Azure installations, see Installing a cluster on Azure with customizations . You can also migrate your current cluster with single-architecture compute machines to a cluster with multi-architecture compute machines. For more information, see Migrating to a cluster with multi-architecture compute machines . After creating a multi-architecture cluster, you can add nodes with different architectures to the cluster. 3.2.1. 
Verifying cluster compatibility Before you can start adding compute nodes of different architectures to your cluster, you must verify that your cluster is multi-architecture compatible. Prerequisites You installed the OpenShift CLI ( oc ). Procedure Log in to the OpenShift CLI ( oc ). You can check that your cluster uses the architecture payload by running the following command: USD oc adm release info -o jsonpath="{ .metadata.metadata}" Verification If you see the following output, your cluster is using the multi-architecture payload: { "release.openshift.io/architecture": "multi", "url": "https://access.redhat.com/errata/<errata_version>" } You can then begin adding multi-arch compute nodes to your cluster. If you see the following output, your cluster is not using the multi-architecture payload: { "url": "https://access.redhat.com/errata/<errata_version>" } Important To migrate your cluster so the cluster supports multi-architecture compute machines, follow the procedure in Migrating to a cluster with multi-architecture compute machines . 3.2.2. Creating a 64-bit ARM boot image using the Azure image gallery The following procedure describes how to manually generate a 64-bit ARM boot image. Prerequisites You installed the Azure CLI ( az ). You created a single-architecture Azure installer-provisioned cluster with the multi-architecture installer binary. Procedure Log in to your Azure account: USD az login Create a storage account and upload the aarch64 virtual hard disk (VHD) to your storage account. The OpenShift Container Platform installation program creates a resource group, however, the boot image can also be uploaded to a custom named resource group: USD az storage account create -n USD{STORAGE_ACCOUNT_NAME} -g USD{RESOURCE_GROUP} -l westus --sku Standard_LRS 1 1 The westus object is an example region. Create a storage container using the storage account you generated: USD az storage container create -n USD{CONTAINER_NAME} --account-name USD{STORAGE_ACCOUNT_NAME} You must use the OpenShift Container Platform installation program JSON file to extract the URL and aarch64 VHD name: Extract the URL field and set it to RHCOS_VHD_ORIGIN_URL as the file name by running the following command: USD RHCOS_VHD_ORIGIN_URL=USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.aarch64."rhel-coreos-extensions"."azure-disk".url') Extract the aarch64 VHD name and set it to BLOB_NAME as the file name by running the following command: USD BLOB_NAME=rhcos-USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.aarch64."rhel-coreos-extensions"."azure-disk".release')-azure.aarch64.vhd Generate a shared access signature (SAS) token. 
Use this token to upload the RHCOS VHD to your storage container with the following commands: USD end=`date -u -d "30 minutes" '+%Y-%m-%dT%H:%MZ'` USD sas=`az storage container generate-sas -n USD{CONTAINER_NAME} --account-name USD{STORAGE_ACCOUNT_NAME} --https-only --permissions dlrw --expiry USDend -o tsv` Copy the RHCOS VHD into the storage container: USD az storage blob copy start --account-name USD{STORAGE_ACCOUNT_NAME} --sas-token "USDsas" \ --source-uri "USD{RHCOS_VHD_ORIGIN_URL}" \ --destination-blob "USD{BLOB_NAME}" --destination-container USD{CONTAINER_NAME} You can check the status of the copying process with the following command: USD az storage blob show -c USD{CONTAINER_NAME} -n USD{BLOB_NAME} --account-name USD{STORAGE_ACCOUNT_NAME} | jq .properties.copy Example output { "completionTime": null, "destinationSnapshot": null, "id": "1fd97630-03ca-489a-8c4e-cfe839c9627d", "incrementalCopy": null, "progress": "17179869696/17179869696", "source": "https://rhcos.blob.core.windows.net/imagebucket/rhcos-411.86.202207130959-0-azure.aarch64.vhd", "status": "success", 1 "statusDescription": null } 1 If the status parameter displays the success object, the copying process is complete. Create an image gallery using the following command: USD az sig create --resource-group USD{RESOURCE_GROUP} --gallery-name USD{GALLERY_NAME} Use the image gallery to create an image definition. In the following example command, rhcos-arm64 is the name of the image definition. USD az sig image-definition create --resource-group USD{RESOURCE_GROUP} --gallery-name USD{GALLERY_NAME} --gallery-image-definition rhcos-arm64 --publisher RedHat --offer arm --sku arm64 --os-type linux --architecture Arm64 --hyper-v-generation V2 To get the URL of the VHD and set it to RHCOS_VHD_URL as the file name, run the following command: USD RHCOS_VHD_URL=USD(az storage blob url --account-name USD{STORAGE_ACCOUNT_NAME} -c USD{CONTAINER_NAME} -n "USD{BLOB_NAME}" -o tsv) Use the RHCOS_VHD_URL file, your storage account, resource group, and image gallery to create an image version. In the following example, 1.0.0 is the image version. USD az sig image-version create --resource-group USD{RESOURCE_GROUP} --gallery-name USD{GALLERY_NAME} --gallery-image-definition rhcos-arm64 --gallery-image-version 1.0.0 --os-vhd-storage-account USD{STORAGE_ACCOUNT_NAME} --os-vhd-uri USD{RHCOS_VHD_URL} Your arm64 boot image is now generated. You can access the ID of your image with the following command: USD az sig image-version show -r USDGALLERY_NAME -g USDRESOURCE_GROUP -i rhcos-arm64 -e 1.0.0 The following example image ID is used in the recourseID parameter of the compute machine set: Example resourceID /resourceGroups/USD{RESOURCE_GROUP}/providers/Microsoft.Compute/galleries/USD{GALLERY_NAME}/images/rhcos-arm64/versions/1.0.0 3.2.3. Creating a 64-bit x86 boot image using the Azure image gallery The following procedure describes how to manually generate a 64-bit x86 boot image. Prerequisites You installed the Azure CLI ( az ). You created a single-architecture Azure installer-provisioned cluster with the multi-architecture installer binary. Procedure Log in to your Azure account by running the following command: USD az login Create a storage account and upload the x86_64 virtual hard disk (VHD) to your storage account by running the following command. The OpenShift Container Platform installation program creates a resource group. 
However, the boot image can also be uploaded to a custom named resource group: USD az storage account create -n USD{STORAGE_ACCOUNT_NAME} -g USD{RESOURCE_GROUP} -l westus --sku Standard_LRS 1 1 The westus object is an example region. Create a storage container using the storage account you generated by running the following command: USD az storage container create -n USD{CONTAINER_NAME} --account-name USD{STORAGE_ACCOUNT_NAME} Use the OpenShift Container Platform installation program JSON file to extract the URL and x86_64 VHD name: Extract the URL field and set it to RHCOS_VHD_ORIGIN_URL as the file name by running the following command: USD RHCOS_VHD_ORIGIN_URL=USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.x86_64."rhel-coreos-extensions"."azure-disk".url') Extract the x86_64 VHD name and set it to BLOB_NAME as the file name by running the following command: USD BLOB_NAME=rhcos-USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.x86_64."rhel-coreos-extensions"."azure-disk".release')-azure.x86_64.vhd Generate a shared access signature (SAS) token. Use this token to upload the RHCOS VHD to your storage container by running the following commands: USD end=`date -u -d "30 minutes" '+%Y-%m-%dT%H:%MZ'` USD sas=`az storage container generate-sas -n USD{CONTAINER_NAME} --account-name USD{STORAGE_ACCOUNT_NAME} --https-only --permissions dlrw --expiry USDend -o tsv` Copy the RHCOS VHD into the storage container by running the following command: USD az storage blob copy start --account-name USD{STORAGE_ACCOUNT_NAME} --sas-token "USDsas" \ --source-uri "USD{RHCOS_VHD_ORIGIN_URL}" \ --destination-blob "USD{BLOB_NAME}" --destination-container USD{CONTAINER_NAME} You can check the status of the copying process by running the following command: USD az storage blob show -c USD{CONTAINER_NAME} -n USD{BLOB_NAME} --account-name USD{STORAGE_ACCOUNT_NAME} | jq .properties.copy Example output { "completionTime": null, "destinationSnapshot": null, "id": "1fd97630-03ca-489a-8c4e-cfe839c9627d", "incrementalCopy": null, "progress": "17179869696/17179869696", "source": "https://rhcos.blob.core.windows.net/imagebucket/rhcos-411.86.202207130959-0-azure.aarch64.vhd", "status": "success", 1 "statusDescription": null } 1 If the status parameter displays the success object, the copying process is complete. Create an image gallery by running the following command: USD az sig create --resource-group USD{RESOURCE_GROUP} --gallery-name USD{GALLERY_NAME} Use the image gallery to create an image definition by running the following command: USD az sig image-definition create --resource-group USD{RESOURCE_GROUP} --gallery-name USD{GALLERY_NAME} --gallery-image-definition rhcos-x86_64 --publisher RedHat --offer x86_64 --sku x86_64 --os-type linux --architecture x64 --hyper-v-generation V2 In this example command, rhcos-x86_64 is the name of the image definition. 
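Optional: before continuing, you can confirm that the image definition is registered in the gallery. This is a minimal verification sketch that reuses the resource group and gallery variables from the preceding steps; az sig image-definition show is a standard Azure CLI command, but confirm the flags against your installed CLI version.

USD az sig image-definition show --resource-group USD{RESOURCE_GROUP} --gallery-name USD{GALLERY_NAME} --gallery-image-definition rhcos-x86_64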
To get the URL of the VHD and set it to RHCOS_VHD_URL as the file name, run the following command: USD RHCOS_VHD_URL=USD(az storage blob url --account-name USD{STORAGE_ACCOUNT_NAME} -c USD{CONTAINER_NAME} -n "USD{BLOB_NAME}" -o tsv) Use the RHCOS_VHD_URL file, your storage account, resource group, and image gallery to create an image version by running the following command: USD az sig image-version create --resource-group USD{RESOURCE_GROUP} --gallery-name USD{GALLERY_NAME} --gallery-image-definition rhcos-arm64 --gallery-image-version 1.0.0 --os-vhd-storage-account USD{STORAGE_ACCOUNT_NAME} --os-vhd-uri USD{RHCOS_VHD_URL} In this example, 1.0.0 is the image version. Optional: Access the ID of the generated x86_64 boot image by running the following command: USD az sig image-version show -r USDGALLERY_NAME -g USDRESOURCE_GROUP -i rhcos-x86_64 -e 1.0.0 The following example image ID is used in the recourseID parameter of the compute machine set: Example resourceID /resourceGroups/USD{RESOURCE_GROUP}/providers/Microsoft.Compute/galleries/USD{GALLERY_NAME}/images/rhcos-x86_64/versions/1.0.0 3.2.4. Adding a multi-architecture compute machine set to your Azure cluster After creating a multi-architecture cluster, you can add nodes with different architectures. You can add multi-architecture compute machines to a multi-architecture cluster in the following ways: Adding 64-bit x86 compute machines to a cluster that uses 64-bit ARM control plane machines and already includes 64-bit ARM compute machines. In this case, 64-bit x86 is considered the secondary architecture. Adding 64-bit ARM compute machines to a cluster that uses 64-bit x86 control plane machines and already includes 64-bit x86 compute machines. In this case, 64-bit ARM is considered the secondary architecture. To create a custom compute machine set on Azure, see "Creating a compute machine set on Azure". Note Before adding a secondary architecture node to your cluster, it is recommended to install the Multiarch Tuning Operator, and deploy a ClusterPodPlacementConfig custom resource. For more information, see "Managing workloads on multi-architecture clusters by using the Multiarch Tuning Operator". Prerequisites You installed the OpenShift CLI ( oc ). You created a 64-bit ARM or 64-bit x86 boot image. You used the installation program to create a 64-bit ARM or 64-bit x86 single-architecture Azure cluster with the multi-architecture installer binary. Procedure Log in to the OpenShift CLI ( oc ). Create a YAML file, and add the configuration to create a compute machine set to control the 64-bit ARM or 64-bit x86 compute nodes in your cluster. 
Example MachineSet object for an Azure 64-bit ARM or 64-bit x86 compute node apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker name: <infrastructure_id>-machine-set-0 namespace: openshift-machine-api spec: replicas: 2 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-machine-set-0 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: <infrastructure_id>-machine-set-0 spec: lifecycleHooks: {} metadata: {} providerSpec: value: acceleratedNetworking: true apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: offer: "" publisher: "" resourceID: /resourceGroups/USD{RESOURCE_GROUP}/providers/Microsoft.Compute/galleries/USD{GALLERY_NAME}/images/rhcos-arm64/versions/1.0.0 1 sku: "" version: "" kind: AzureMachineProviderSpec location: <region> managedIdentity: <infrastructure_id>-identity networkResourceGroup: <infrastructure_id>-rg osDisk: diskSettings: {} diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: <infrastructure_id> resourceGroup: <infrastructure_id>-rg subnet: <infrastructure_id>-worker-subnet userDataSecret: name: worker-user-data vmSize: Standard_D4ps_v5 2 vnet: <infrastructure_id>-vnet zone: "<zone>" 1 Set the resourceID parameter to either arm64 or amd64 boot image. 2 Set the vmSize parameter to the instance type used in your installation. Some example instance types are Standard_D4ps_v5 or D8ps . Create the compute machine set by running the following command: USD oc create -f <file_name> 1 1 Replace <file_name> with the name of the YAML file with compute machine set configuration. For example: arm64-machine-set-0.yaml , or amd64-machine-set-0.yaml . Verification Verify that the new machines are running by running the following command: USD oc get machineset -n openshift-machine-api The output must include the machine set that you created. Example output NAME DESIRED CURRENT READY AVAILABLE AGE <infrastructure_id>-machine-set-0 2 2 2 2 10m You can check if the nodes are ready and schedulable by running the following command: USD oc get nodes Additional resources Creating a compute machine set on Azure Managing workloads on multi-architecture clusters by using the Multiarch Tuning Operator 3.3. Creating a cluster with multi-architecture compute machines on AWS To create an AWS cluster with multi-architecture compute machines, you must first create a single-architecture AWS installer-provisioned cluster with the multi-architecture installer binary. For more information on AWS installations, see Installing a cluster on AWS with customizations . You can also migrate your current cluster with single-architecture compute machines to a cluster with multi-architecture compute machines. For more information, see Migrating to a cluster with multi-architecture compute machines . After creating a multi-architecture cluster, you can add nodes with different architectures to the cluster. 3.3.1. 
Verifying cluster compatibility Before you can start adding compute nodes of different architectures to your cluster, you must verify that your cluster is multi-architecture compatible. Prerequisites You installed the OpenShift CLI ( oc ). Procedure Log in to the OpenShift CLI ( oc ). You can check that your cluster uses the architecture payload by running the following command: USD oc adm release info -o jsonpath="{ .metadata.metadata}" Verification If you see the following output, your cluster is using the multi-architecture payload: { "release.openshift.io/architecture": "multi", "url": "https://access.redhat.com/errata/<errata_version>" } You can then begin adding multi-arch compute nodes to your cluster. If you see the following output, your cluster is not using the multi-architecture payload: { "url": "https://access.redhat.com/errata/<errata_version>" } Important To migrate your cluster so the cluster supports multi-architecture compute machines, follow the procedure in Migrating to a cluster with multi-architecture compute machines . 3.3.2. Adding a multi-architecture compute machine set to your AWS cluster After creating a multi-architecture cluster, you can add nodes with different architectures. You can add multi-architecture compute machines to a multi-architecture cluster in the following ways: Adding 64-bit x86 compute machines to a cluster that uses 64-bit ARM control plane machines and already includes 64-bit ARM compute machines. In this case, 64-bit x86 is considered the secondary architecture. Adding 64-bit ARM compute machines to a cluster that uses 64-bit x86 control plane machines and already includes 64-bit x86 compute machines. In this case, 64-bit ARM is considered the secondary architecture. Note Before adding a secondary architecture node to your cluster, it is recommended to install the Multiarch Tuning Operator, and deploy a ClusterPodPlacementConfig custom resource. For more information, see "Managing workloads on multi-architecture clusters by using the Multiarch Tuning Operator". Prerequisites You installed the OpenShift CLI ( oc ). You used the installation program to create an 64-bit ARM or 64-bit x86 single-architecture AWS cluster with the multi-architecture installer binary. Procedure Log in to the OpenShift CLI ( oc ). Create a YAML file, and add the configuration to create a compute machine set to control the 64-bit ARM or 64-bit x86 compute nodes in your cluster. 
Example MachineSet object for an AWS 64-bit ARM or x86 compute node apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-aws-machine-set-0 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> 5 machine.openshift.io/cluster-api-machine-type: <role> 6 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> 7 spec: metadata: labels: node-role.kubernetes.io/<role>: "" providerSpec: value: ami: id: ami-02a574449d4f4d280 8 apiVersion: awsproviderconfig.openshift.io/v1beta1 blockDevices: - ebs: iops: 0 volumeSize: 120 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: <infrastructure_id>-worker-profile 9 instanceType: m6g.xlarge 10 kind: AWSMachineProviderConfig placement: availabilityZone: us-east-1a 11 region: <region> 12 securityGroups: - filters: - name: tag:Name values: - <infrastructure_id>-node 13 subnet: filters: - name: tag:Name values: - <infrastructure_id>-subnet-private-<zone> tags: - name: kubernetes.io/cluster/<infrastructure_id> 14 value: owned - name: <custom_tag_name> value: <custom_tag_value> userDataSecret: name: worker-user-data 1 2 3 9 13 14 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI ( oc ) installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath="{.status.infrastructureName}{'\n'}" infrastructure cluster 4 7 Specify the infrastructure ID, role node label, and zone. 5 6 Specify the role node label to add. 8 Specify a Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) for your AWS region for the nodes. The RHCOS AMI must be compatible with the machine architecture. USD oc get configmap/coreos-bootimages \ -n openshift-machine-config-operator \ -o jsonpath='{.data.stream}' | jq \ -r '.architectures.<arch>.images.aws.regions."<region>".image' 10 Specify a machine type that aligns with the CPU architecture of the chosen AMI. For more information, see "Tested instance types for AWS 64-bit ARM" 11 Specify the zone. For example, us-east-1a . Ensure that the zone you select has machines with the required architecture. 12 Specify the region. For example, us-east-1 . Ensure that the zone you select has machines with the required architecture. Create the compute machine set by running the following command: USD oc create -f <file_name> 1 1 Replace <file_name> with the name of the YAML file with compute machine set configuration. For example: aws-arm64-machine-set-0.yaml , or aws-amd64-machine-set-0.yaml . Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api The output must include the machine set that you created. 
Example output NAME DESIRED CURRENT READY AVAILABLE AGE <infrastructure_id>-aws-machine-set-0 2 2 2 2 10m You can check if the nodes are ready and schedulable by running the following command: USD oc get nodes Additional resources Tested instance types for AWS 64-bit ARM Managing workloads on multi-architecture clusters by using the Multiarch Tuning Operator 3.4. Creating a cluster with multi-architecture compute machines on GCP To create a Google Cloud Platform (GCP) cluster with multi-architecture compute machines, you must first create a single-architecture GCP installer-provisioned cluster with the multi-architecture installer binary. For more information on AWS installations, see Installing a cluster on GCP with customizations . You can also migrate your current cluster with single-architecture compute machines to a cluster with multi-architecture compute machines. For more information, see Migrating to a cluster with multi-architecture compute machines . After creating a multi-architecture cluster, you can add nodes with different architectures to the cluster. Note Secure booting is currently not supported on 64-bit ARM machines for GCP 3.4.1. Verifying cluster compatibility Before you can start adding compute nodes of different architectures to your cluster, you must verify that your cluster is multi-architecture compatible. Prerequisites You installed the OpenShift CLI ( oc ). Procedure Log in to the OpenShift CLI ( oc ). You can check that your cluster uses the architecture payload by running the following command: USD oc adm release info -o jsonpath="{ .metadata.metadata}" Verification If you see the following output, your cluster is using the multi-architecture payload: { "release.openshift.io/architecture": "multi", "url": "https://access.redhat.com/errata/<errata_version>" } You can then begin adding multi-arch compute nodes to your cluster. If you see the following output, your cluster is not using the multi-architecture payload: { "url": "https://access.redhat.com/errata/<errata_version>" } Important To migrate your cluster so the cluster supports multi-architecture compute machines, follow the procedure in Migrating to a cluster with multi-architecture compute machines . 3.4.2. Adding a multi-architecture compute machine set to your GCP cluster After creating a multi-architecture cluster, you can add nodes with different architectures. You can add multi-architecture compute machines to a multi-architecture cluster in the following ways: Adding 64-bit x86 compute machines to a cluster that uses 64-bit ARM control plane machines and already includes 64-bit ARM compute machines. In this case, 64-bit x86 is considered the secondary architecture. Adding 64-bit ARM compute machines to a cluster that uses 64-bit x86 control plane machines and already includes 64-bit x86 compute machines. In this case, 64-bit ARM is considered the secondary architecture. Note Before adding a secondary architecture node to your cluster, it is recommended to install the Multiarch Tuning Operator, and deploy a ClusterPodPlacementConfig custom resource. For more information, see "Managing workloads on multi-architecture clusters by using the Multiarch Tuning Operator". Prerequisites You installed the OpenShift CLI ( oc ). You used the installation program to create a 64-bit x86 or 64-bit ARM single-architecture GCP cluster with the multi-architecture installer binary. Procedure Log in to the OpenShift CLI ( oc ). 
Create a YAML file, and add the configuration to create a compute machine set to control the 64-bit ARM or 64-bit x86 compute nodes in your cluster. Example MachineSet object for a GCP 64-bit ARM or 64-bit x86 compute node apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-w-a namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a spec: metadata: labels: node-role.kubernetes.io/<role>: "" providerSpec: value: apiVersion: gcpprovider.openshift.io/v1beta1 canIPForward: false credentialsSecret: name: gcp-cloud-credentials deletionProtection: false disks: - autoDelete: true boot: true image: <path_to_image> 3 labels: null sizeGb: 128 type: pd-ssd gcpMetadata: 4 - key: <custom_metadata_key> value: <custom_metadata_value> kind: GCPMachineProviderSpec machineType: n1-standard-4 5 metadata: creationTimestamp: null networkInterfaces: - network: <infrastructure_id>-network subnetwork: <infrastructure_id>-worker-subnet projectID: <project_name> 6 region: us-central1 7 serviceAccounts: - email: <infrastructure_id>-w@<project_name>.iam.gserviceaccount.com scopes: - https://www.googleapis.com/auth/cloud-platform tags: - <infrastructure_id>-worker userDataSecret: name: worker-user-data zone: us-central1-a 1 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. You can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 Specify the role node label to add. 3 Specify the path to the image that is used in current compute machine sets. You need the project and image name for your path to image. To access the project and image name, run the following command: USD oc get configmap/coreos-bootimages \ -n openshift-machine-config-operator \ -o jsonpath='{.data.stream}' | jq \ -r '.architectures.aarch64.images.gcp' Example output "gcp": { "release": "415.92.202309142014-0", "project": "rhcos-cloud", "name": "rhcos-415-92-202309142014-0-gcp-aarch64" } Use the project and name parameters from the output to create the path to image field in your machine set. The path to the image should follow the following format: USD projects/<project>/global/images/<image_name> 4 Optional: Specify custom metadata in the form of a key:value pair. For example use cases, see the GCP documentation for setting custom metadata . 5 Specify a machine type that aligns with the CPU architecture of the chosen OS image. For more information, see "Tested instance types for GCP on 64-bit ARM infrastructures". 6 Specify the name of the GCP project that you use for your cluster. 7 Specify the region. For example, us-central1 . Ensure that the zone you select has machines with the required architecture. Create the compute machine set by running the following command: USD oc create -f <file_name> 1 1 Replace <file_name> with the name of the YAML file with compute machine set configuration. 
For example: gcp-arm64-machine-set-0.yaml , or gcp-amd64-machine-set-0.yaml . Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api The output must include the machine set that you created. Example output NAME DESIRED CURRENT READY AVAILABLE AGE <infrastructure_id>-gcp-machine-set-0 2 2 2 2 10m You can check if the nodes are ready and schedulable by running the following command: USD oc get nodes Additional resources Tested instance types for GCP on 64-bit ARM infrastructures Managing workloads on multi-architecture clusters by using the Multiarch Tuning Operator 3.5. Creating a cluster with multi-architecture compute machines on bare metal, IBM Power, or IBM Z To create a cluster with multi-architecture compute machines on bare metal ( x86_64 or aarch64 ), IBM Power(R) ( ppc64le ), or IBM Z(R) ( s390x ) you must have an existing single-architecture cluster on one of these platforms. Follow the installations procedures for your platform: Installing a user provisioned cluster on bare metal . You can then add 64-bit ARM compute machines to your OpenShift Container Platform cluster on bare metal. Installing a cluster on IBM Power(R) . You can then add x86_64 compute machines to your OpenShift Container Platform cluster on IBM Power(R). Installing a cluster on IBM Z(R) and IBM(R) LinuxONE . You can then add x86_64 compute machines to your OpenShift Container Platform cluster on IBM Z(R) and IBM(R) LinuxONE. Important The bare metal installer-provisioned infrastructure and the Bare Metal Operator do not support adding secondary architecture nodes during the initial cluster setup. You can add secondary architecture nodes manually only after the initial cluster setup. Before you can add additional compute nodes to your cluster, you must upgrade your cluster to one that uses the multi-architecture payload. For more information on migrating to the multi-architecture payload, see Migrating to a cluster with multi-architecture compute machines . The following procedures explain how to create a RHCOS compute machine using an ISO image or network PXE booting. This allows you to add additional nodes to your cluster and deploy a cluster with multi-architecture compute machines. Note Before adding a secondary architecture node to your cluster, it is recommended to install the Multiarch Tuning Operator, and deploy a ClusterPodPlacementConfig object. For more information, see Managing workloads on multi-architecture clusters by using the Multiarch Tuning Operator . 3.5.1. Verifying cluster compatibility Before you can start adding compute nodes of different architectures to your cluster, you must verify that your cluster is multi-architecture compatible. Prerequisites You installed the OpenShift CLI ( oc ). Procedure Log in to the OpenShift CLI ( oc ). You can check that your cluster uses the architecture payload by running the following command: USD oc adm release info -o jsonpath="{ .metadata.metadata}" Verification If you see the following output, your cluster is using the multi-architecture payload: { "release.openshift.io/architecture": "multi", "url": "https://access.redhat.com/errata/<errata_version>" } You can then begin adding multi-arch compute nodes to your cluster. 
If you see the following output, your cluster is not using the multi-architecture payload: { "url": "https://access.redhat.com/errata/<errata_version>" } Important To migrate your cluster so the cluster supports multi-architecture compute machines, follow the procedure in Migrating to a cluster with multi-architecture compute machines . 3.5.2. Creating RHCOS machines using an ISO image You can create more Red Hat Enterprise Linux CoreOS (RHCOS) compute machines for your bare metal cluster by using an ISO image to create the machines. Prerequisites Obtain the URL of the Ignition config file for the compute machines for your cluster. You uploaded this file to your HTTP server during installation. You must have the OpenShift CLI ( oc ) installed. Procedure Extract the Ignition config file from the cluster by running the following command: USD oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign Upload the worker.ign Ignition config file you exported from your cluster to your HTTP server. Note the URLs of these files. You can validate that the ignition files are available on the URLs. The following example gets the Ignition config files for the compute node: USD curl -k http://<HTTP_server>/worker.ign You can access the ISO image for booting your new machine by running to following command: RHCOS_VHD_ORIGIN_URL=USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.<architecture>.artifacts.metal.formats.iso.disk.location') Use the ISO file to install RHCOS on more compute machines. Use the same method that you used when you created machines before you installed the cluster: Burn the ISO image to a disk and boot it directly. Use ISO redirection with a LOM interface. Boot the RHCOS ISO image without specifying any options, or interrupting the live boot sequence. Wait for the installer to boot into a shell prompt in the RHCOS live environment. Note You can interrupt the RHCOS installation boot process to add kernel arguments. However, for this ISO procedure you must use the coreos-installer command as outlined in the following steps, instead of adding kernel arguments. Run the coreos-installer command and specify the options that meet your installation requirements. At a minimum, you must specify the URL that points to the Ignition config file for the node type, and the device that you are installing to: USD sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2 1 You must run the coreos-installer command by using sudo , because the core user does not have the required root privileges to perform the installation. 2 The --ignition-hash option is required when the Ignition config file is obtained through an HTTP URL to validate the authenticity of the Ignition config file on the cluster node. <digest> is the Ignition config file SHA512 digest obtained in a preceding step. Note If you want to provide your Ignition config files through an HTTPS server that uses TLS, you can add the internal certificate authority (CA) to the system trust store before running coreos-installer . The following example initializes a bootstrap node installation to the /dev/sda device. 
The Ignition config file for the bootstrap node is obtained from an HTTP web server with the IP address 192.168.1.2: USD sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b Monitor the progress of the RHCOS installation on the console of the machine. Important Ensure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise. Continue to create more compute machines for your cluster. 3.5.3. Creating RHCOS machines by PXE or iPXE booting You can create more Red Hat Enterprise Linux CoreOS (RHCOS) compute machines for your bare metal cluster by using PXE or iPXE booting. Prerequisites Obtain the URL of the Ignition config file for the compute machines for your cluster. You uploaded this file to your HTTP server during installation. Obtain the URLs of the RHCOS ISO image, compressed metal BIOS, kernel , and initramfs files that you uploaded to your HTTP server during cluster installation. You have access to the PXE booting infrastructure that you used to create the machines for your OpenShift Container Platform cluster during installation. The machines must boot from their local disks after RHCOS is installed on them. If you use UEFI, you have access to the grub.conf file that you modified during OpenShift Container Platform installation. Procedure Confirm that your PXE or iPXE installation for the RHCOS images is correct. For PXE: 1 Specify the location of the live kernel file that you uploaded to your HTTP server. 2 Specify locations of the RHCOS files that you uploaded to your HTTP server. The initrd parameter value is the location of the live initramfs file, the coreos.inst.ignition_url parameter value is the location of the worker Ignition config file, and the coreos.live.rootfs_url parameter value is the location of the live rootfs file. The coreos.inst.ignition_url and coreos.live.rootfs_url parameters only support HTTP and HTTPS. Note This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the APPEND line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? . For iPXE ( x86_64 + aarch64 ): 1 Specify the locations of the RHCOS files that you uploaded to your HTTP server. The kernel parameter value is the location of the kernel file, the initrd=main argument is needed for booting on UEFI systems, the coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the worker Ignition config file. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the location of the initramfs file that you uploaded to your HTTP server. Note This configuration does not enable serial console access on machines with a graphical console To configure a different console, add one or more console= arguments to the kernel line. 
For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? and "Enabling the serial console for PXE and ISO installation" in the "Advanced RHCOS installation configuration" section. Note To network boot the CoreOS kernel on aarch64 architecture, you need to use a version of iPXE build with the IMAGE_GZIP option enabled. See IMAGE_GZIP option in iPXE . For PXE (with UEFI and GRUB as second stage) on aarch64 : 1 Specify the locations of the RHCOS files that you uploaded to your HTTP/TFTP server. The kernel parameter value is the location of the kernel file on your TFTP server. The coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the worker Ignition config file on your HTTP Server. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the location of the initramfs file that you uploaded to your TFTP server. Use the PXE or iPXE infrastructure to create the required compute machines for your cluster. 3.5.4. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.30.3 master-1 Ready master 63m v1.30.3 master-2 Ready master 64m v1.30.3 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. 
Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.30.3 master-1 Ready master 73m v1.30.3 master-2 Ready master 74m v1.30.3 worker-0 Ready worker 11m v1.30.3 worker-1 Ready worker 11m v1.30.3 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information Certificate Signing Requests 3.6. Creating a cluster with multi-architecture compute machines on IBM Z and IBM LinuxONE with z/VM To create a cluster with multi-architecture compute machines on IBM Z(R) and IBM(R) LinuxONE ( s390x ) with z/VM, you must have an existing single-architecture x86_64 cluster. You can then add s390x compute machines to your OpenShift Container Platform cluster. Before you can add s390x nodes to your cluster, you must upgrade your cluster to one that uses the multi-architecture payload. For more information on migrating to the multi-architecture payload, see Migrating to a cluster with multi-architecture compute machines . The following procedures explain how to create a RHCOS compute machine using a z/VM instance. This will allow you to add s390x nodes to your cluster and deploy a cluster with multi-architecture compute machines. 
To create an IBM Z(R) or IBM(R) LinuxONE ( s390x ) cluster with multi-architecture compute machines on x86_64 , follow the instructions for Installing a cluster on IBM Z(R) and IBM(R) LinuxONE . You can then add x86_64 compute machines as described in Creating a cluster with multi-architecture compute machines on bare metal, IBM Power, or IBM Z . Note Before adding a secondary architecture node to your cluster, it is recommended to install the Multiarch Tuning Operator, and deploy a ClusterPodPlacementConfig object. For more information, see Managing workloads on multi-architecture clusters by using the Multiarch Tuning Operator . 3.6.1. Verifying cluster compatibility Before you can start adding compute nodes of different architectures to your cluster, you must verify that your cluster is multi-architecture compatible. Prerequisites You installed the OpenShift CLI ( oc ). Procedure Log in to the OpenShift CLI ( oc ). You can check that your cluster uses the architecture payload by running the following command: USD oc adm release info -o jsonpath="{ .metadata.metadata}" Verification If you see the following output, your cluster is using the multi-architecture payload: { "release.openshift.io/architecture": "multi", "url": "https://access.redhat.com/errata/<errata_version>" } You can then begin adding multi-arch compute nodes to your cluster. If you see the following output, your cluster is not using the multi-architecture payload: { "url": "https://access.redhat.com/errata/<errata_version>" } Important To migrate your cluster so the cluster supports multi-architecture compute machines, follow the procedure in Migrating to a cluster with multi-architecture compute machines . 3.6.2. Creating RHCOS machines on IBM Z with z/VM You can create more Red Hat Enterprise Linux CoreOS (RHCOS) compute machines running on IBM Z(R) with z/VM and attach them to your existing cluster. Prerequisites You have a domain name server (DNS) that can perform hostname and reverse lookup for the nodes. You have an HTTP or HTTPS server running on your provisioning machine that is accessible to the machines you create. Procedure Extract the Ignition config file from the cluster by running the following command: USD oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign Upload the worker.ign Ignition config file you exported from your cluster to your HTTP server. Note the URL of this file. You can validate that the Ignition file is available on the URL. The following example gets the Ignition config file for the compute node: USD curl -k http://<http_server>/worker.ign Download the RHEL live kernel , initramfs , and rootfs files by running the following commands: USD curl -LO USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' \ | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.kernel.location') USD curl -LO USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' \ | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.initramfs.location') USD curl -LO USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' \ | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.rootfs.location') Move the downloaded RHEL live kernel , initramfs , and rootfs files to an HTTP or HTTPS server that is accessible from the RHCOS guest you want to add. Create a parameter file for the guest. 
The following parameters are specific for the virtual machine: Optional: To specify a static IP address, add an ip= parameter with the following entries, with each separated by a colon: The IP address for the machine. An empty string. The gateway. The netmask. The machine host and domain name in the form hostname.domainname . Omit this value to let RHCOS decide. The network interface name. Omit this value to let RHCOS decide. The value none . For coreos.inst.ignition_url= , specify the URL to the worker.ign file. Only HTTP and HTTPS protocols are supported. For coreos.live.rootfs_url= , specify the matching rootfs artifact for the kernel and initramfs you are booting. Only HTTP and HTTPS protocols are supported. For installations on DASD-type disks, complete the following tasks: For coreos.inst.install_dev= , specify /dev/dasda . Use rd.dasd= to specify the DASD where RHCOS is to be installed. You can adjust further parameters if required. The following is an example parameter file, additional-worker-dasd.parm : cio_ignore=all,!condev rd.neednet=1 \ console=ttysclp0 \ coreos.inst.install_dev=/dev/dasda \ coreos.inst.ignition_url=http://<http_server>/worker.ign \ coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \ ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \ rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 \ rd.dasd=0.0.3490 \ zfcp.allow_lun_scan=0 Write all options in the parameter file as a single line and make sure that you have no newline characters. For installations on FCP-type disks, complete the following tasks: Use rd.zfcp=<adapter>,<wwpn>,<lun> to specify the FCP disk where RHCOS is to be installed. For multipathing, repeat this step for each additional path. Note When you install with multiple paths, you must enable multipathing directly after the installation, not at a later point in time, as this can cause problems. Set the install device as: coreos.inst.install_dev=/dev/sda . Note If additional LUNs are configured with NPIV, FCP requires zfcp.allow_lun_scan=0 . If you must enable zfcp.allow_lun_scan=1 because you use a CSI driver, for example, you must configure your NPIV so that each node cannot access the boot partition of another node. You can adjust further parameters if required. Important Additional postinstallation steps are required to fully enable multipathing. For more information, see "Enabling multipathing with kernel arguments on RHCOS" in Machine configuration . The following is an example parameter file, additional-worker-fcp.parm for a worker node with multipathing: cio_ignore=all,!condev rd.neednet=1 \ console=ttysclp0 \ coreos.inst.install_dev=/dev/sda \ coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \ coreos.inst.ignition_url=http://<http_server>/worker.ign \ ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \ rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 \ zfcp.allow_lun_scan=0 \ rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 \ rd.zfcp=0.0.19C7,0x50050763070bc5e3,0x4008400B00000000 \ rd.zfcp=0.0.1987,0x50050763071bc5e3,0x4008400B00000000 \ rd.zfcp=0.0.19C7,0x50050763071bc5e3,0x4008400B00000000 Write all options in the parameter file as a single line and make sure that you have no newline characters. Transfer the initramfs , kernel , parameter files, and RHCOS images to z/VM, for example, by using FTP. 
For details about how to transfer the files with FTP and boot from the virtual reader, see Booting the installation on IBM Z(R) to install RHEL in z/VM . Punch the files to the virtual reader of the z/VM guest virtual machine. See PUNCH in IBM(R) Documentation. Tip You can use the CP PUNCH command or, if you use Linux, the vmur command to transfer files between two z/VM guest virtual machines. Log in to CMS on the bootstrap machine. IPL the bootstrap machine from the reader by running the following command: See IPL in IBM(R) Documentation. 3.6.3. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.30.3 master-1 Ready master 63m v1.30.3 master-2 Ready master 64m v1.30.3 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. 
To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.30.3 master-1 Ready master 73m v1.30.3 master-2 Ready master 74m v1.30.3 worker-0 Ready worker 11m v1.30.3 worker-1 Ready worker 11m v1.30.3 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information Certificate Signing Requests 3.7. Creating a cluster with multi-architecture compute machines on IBM Z and IBM LinuxONE in an LPAR To create a cluster with multi-architecture compute machines on IBM Z(R) and IBM(R) LinuxONE ( s390x ) in an LPAR, you must have an existing single-architecture x86_64 cluster. You can then add s390x compute machines to your OpenShift Container Platform cluster. Before you can add s390x nodes to your cluster, you must upgrade your cluster to one that uses the multi-architecture payload. For more information on migrating to the multi-architecture payload, see Migrating to a cluster with multi-architecture compute machines . The following procedures explain how to create a RHCOS compute machine using an LPAR instance. This will allow you to add s390x nodes to your cluster and deploy a cluster with multi-architecture compute machines. Note To create an IBM Z(R) or IBM(R) LinuxONE ( s390x ) cluster with multi-architecture compute machines on x86_64 , follow the instructions for Installing a cluster on IBM Z(R) and IBM(R) LinuxONE . You can then add x86_64 compute machines as described in Creating a cluster with multi-architecture compute machines on bare metal, IBM Power, or IBM Z . 3.7.1. Verifying cluster compatibility Before you can start adding compute nodes of different architectures to your cluster, you must verify that your cluster is multi-architecture compatible. Prerequisites You installed the OpenShift CLI ( oc ). Procedure Log in to the OpenShift CLI ( oc ). 
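For example, if you authenticate with a token, a login sketch looks like the following; the token and cluster domain shown here are placeholders for your own values:
USD oc login --token=<token> --server=https://api.<cluster_domain>:6443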
You can check that your cluster uses the architecture payload by running the following command: USD oc adm release info -o jsonpath="{ .metadata.metadata}" Verification If you see the following output, your cluster is using the multi-architecture payload: { "release.openshift.io/architecture": "multi", "url": "https://access.redhat.com/errata/<errata_version>" } You can then begin adding multi-arch compute nodes to your cluster. If you see the following output, your cluster is not using the multi-architecture payload: { "url": "https://access.redhat.com/errata/<errata_version>" } Important To migrate your cluster so the cluster supports multi-architecture compute machines, follow the procedure in Migrating to a cluster with multi-architecture compute machines . 3.7.2. Creating RHCOS machines on IBM Z in an LPAR You can create more Red Hat Enterprise Linux CoreOS (RHCOS) compute machines running on IBM Z(R) in a logical partition (LPAR) and attach them to your existing cluster. Prerequisites You have a domain name server (DNS) that can perform hostname and reverse lookup for the nodes. You have an HTTP or HTTPS server running on your provisioning machine that is accessible to the machines you create. Procedure Extract the Ignition config file from the cluster by running the following command: USD oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign Upload the worker.ign Ignition config file you exported from your cluster to your HTTP server. Note the URL of this file. You can validate that the Ignition file is available on the URL. The following example gets the Ignition config file for the compute node: USD curl -k http://<http_server>/worker.ign Download the RHEL live kernel , initramfs , and rootfs files by running the following commands: USD curl -LO USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' \ | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.kernel.location') USD curl -LO USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' \ | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.initramfs.location') USD curl -LO USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' \ | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.rootfs.location') Move the downloaded RHEL live kernel , initramfs , and rootfs files to an HTTP or HTTPS server that is accessible from the RHCOS guest you want to add. Create a parameter file for the guest. The following parameters are specific for the virtual machine: Optional: To specify a static IP address, add an ip= parameter with the following entries, with each separated by a colon: The IP address for the machine. An empty string. The gateway. The netmask. The machine host and domain name in the form hostname.domainname . Omit this value to let RHCOS decide. The network interface name. Omit this value to let RHCOS decide. The value none . For coreos.inst.ignition_url= , specify the URL to the worker.ign file. Only HTTP and HTTPS protocols are supported. For coreos.live.rootfs_url= , specify the matching rootfs artifact for the kernel and initramfs you are booting. Only HTTP and HTTPS protocols are supported. For installations on DASD-type disks, complete the following tasks: For coreos.inst.install_dev= , specify /dev/dasda . Use rd.dasd= to specify the DASD where RHCOS is to be installed. 
You can adjust further parameters if required. The following is an example parameter file, additional-worker-dasd.parm : cio_ignore=all,!condev rd.neednet=1 \ console=ttysclp0 \ coreos.inst.install_dev=/dev/dasda \ coreos.inst.ignition_url=http://<http_server>/worker.ign \ coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \ ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \ rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 \ rd.dasd=0.0.3490 \ zfcp.allow_lun_scan=0 Write all options in the parameter file as a single line and make sure that you have no newline characters. For installations on FCP-type disks, complete the following tasks: Use rd.zfcp=<adapter>,<wwpn>,<lun> to specify the FCP disk where RHCOS is to be installed. For multipathing, repeat this step for each additional path. Note When you install with multiple paths, you must enable multipathing directly after the installation, not at a later point in time, as this can cause problems. Set the install device as: coreos.inst.install_dev=/dev/sda . Note If additional LUNs are configured with NPIV, FCP requires zfcp.allow_lun_scan=0 . If you must enable zfcp.allow_lun_scan=1 because you use a CSI driver, for example, you must configure your NPIV so that each node cannot access the boot partition of another node. You can adjust further parameters if required. Important Additional postinstallation steps are required to fully enable multipathing. For more information, see "Enabling multipathing with kernel arguments on RHCOS" in Machine configuration . The following is an example parameter file, additional-worker-fcp.parm for a worker node with multipathing: cio_ignore=all,!condev rd.neednet=1 \ console=ttysclp0 \ coreos.inst.install_dev=/dev/sda \ coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \ coreos.inst.ignition_url=http://<http_server>/worker.ign \ ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \ rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 \ zfcp.allow_lun_scan=0 \ rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 \ rd.zfcp=0.0.19C7,0x50050763070bc5e3,0x4008400B00000000 \ rd.zfcp=0.0.1987,0x50050763071bc5e3,0x4008400B00000000 \ rd.zfcp=0.0.19C7,0x50050763071bc5e3,0x4008400B00000000 Write all options in the parameter file as a single line and make sure that you have no newline characters. Transfer the initramfs, kernel, parameter files, and RHCOS images to the LPAR, for example with FTP. For details about how to transfer the files with FTP and boot, see Booting the installation on IBM Z(R) to install RHEL in an LPAR . Boot the machine 3.7.3. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.30.3 master-1 Ready master 63m v1.30.3 master-2 Ready master 64m v1.30.3 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. 
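When you list the certificate signing requests in the next step, you can narrow the output to the entries that still need action. The following is a simple, optional filter; it assumes the Pending condition text shown in the example outputs:
USD oc get csr | grep -w Pending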
Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. 
Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.30.3 master-1 Ready master 73m v1.30.3 master-2 Ready master 74m v1.30.3 worker-0 Ready worker 11m v1.30.3 worker-1 Ready worker 11m v1.30.3 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information Certificate Signing Requests 3.8. Creating a cluster with multi-architecture compute machines on IBM Z and IBM LinuxONE with RHEL KVM To create a cluster with multi-architecture compute machines on IBM Z(R) and IBM(R) LinuxONE ( s390x ) with RHEL KVM, you must have an existing single-architecture x86_64 cluster. You can then add s390x compute machines to your OpenShift Container Platform cluster. Before you can add s390x nodes to your cluster, you must upgrade your cluster to one that uses the multi-architecture payload. For more information on migrating to the multi-architecture payload, see Migrating to a cluster with multi-architecture compute machines . The following procedures explain how to create a RHCOS compute machine using a RHEL KVM instance. This will allow you to add s390x nodes to your cluster and deploy a cluster with multi-architecture compute machines. To create an IBM Z(R) or IBM(R) LinuxONE ( s390x ) cluster with multi-architecture compute machines on x86_64 , follow the instructions for Installing a cluster on IBM Z(R) and IBM(R) LinuxONE . You can then add x86_64 compute machines as described in Creating a cluster with multi-architecture compute machines on bare metal, IBM Power, or IBM Z . Note Before adding a secondary architecture node to your cluster, it is recommended to install the Multiarch Tuning Operator, and deploy a ClusterPodPlacementConfig object. For more information, see Managing workloads on multi-architecture clusters by using the Multiarch Tuning Operator . 3.8.1. Verifying cluster compatibility Before you can start adding compute nodes of different architectures to your cluster, you must verify that your cluster is multi-architecture compatible. Prerequisites You installed the OpenShift CLI ( oc ). Procedure Log in to the OpenShift CLI ( oc ). You can check that your cluster uses the architecture payload by running the following command: USD oc adm release info -o jsonpath="{ .metadata.metadata}" Verification If you see the following output, your cluster is using the multi-architecture payload: { "release.openshift.io/architecture": "multi", "url": "https://access.redhat.com/errata/<errata_version>" } You can then begin adding multi-arch compute nodes to your cluster. If you see the following output, your cluster is not using the multi-architecture payload: { "url": "https://access.redhat.com/errata/<errata_version>" } Important To migrate your cluster so the cluster supports multi-architecture compute machines, follow the procedure in Migrating to a cluster with multi-architecture compute machines . 3.8.2. Creating RHCOS machines using virt-install You can create more Red Hat Enterprise Linux CoreOS (RHCOS) compute machines for your cluster by using virt-install . Prerequisites You have at least one LPAR running on RHEL 8.7 or later with KVM, referred to as RHEL KVM host in this procedure. The KVM/QEMU hypervisor is installed on the RHEL KVM host. You have a domain name server (DNS) that can perform hostname and reverse lookup for the nodes. An HTTP or HTTPS server is set up. 
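Before you begin the procedure, you can optionally confirm that the virtualization tooling is available on the RHEL KVM host. The following is a minimal sketch that assumes the standard libvirt client and virt-install packages:
USD virsh version
USD virt-install --version
If either command is not found, install the missing virtualization packages on the RHEL KVM host before you continue.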
Procedure Extract the Ignition config file from the cluster by running the following command: USD oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign Upload the worker.ign Ignition config file you exported from your cluster to your HTTP server. Note the URL of this file. You can validate that the Ignition file is available on the URL. The following example gets the Ignition config file for the compute node: USD curl -k http://<HTTP_server>/worker.ign Download the RHEL live kernel , initramfs , and rootfs files by running the following commands: USD curl -LO USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' \ | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.kernel.location') USD curl -LO USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' \ | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.initramfs.location') USD curl -LO USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' \ | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.rootfs.location') Move the downloaded RHEL live kernel , initramfs , and rootfs files to an HTTP or HTTPS server before you launch virt-install . Create the new KVM guest nodes using the RHEL kernel , initramfs , and Ignition files; the new disk image; and adjusted parm line arguments. USD virt-install \ --connect qemu:///system \ --name <vm_name> \ --autostart \ --os-variant rhel9.4 \ 1 --cpu host \ --vcpus <vcpus> \ --memory <memory_mb> \ --disk <vm_name>.qcow2,size=<image_size> \ --network network=<virt_network_parm> \ --location <media_location>,kernel=<rhcos_kernel>,initrd=<rhcos_initrd> \ 2 --extra-args "rd.neednet=1" \ --extra-args "coreos.inst.install_dev=/dev/vda" \ --extra-args "coreos.inst.ignition_url=http://<http_server>/worker.ign " \ 3 --extra-args "coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img" \ 4 --extra-args "ip=<ip>::<gateway>:<netmask>:<hostname>::none" \ 5 --extra-args "nameserver=<dns>" \ --extra-args "console=ttysclp0" \ --noautoconsole \ --wait 1 For os-variant , specify the RHEL version for the RHCOS compute machine. rhel9.4 is the recommended version. To query the supported RHEL version of your operating system, run the following command: USD osinfo-query os -f short-id Note The os-variant is case sensitive. 2 For --location , specify the location of the kernel/initrd on the HTTP or HTTPS server. 3 Specify the location of the worker.ign config file. Only HTTP and HTTPS protocols are supported. 4 Specify the location of the rootfs artifact for the kernel and initramfs you are booting. Only HTTP and HTTPS protocols are supported 5 Optional: For hostname , specify the fully qualified hostname of the client machine. Note If you are using HAProxy as a load balancer, update your HAProxy rules for ingress-router-443 and ingress-router-80 in the /etc/haproxy/haproxy.cfg configuration file. Continue to create more compute machines for your cluster. 3.8.3. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. 
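While the new machines boot and request certificates, you can optionally keep a live view of incoming requests in a separate terminal. The following sketch uses the watch flag of the oc get command:
USD oc get csr -w
Press Ctrl+C to stop watching when the expected requests appear, and then continue with the procedure.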
Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.30.3 master-1 Ready master 63m v1.30.3 master-2 Ready master 64m v1.30.3 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. 
To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.30.3 master-1 Ready master 73m v1.30.3 master-2 Ready master 74m v1.30.3 worker-0 Ready worker 11m v1.30.3 worker-1 Ready worker 11m v1.30.3 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information Certificate Signing Requests 3.9. Creating a cluster with multi-architecture compute machines on IBM Power To create a cluster with multi-architecture compute machines on IBM Power(R) ( ppc64le ), you must have an existing single-architecture ( x86_64 ) cluster. You can then add ppc64le compute machines to your OpenShift Container Platform cluster. Important Before you can add ppc64le nodes to your cluster, you must upgrade your cluster to one that uses the multi-architecture payload. For more information on migrating to the multi-architecture payload, see Migrating to a cluster with multi-architecture compute machines . The following procedures explain how to create a RHCOS compute machine using an ISO image or network PXE booting. This will allow you to add ppc64le nodes to your cluster and deploy a cluster with multi-architecture compute machines. To create an IBM Power(R) ( ppc64le ) cluster with multi-architecture compute machines on x86_64 , follow the instructions for Installing a cluster on IBM Power(R) . You can then add x86_64 compute machines as described in Creating a cluster with multi-architecture compute machines on bare metal, IBM Power, or IBM Z . Note Before adding a secondary architecture node to your cluster, it is recommended to install the Multiarch Tuning Operator, and deploy a ClusterPodPlacementConfig object. For more information, see Managing workloads on multi-architecture clusters by using the Multiarch Tuning Operator . 3.9.1. Verifying cluster compatibility Before you can start adding compute nodes of different architectures to your cluster, you must verify that your cluster is multi-architecture compatible. Prerequisites You installed the OpenShift CLI ( oc ). Note When using multiple architectures, hosts for OpenShift Container Platform nodes must share the same storage layer. If they do not have the same storage layer, use a storage provider such as nfs-provisioner . Note You should limit the number of network hops between the compute and control plane as much as possible. Procedure Log in to the OpenShift CLI ( oc ). You can check that your cluster uses the architecture payload by running the following command: USD oc adm release info -o jsonpath="{ .metadata.metadata}" Verification If you see the following output, your cluster is using the multi-architecture payload: { "release.openshift.io/architecture": "multi", "url": "https://access.redhat.com/errata/<errata_version>" } You can then begin adding multi-arch compute nodes to your cluster. 
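If the payload is multi-architecture, you can also list the architectures for which the cluster provides RHCOS boot image metadata. The following is an optional sketch that reads the same coreos-bootimages config map that the later procedures reference:
USD oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures | keys[]'
The output lists the available architecture names, for example ppc64le and x86_64.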
If you see the following output, your cluster is not using the multi-architecture payload: { "url": "https://access.redhat.com/errata/<errata_version>" } Important To migrate your cluster so the cluster supports multi-architecture compute machines, follow the procedure in Migrating to a cluster with multi-architecture compute machines . 3.9.2. Creating RHCOS machines using an ISO image You can create more Red Hat Enterprise Linux CoreOS (RHCOS) compute machines for your cluster by using an ISO image to create the machines. Prerequisites Obtain the URL of the Ignition config file for the compute machines for your cluster. You uploaded this file to your HTTP server during installation. You must have the OpenShift CLI ( oc ) installed. Procedure Extract the Ignition config file from the cluster by running the following command: USD oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign Upload the worker.ign Ignition config file you exported from your cluster to your HTTP server. Note the URLs of these files. You can validate that the Ignition files are available on the URLs. The following example gets the Ignition config files for the compute node: USD curl -k http://<HTTP_server>/worker.ign You can access the ISO image for booting your new machine by running the following command: RHCOS_VHD_ORIGIN_URL=USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.<architecture>.artifacts.metal.formats.iso.disk.location') Use the ISO file to install RHCOS on more compute machines. Use the same method that you used when you created machines before you installed the cluster: Burn the ISO image to a disk and boot it directly. Use ISO redirection with a LOM interface. Boot the RHCOS ISO image without specifying any options or interrupting the live boot sequence. Wait for the installer to boot into a shell prompt in the RHCOS live environment. Note You can interrupt the RHCOS installation boot process to add kernel arguments. However, for this ISO procedure you must use the coreos-installer command as outlined in the following steps, instead of adding kernel arguments. Run the coreos-installer command and specify the options that meet your installation requirements. At a minimum, you must specify the URL that points to the Ignition config file for the node type, and the device that you are installing to: USD sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2 1 You must run the coreos-installer command by using sudo , because the core user does not have the required root privileges to perform the installation. 2 The --ignition-hash option is required when the Ignition config file is obtained through an HTTP URL to validate the authenticity of the Ignition config file on the cluster node. <digest> is the Ignition config file SHA512 digest obtained in a preceding step. Note If you want to provide your Ignition config files through an HTTPS server that uses TLS, you can add the internal certificate authority (CA) to the system trust store before running coreos-installer . The following example initializes a bootstrap node installation to the /dev/sda device.
The Ignition config file for the bootstrap node is obtained from an HTTP web server with the IP address 192.168.1.2: USD sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b Monitor the progress of the RHCOS installation on the console of the machine. Important Ensure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise. Continue to create more compute machines for your cluster. 3.9.3. Creating RHCOS machines by PXE or iPXE booting You can create more Red Hat Enterprise Linux CoreOS (RHCOS) compute machines for your bare metal cluster by using PXE or iPXE booting. Prerequisites Obtain the URL of the Ignition config file for the compute machines for your cluster. You uploaded this file to your HTTP server during installation. Obtain the URLs of the RHCOS ISO image, compressed metal BIOS, kernel , and initramfs files that you uploaded to your HTTP server during cluster installation. You have access to the PXE booting infrastructure that you used to create the machines for your OpenShift Container Platform cluster during installation. The machines must boot from their local disks after RHCOS is installed on them. If you use UEFI, you have access to the grub.conf file that you modified during OpenShift Container Platform installation. Procedure Confirm that your PXE or iPXE installation for the RHCOS images is correct. For PXE: 1 Specify the location of the live kernel file that you uploaded to your HTTP server. 2 Specify locations of the RHCOS files that you uploaded to your HTTP server. The initrd parameter value is the location of the live initramfs file, the coreos.inst.ignition_url parameter value is the location of the worker Ignition config file, and the coreos.live.rootfs_url parameter value is the location of the live rootfs file. The coreos.inst.ignition_url and coreos.live.rootfs_url parameters only support HTTP and HTTPS. Note This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the APPEND line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? . For iPXE ( x86_64 + ppc64le ): 1 Specify the locations of the RHCOS files that you uploaded to your HTTP server. The kernel parameter value is the location of the kernel file, the initrd=main argument is needed for booting on UEFI systems, the coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the worker Ignition config file. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the location of the initramfs file that you uploaded to your HTTP server. Note This configuration does not enable serial console access on machines with a graphical console To configure a different console, add one or more console= arguments to the kernel line. 
For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? and "Enabling the serial console for PXE and ISO installation" in the "Advanced RHCOS installation configuration" section. Note To network boot the CoreOS kernel on ppc64le architecture, you need to use a version of iPXE build with the IMAGE_GZIP option enabled. See IMAGE_GZIP option in iPXE . For PXE (with UEFI and GRUB as second stage) on ppc64le : 1 Specify the locations of the RHCOS files that you uploaded to your HTTP/TFTP server. The kernel parameter value is the location of the kernel file on your TFTP server. The coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the worker Ignition config file on your HTTP Server. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the location of the initramfs file that you uploaded to your TFTP server. Use the PXE or iPXE infrastructure to create the required compute machines for your cluster. 3.9.4. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.30.3 master-1 Ready master 63m v1.30.3 master-2 Ready master 64m v1.30.3 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. 
Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. 
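The automatic approval method described in the note above is left to you on user-provisioned infrastructure. The following is a minimal sketch of a polling approver, not a mechanism provided by the product: it assumes that oc is on the PATH and is logged in with cluster-admin credentials, and it approves CSRs based only on the requestor name, so a production version should also confirm the identity of the node as the note requires.

#!/usr/bin/env bash
# Poll for pending CSRs and approve client (node-bootstrapper) and kubelet serving requests.
while true; do
  oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}} {{.spec.username}}{{"\n"}}{{end}}{{end}}' |
  while read -r name requestor; do
    case "${requestor}" in
      # Client CSRs are submitted by the node-bootstrapper service account;
      # serving CSRs are submitted by the node itself (system:node:<node_name>).
      system:serviceaccount:openshift-machine-config-operator:node-bootstrapper|system:node:*)
        oc adm certificate approve "${name}"
        ;;
    esac
  done
  sleep 60
done

You might run a loop like this as a systemd service or scheduled job on a bastion host until a more complete approver is in place.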
Verify that the machines have the Ready status by running the following command: $ oc get nodes -o wide Example output NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME worker-0-ppc64le Ready worker 42d v1.30.3 192.168.200.21 <none> Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.ppc64le cri-o://1.30.3-3.rhaos4.15.gitb36169e.el9 worker-1-ppc64le Ready worker 42d v1.30.3 192.168.200.20 <none> Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.ppc64le cri-o://1.30.3-3.rhaos4.15.gitb36169e.el9 master-0-x86 Ready control-plane,master 75d v1.30.3 10.248.0.38 10.248.0.38 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.30.3-3.rhaos4.15.gitb36169e.el9 master-1-x86 Ready control-plane,master 75d v1.30.3 10.248.0.39 10.248.0.39 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.30.3-3.rhaos4.15.gitb36169e.el9 master-2-x86 Ready control-plane,master 75d v1.30.3 10.248.0.40 10.248.0.40 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.30.3-3.rhaos4.15.gitb36169e.el9 worker-0-x86 Ready worker 75d v1.30.3 10.248.0.43 10.248.0.43 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.30.3-3.rhaos4.15.gitb36169e.el9 worker-1-x86 Ready worker 75d v1.30.3 10.248.0.44 10.248.0.44 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.30.3-3.rhaos4.15.gitb36169e.el9 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information Certificate Signing Requests 3.10. Managing a cluster with multi-architecture compute machines Managing a cluster that has nodes with multiple architectures requires you to consider node architecture as you monitor the cluster and manage your workloads. You must take these additional considerations into account when you configure cluster resource requirements and behavior, or when you schedule workloads in a multi-architecture cluster. 3.10.1. Scheduling workloads on clusters with multi-architecture compute machines When you deploy workloads on a cluster with compute nodes that use different architectures, you must align pod architecture with the architecture of the underlying node. Your workload might also require additional configuration for particular resources, depending on the underlying node architecture. You can use the Multiarch Tuning Operator to enable architecture-aware scheduling of workloads on clusters with multi-architecture compute machines. The Multiarch Tuning Operator implements additional scheduler predicates in the pod specifications based on the architectures that the pods can support at creation time. 3.10.1.1. Sample multi-architecture node workload deployments Scheduling a workload to an appropriate node based on architecture works in the same way as scheduling based on any other node characteristic. Consider the following options when determining how to schedule your workloads. Using nodeAffinity to schedule nodes with specific architectures To allow a workload to be scheduled on only a set of nodes with architectures supported by its images, set the spec.affinity.nodeAffinity field in your pod's template specification. apiVersion: apps/v1 kind: Deployment metadata: # ... spec: # ... template: # ...
spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/arch operator: In values: 1 - amd64 - arm64 1 Specify the supported architectures. Valid values include amd64 , arm64 , or both values. Tainting each node for a specific architecture You can taint a node to avoid the node scheduling workloads that are incompatible with its architecture. When your cluster uses a MachineSet object, you can add parameters to the .spec.template.spec.taints field to avoid workloads being scheduled on nodes with non-supported architectures. Before you add a taint to a node, you must scale down the MachineSet object or remove existing available machines. For more information, see Modifying a compute machine set . apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: # ... spec: # ... template: # ... spec: # ... taints: - effect: NoSchedule key: multiarch.openshift.io/arch value: arm64 You can also set a taint on a specific node by running the following command: USD oc adm taint nodes <node-name> multiarch.openshift.io/arch=arm64:NoSchedule Creating a default toleration in a namespace When a node or machine set has a taint, only workloads that tolerate that taint can be scheduled. You can annotate a namespace so all of the workloads get the same default toleration by running the following command: USD oc annotate namespace my-namespace \ 'scheduler.alpha.kubernetes.io/defaultTolerations'='[{"operator": "Exists", "effect": "NoSchedule", "key": "multiarch.openshift.io/arch"}]' Tolerating architecture taints in workloads When a node or machine set has a taint, only workloads that tolerate that taint can be scheduled. You can configure your workload with a toleration so that it is scheduled on nodes with specific architecture taints. apiVersion: apps/v1 kind: Deployment metadata: # ... spec: # ... template: # ... spec: tolerations: - key: "multiarch.openshift.io/arch" value: "arm64" operator: "Equal" effect: "NoSchedule" This example deployment can be scheduled on nodes and machine sets that have the multiarch.openshift.io/arch=arm64 taint specified. Using node affinity with taints and tolerations When a scheduler computes the set of nodes to schedule a pod, tolerations can broaden the set while node affinity restricts the set. If you set a taint on nodes that have a specific architecture, you must also add a toleration to workloads that you want to be scheduled there. apiVersion: apps/v1 kind: Deployment metadata: # ... spec: # ... template: # ... spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/arch operator: In values: - amd64 - arm64 tolerations: - key: "multiarch.openshift.io/arch" value: "arm64" operator: "Equal" effect: "NoSchedule" Additional resources Managing workloads on multi-architecture clusters by using the Multiarch Tuning Operator Controlling pod placement using node taints Controlling pod placement on nodes using node affinity Controlling pod placement using the scheduler Modifying a compute machine set 3.10.2. Enabling 64k pages on the Red Hat Enterprise Linux CoreOS (RHCOS) kernel You can enable the 64k memory page in the Red Hat Enterprise Linux CoreOS (RHCOS) kernel on the 64-bit ARM compute machines in your cluster. The 64k page size kernel specification can be used for large GPU or high memory workloads. 
This is done using the Machine Config Operator (MCO), which uses a machine config pool to update the kernel. To enable 64k page sizes, you must dedicate a machine config pool for the ARM64 nodes on which you want to enable the 64k-pages kernel. Important Using 64k pages is exclusive to 64-bit ARM architecture compute nodes or clusters installed on 64-bit ARM machines. If you configure the 64k pages kernel on a machine config pool using 64-bit x86 machines, the machine config pool and MCO will degrade. Prerequisites You installed the OpenShift CLI ( oc ). You created a cluster with compute nodes of different architectures on one of the supported platforms. Procedure Label the nodes where you want to run the 64k page size kernel: $ oc label node <node_name> <label> Example command $ oc label node worker-arm64-01 node-role.kubernetes.io/worker-64k-pages= Create a machine config pool that contains the worker role that uses the ARM64 architecture and the worker-64k-pages role: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-64k-pages spec: machineConfigSelector: matchExpressions: - key: machineconfiguration.openshift.io/role operator: In values: - worker - worker-64k-pages nodeSelector: matchLabels: node-role.kubernetes.io/worker-64k-pages: "" kubernetes.io/arch: arm64 Create a machine config on your compute node to enable 64k-pages with the 64k-pages parameter. $ oc create -f <filename>.yaml Example MachineConfig apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: "worker-64k-pages" 1 name: 99-worker-64kpages spec: kernelType: 64k-pages 2 1 Specify the value of the machineconfiguration.openshift.io/role label in the custom machine config pool. The example MachineConfig uses the worker-64k-pages label to enable 64k pages in the worker-64k-pages pool. 2 Specify your desired kernel type. Valid values are 64k-pages and default . Note The 64k-pages type is supported on only 64-bit ARM architecture based compute nodes. The realtime type is supported on only 64-bit x86 architecture based compute nodes. Verification To view your new worker-64k-pages machine config pool, run the following command: $ oc get mcp Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-9d55ac9a91127c36314e1efe7d77fbf8 True False False 3 3 3 0 361d worker rendered-worker-e7b61751c4a5b7ff995d64b967c421ff True False False 7 7 7 0 361d worker-64k-pages rendered-worker-64k-pages-e7b61751c4a5b7ff995d64b967c421ff True False False 2 2 2 0 35m 3.10.3. Importing manifest lists in image streams on your multi-architecture compute machines On an OpenShift Container Platform 4.17 cluster with multi-architecture compute machines, the image streams in the cluster do not import manifest lists automatically. You must manually change the default importMode option to the PreserveOriginal option in order to import the manifest list. Prerequisites You installed the OpenShift Container Platform CLI ( oc ). Procedure The following example command shows how to patch the ImageStream cli-artifacts so that the cli-artifacts:latest image stream tag is imported as a manifest list. $ oc patch is/cli-artifacts -n openshift -p '{"spec":{"tags":[{"name":"latest","importPolicy":{"importMode":"PreserveOriginal"}}]}}' Verification You can check that the manifest lists imported properly by inspecting the image stream tag.
The following command lists the individual architecture manifests for a particular tag. $ oc get istag cli-artifacts:latest -n openshift -oyaml If the dockerImageManifests object is present, then the manifest list import was successful. Example output of the dockerImageManifests object dockerImageManifests: - architecture: amd64 digest: sha256:16d4c96c52923a9968fbfa69425ec703aff711f1db822e4e9788bf5d2bee5d77 manifestSize: 1252 mediaType: application/vnd.docker.distribution.manifest.v2+json os: linux - architecture: arm64 digest: sha256:6ec8ad0d897bcdf727531f7d0b716931728999492709d19d8b09f0d90d57f626 manifestSize: 1252 mediaType: application/vnd.docker.distribution.manifest.v2+json os: linux - architecture: ppc64le digest: sha256:65949e3a80349cdc42acd8c5b34cde6ebc3241eae8daaeea458498fedb359a6a manifestSize: 1252 mediaType: application/vnd.docker.distribution.manifest.v2+json os: linux - architecture: s390x digest: sha256:75f4fa21224b5d5d511bea8f92dfa8e1c00231e5c81ab95e83c3013d245d1719 manifestSize: 1252 mediaType: application/vnd.docker.distribution.manifest.v2+json os: linux 3.11. Managing workloads on multi-architecture clusters by using the Multiarch Tuning Operator The Multiarch Tuning Operator optimizes workload management within multi-architecture clusters and in single-architecture clusters transitioning to multi-architecture environments. Architecture-aware workload scheduling allows the scheduler to place pods onto nodes that match the architecture of the pod images. By default, the scheduler does not consider the architecture of a pod's container images when determining the placement of new pods onto nodes. To enable architecture-aware workload scheduling, you must create the ClusterPodPlacementConfig object. When you create the ClusterPodPlacementConfig object, the Multiarch Tuning Operator deploys the necessary operands to support architecture-aware workload scheduling. You can also use the nodeAffinityScoring plugin in the ClusterPodPlacementConfig object to set cluster-wide scores for node architectures. If you enable the nodeAffinityScoring plugin, the scheduler first filters nodes with compatible architectures and then places the pod on the node with the highest score. When a pod is created, the operands perform the following actions: Add the multiarch.openshift.io/scheduling-gate scheduling gate that prevents the scheduling of the pod. Compute a scheduling predicate that includes the supported architecture values for the kubernetes.io/arch label. Integrate the scheduling predicate as a nodeAffinity requirement in the pod specification. Remove the scheduling gate from the pod. Important Note the following operand behaviors: If the nodeSelector field is already configured with the kubernetes.io/arch label for a workload, the operand does not update the nodeAffinity field for that workload. If the nodeSelector field is not configured with the kubernetes.io/arch label for a workload, the operand updates the nodeAffinity field for that workload. However, in that nodeAffinity field, the operand updates only the node selector terms that are not configured with the kubernetes.io/arch label. If the nodeName field is already set, the Multiarch Tuning Operator does not process the pod. If the pod is owned by a DaemonSet, the operand does not update the nodeAffinity field. If both nodeSelector or nodeAffinity and preferredAffinity fields are set for the kubernetes.io/arch label, the operand does not update the nodeAffinity field.
If only the nodeSelector or nodeAffinity field is set for the kubernetes.io/arch label and the nodeAffinityScoring plugin is disabled, the operand does not update the nodeAffinity field. If the nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution field already contains terms that score nodes based on the kubernetes.io/arch label, the operand ignores the configuration in the nodeAffinityScoring plugin. 3.11.1. Installing the Multiarch Tuning Operator by using the CLI You can install the Multiarch Tuning Operator by using the OpenShift CLI ( oc ). Prerequisites You have installed oc . You have logged in to oc as a user with cluster-admin privileges. Procedure Create a new project named openshift-multiarch-tuning-operator by running the following command: $ oc create ns openshift-multiarch-tuning-operator Create an OperatorGroup object: Create a YAML file with the configuration for creating an OperatorGroup object. Example YAML configuration for creating an OperatorGroup object apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-multiarch-tuning-operator namespace: openshift-multiarch-tuning-operator spec: {} Create the OperatorGroup object by running the following command: $ oc create -f <file_name> 1 1 Replace <file_name> with the name of the YAML file that contains the OperatorGroup object configuration. Create a Subscription object: Create a YAML file with the configuration for creating a Subscription object. Example YAML configuration for creating a Subscription object apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-multiarch-tuning-operator namespace: openshift-multiarch-tuning-operator spec: channel: stable name: multiarch-tuning-operator source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Automatic startingCSV: multiarch-tuning-operator.<version> Create the Subscription object by running the following command: $ oc create -f <file_name> 1 1 Replace <file_name> with the name of the YAML file that contains the Subscription object configuration. Note For more details about configuring the Subscription object and OperatorGroup object, see "Installing from OperatorHub using the CLI". Verification To verify that the Multiarch Tuning Operator is installed, run the following command: $ oc get csv -n openshift-multiarch-tuning-operator Example output NAME DISPLAY VERSION REPLACES PHASE multiarch-tuning-operator.<version> Multiarch Tuning Operator <version> multiarch-tuning-operator.1.0.0 Succeeded The installation is successful if the Operator is in the Succeeded phase. Optional: To verify that the OperatorGroup object is created, run the following command: $ oc get operatorgroup -n openshift-multiarch-tuning-operator Example output NAME AGE openshift-multiarch-tuning-operator-q8zbb 133m Optional: To verify that the Subscription object is created, run the following command: $ oc get subscription -n openshift-multiarch-tuning-operator Example output NAME PACKAGE SOURCE CHANNEL multiarch-tuning-operator multiarch-tuning-operator redhat-operators stable Additional resources Installing from OperatorHub using the CLI 3.11.2. Installing the Multiarch Tuning Operator by using the web console You can install the Multiarch Tuning Operator by using the OpenShift Container Platform web console. Prerequisites You have access to the cluster with cluster-admin privileges. You have access to the OpenShift Container Platform web console.
Procedure Log in to the OpenShift Container Platform web console. Navigate to Operators OperatorHub . Enter Multiarch Tuning Operator in the search field. Click Multiarch Tuning Operator . Select the Multiarch Tuning Operator version from the Version list. Click Install . Set the following options on the Operator Installation page: Set Update Channel to stable . Set Installation Mode to All namespaces on the cluster . Set Installed Namespace to Operator recommended Namespace or Select a Namespace . The recommended Operator namespace is openshift-multiarch-tuning-operator . If the openshift-multiarch-tuning-operator namespace does not exist, it is created during the operator installation. If you select Select a namespace , you must select a namespace for the Operator from the Select Project list. Set Update approval to Automatic or Manual . If you select Automatic updates, Operator Lifecycle Manager (OLM) automatically updates the running instance of the Multiarch Tuning Operator without any intervention. If you select Manual updates, OLM creates an update request. As a cluster administrator, you must manually approve the update request to update the Multiarch Tuning Operator to a newer version. Optional: Select the Enable Operator recommended cluster monitoring on this Namespace checkbox. Click Install . Verification Navigate to Operators Installed Operators . Verify that the Multiarch Tuning Operator is listed with the Status field as Succeeded in the openshift-multiarch-tuning-operator namespace. 3.11.3. Multiarch Tuning Operator pod labels and architecture support overview After installing the Multiarch Tuning Operator, you can verify the multi-architecture support for workloads in your cluster. You can identify and manage pods based on their architecture compatibility by using the pod labels. These labels are automatically set on the newly created pods to provide insights into their architecture support. The following table describes the labels that the Multiarch Tuning Operator adds when you create a pod: Table 3.2. Pod labels that the Multiarch Tuning Operator adds when you create a pod Label Description multiarch.openshift.io/multi-arch: "" The pod supports multiple architectures. multiarch.openshift.io/single-arch: "" The pod supports only a single architecture. multiarch.openshift.io/arm64: "" The pod supports the arm64 architecture. multiarch.openshift.io/amd64: "" The pod supports the amd64 architecture. multiarch.openshift.io/ppc64le: "" The pod supports the ppc64le architecture. multiarch.openshift.io/s390x: "" The pod supports the s390x architecture. multiarch.openshift.io/node-affinity: set The Operator has set the node affinity requirement for the architecture. multiarch.openshift.io/node-affinity: not-set The Operator did not set the node affinity requirement. For example, when the pod already has a node affinity for the architecture, the Multiarch Tuning Operator adds this label to the pod. multiarch.openshift.io/scheduling-gate: gated The pod is gated. multiarch.openshift.io/scheduling-gate: removed The pod gate has been removed. multiarch.openshift.io/inspection-error: "" An error has occurred while building the node affinity requirements. multiarch.openshift.io/preferred-node-affinity: set The Operator has set the architecture preferences in the pod. multiarch.openshift.io/preferred-node-affinity: not-set The Operator did not set the architecture preferences in the pod because the user had already set them in the preferredDuringSchedulingIgnoredDuringExecution node affinity.
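For example, once the Operator has processed some workloads, you can query pods by these labels. A brief sketch, assuming the label values shown in the table above and that the operand has already processed the pods:

$ oc get pods -A -l multiarch.openshift.io/multi-arch=
$ oc get pods -A -l multiarch.openshift.io/single-arch=,multiarch.openshift.io/arm64=

The first command lists pods whose images support multiple architectures; the second lists pods whose images support only the arm64 architecture.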
3.11.4. Creating the ClusterPodPlacementConfig object After installing the Multiarch Tuning Operator, you must create the ClusterPodPlacementConfig object. When you create this object, the Multiarch Tuning Operator deploys an operand that enables architecture-aware workload scheduling. Note You can create only one instance of the ClusterPodPlacementConfig object. Example ClusterPodPlacementConfig object configuration apiVersion: multiarch.openshift.io/v1beta1 kind: ClusterPodPlacementConfig metadata: name: cluster 1 spec: logVerbosityLevel: Normal 2 namespaceSelector: 3 matchExpressions: - key: multiarch.openshift.io/exclude-pod-placement operator: DoesNotExist plugins: 4 nodeAffinityScoring: 5 enabled: true 6 platforms: 7 - architecture: amd64 8 weight: 100 9 - architecture: arm64 weight: 50 1 You must set this field value to cluster . 2 Optional: You can set the field value to Normal , Debug , Trace , or TraceAll . The value is set to Normal by default. 3 Optional: You can configure the namespaceSelector to select the namespaces in which the Multiarch Tuning Operator's pod placement operand must process the nodeAffinity of the pods. All namespaces are considered by default. 4 Optional: Includes a list of plugins for architecture-aware workload scheduling. 5 Optional: You can use this plugin to set architecture preferences for pod placement. When enabled, the scheduler first filters out nodes that do not meet the pod's requirements. Then, it prioritizes the remaining nodes based on the architecture scores defined in the nodeAffinityScoring.platforms field. 6 Optional: Set this field to true to enable the nodeAffinityScoring plugin. The default value is false . 7 Optional: Defines a list of architectures and their corresponding scores. 8 Specify the node architecture to score. The scheduler prioritizes nodes for pod placement based on the architecture scores that you set and the scheduling requirements defined in the pod specification. Accepted values are arm64 , amd64 , ppc64le , or s390x . 9 Assign a score to the architecture. The value for this field must be configured in the range of 1 (lowest priority) to 100 (highest priority). The scheduler uses this score to prioritize nodes for pod placement, favoring nodes with architectures that have higher scores. In this example, the operator field value is set to DoesNotExist . Therefore, if the key field value ( multiarch.openshift.io/exclude-pod-placement ) is set as a label in a namespace, the operand does not process the nodeAffinity of the pods in that namespace. Instead, the operand processes the nodeAffinity of the pods in namespaces that do not contain the label. If you want the operand to process the nodeAffinity of the pods only in specific namespaces, you can configure the namespaceSelector as follows: namespaceSelector: matchExpressions: - key: multiarch.openshift.io/include-pod-placement operator: Exists In this example, the operator field value is set to Exists . Therefore, the operand processes the nodeAffinity of the pods only in namespaces that contain the multiarch.openshift.io/include-pod-placement label. Important This Operator excludes pods in namespaces starting with kube- . It also excludes pods that are expected to be scheduled on control plane nodes. 3.11.4.1. Creating the ClusterPodPlacementConfig object by using the CLI To deploy the pod placement operand that enables architecture-aware workload scheduling, you can create the ClusterPodPlacementConfig object by using the OpenShift CLI ( oc ). 
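As a quick illustration of the default namespaceSelector described above, you can opt a namespace out of pod placement processing by adding the exclude label to it. This is a sketch with a hypothetical namespace name, not a required step:

$ oc label namespace my-namespace multiarch.openshift.io/exclude-pod-placement=

Removing the label again ( $ oc label namespace my-namespace multiarch.openshift.io/exclude-pod-placement- ) makes the operand resume processing the nodeAffinity of pods in that namespace.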
Prerequisites You have installed oc . You have logged in to oc as a user with cluster-admin privileges. You have installed the Multiarch Tuning Operator. Procedure Create a ClusterPodPlacementConfig object YAML file: Example ClusterPodPlacementConfig object configuration apiVersion: multiarch.openshift.io/v1beta1 kind: ClusterPodPlacementConfig metadata: name: cluster spec: logVerbosityLevel: Normal namespaceSelector: matchExpressions: - key: multiarch.openshift.io/exclude-pod-placement operator: DoesNotExist plugins: nodeAffinityScoring: enabled: true platforms: - architecture: amd64 weight: 100 - architecture: arm64 weight: 50 Create the ClusterPodPlacementConfig object by running the following command: USD oc create -f <file_name> 1 1 Replace <file_name> with the name of the ClusterPodPlacementConfig object YAML file. Verification To check that the ClusterPodPlacementConfig object is created, run the following command: USD oc get clusterpodplacementconfig Example output NAME AGE cluster 29s 3.11.4.2. Creating the ClusterPodPlacementConfig object by using the web console To deploy the pod placement operand that enables architecture-aware workload scheduling, you can create the ClusterPodPlacementConfig object by using the OpenShift Container Platform web console. Prerequisites You have access to the cluster with cluster-admin privileges. You have access to the OpenShift Container Platform web console. You have installed the Multiarch Tuning Operator. Procedure Log in to the OpenShift Container Platform web console. Navigate to Operators Installed Operators . On the Installed Operators page, click Multiarch Tuning Operator . Click the Cluster Pod Placement Config tab. Select either Form view or YAML view . Configure the ClusterPodPlacementConfig object parameters. Click Create . Optional: If you want to edit the ClusterPodPlacementConfig object, perform the following actions: Click the Cluster Pod Placement Config tab. Select Edit ClusterPodPlacementConfig from the options menu. Click YAML and edit the ClusterPodPlacementConfig object parameters. Click Save . Verification On the Cluster Pod Placement Config page, check that the ClusterPodPlacementConfig object is in the Ready state. 3.11.5. Deleting the ClusterPodPlacementConfig object by using the CLI You can create only one instance of the ClusterPodPlacementConfig object. If you want to re-create this object, you must first delete the existing instance. You can delete this object by using the OpenShift CLI ( oc ). Prerequisites You have installed oc . You have logged in to oc as a user with cluster-admin privileges. Procedure Log in to the OpenShift CLI ( oc ). Delete the ClusterPodPlacementConfig object by running the following command: USD oc delete clusterpodplacementconfig cluster Verification To check that the ClusterPodPlacementConfig object is deleted, run the following command: USD oc get clusterpodplacementconfig Example output No resources found 3.11.6. Deleting the ClusterPodPlacementConfig object by using the web console You can create only one instance of the ClusterPodPlacementConfig object. If you want to re-create this object, you must first delete the existing instance. You can delete this object by using the OpenShift Container Platform web console. Prerequisites You have access to the cluster with cluster-admin privileges. You have access to the OpenShift Container Platform web console. You have created the ClusterPodPlacementConfig object. Procedure Log in to the OpenShift Container Platform web console. 
Navigate to Operators Installed Operators . On the Installed Operators page, click Multiarch Tuning Operator . Click the Cluster Pod Placement Config tab. Select Delete ClusterPodPlacementConfig from the options menu. Click Delete . Verification On the Cluster Pod Placement Config page, check that the ClusterPodPlacementConfig object has been deleted. 3.11.7. Uninstalling the Multiarch Tuning Operator by using the CLI You can uninstall the Multiarch Tuning Operator by using the OpenShift CLI ( oc ). Prerequisites You have installed oc . You have logged in to oc as a user with cluster-admin privileges. You deleted the ClusterPodPlacementConfig object. Important You must delete the ClusterPodPlacementConfig object before uninstalling the Multiarch Tuning Operator. Uninstalling the Operator without deleting the ClusterPodPlacementConfig object can lead to unexpected behavior. Procedure Get the Subscription object name for the Multiarch Tuning Operator by running the following command: USD oc get subscription.operators.coreos.com -n <namespace> 1 1 Replace <namespace> with the name of the namespace where you want to uninstall the Multiarch Tuning Operator. Example output NAME PACKAGE SOURCE CHANNEL openshift-multiarch-tuning-operator multiarch-tuning-operator redhat-operators stable Get the currentCSV value for the Multiarch Tuning Operator by running the following command: USD oc get subscription.operators.coreos.com <subscription_name> -n <namespace> -o yaml | grep currentCSV 1 1 Replace <subscription_name> with the Subscription object name. For example: openshift-multiarch-tuning-operator . Replace <namespace> with the name of the namespace where you want to uninstall the Multiarch Tuning Operator. Example output currentCSV: multiarch-tuning-operator.<version> Delete the Subscription object by running the following command: USD oc delete subscription.operators.coreos.com <subscription_name> -n <namespace> 1 1 Replace <subscription_name> with the Subscription object name. Replace <namespace> with the name of the namespace where you want to uninstall the Multiarch Tuning Operator. Example output subscription.operators.coreos.com "openshift-multiarch-tuning-operator" deleted Delete the CSV for the Multiarch Tuning Operator in the target namespace using the currentCSV value by running the following command: USD oc delete clusterserviceversion <currentCSV_value> -n <namespace> 1 1 Replace <currentCSV> with the currentCSV value for the Multiarch Tuning Operator. For example: multiarch-tuning-operator.<version> . Replace <namespace> with the name of the namespace where you want to uninstall the Multiarch Tuning Operator. Example output clusterserviceversion.operators.coreos.com "multiarch-tuning-operator.<version>" deleted Verification To verify that the Multiarch Tuning Operator is uninstalled, run the following command: USD oc get csv -n <namespace> 1 1 Replace <namespace> with the name of the namespace where you have uninstalled the Multiarch Tuning Operator. Example output No resources found in openshift-multiarch-tuning-operator namespace. 3.11.8. Uninstalling the Multiarch Tuning Operator by using the web console You can uninstall the Multiarch Tuning Operator by using the OpenShift Container Platform web console. Prerequisites You have access to the cluster with cluster-admin permissions. You deleted the ClusterPodPlacementConfig object. Important You must delete the ClusterPodPlacementConfig object before uninstalling the Multiarch Tuning Operator. 
Uninstalling the Operator without deleting the ClusterPodPlacementConfig object can lead to unexpected behavior. Procedure Log in to the OpenShift Container Platform web console. Navigate to Operators OperatorHub . Enter Multiarch Tuning Operator in the search field. Click Multiarch Tuning Operator . Click the Details tab. From the Actions menu, select Uninstall Operator . When prompted, click Uninstall . Verification Navigate to Operators Installed Operators . On the Installed Operators page, verify that the Multiarch Tuning Operator is not listed. 3.12. Multiarch Tuning Operator release notes The Multiarch Tuning Operator optimizes workload management within multi-architecture clusters and in single-architecture clusters transitioning to multi-architecture environments. These release notes track the development of the Multiarch Tuning Operator. For more information, see Managing workloads on multi-architecture clusters by using the Multiarch Tuning Operator . 3.12.1. Release notes for the Multiarch Tuning Operator 1.1.0 Issued: 18 March 2024 3.12.1.1. New features and enhancements The Multiarch Tuning Operator is now supported on managed offerings, including ROSA with Hosted Control Planes (HCP) and other HCP environments. With this release, you can configure architecture-aware workload scheduling by using the new plugins field in the ClusterPodPlacementConfig object. You can use the plugins.nodeAffinityScoring field to set architecture preferences for pod placement. If you enable the nodeAffinityScoring plugin, the scheduler first filters out nodes that do not meet the pod requirements. Then, the scheduler prioritizes the remaining nodes based on the architecture scores defined in the nodeAffinityScoring.platforms field. 3.12.1.1.1. Bug fixes With this release, the Multiarch Tuning Operator does not update the nodeAffinity field for pods that are managed by a daemon set. ( OCPBUGS-45885 ) 3.12.2. Release notes for the Multiarch Tuning Operator 1.0.0 Issued: 31 October 2024 3.12.2.1. New features and enhancements With this release, the Multiarch Tuning Operator supports custom network scenarios and cluster-wide custom registries configurations. With this release, you can identify pods based on their architecture compatibility by using the pod labels that the Multiarch Tuning Operator adds to newly created pods. With this release, you can monitor the behavior of the Multiarch Tuning Operator by using the metrics and alerts that are registered in the Cluster Monitoring Operator. | [
"oc adm release info -o jsonpath=\"{ .metadata.metadata}\"",
"{ \"release.openshift.io/architecture\": \"multi\", \"url\": \"https://access.redhat.com/errata/<errata_version>\" }",
"{ \"url\": \"https://access.redhat.com/errata/<errata_version>\" }",
"az login",
"az storage account create -n USD{STORAGE_ACCOUNT_NAME} -g USD{RESOURCE_GROUP} -l westus --sku Standard_LRS 1",
"az storage container create -n USD{CONTAINER_NAME} --account-name USD{STORAGE_ACCOUNT_NAME}",
"RHCOS_VHD_ORIGIN_URL=USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.aarch64.\"rhel-coreos-extensions\".\"azure-disk\".url')",
"BLOB_NAME=rhcos-USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.aarch64.\"rhel-coreos-extensions\".\"azure-disk\".release')-azure.aarch64.vhd",
"end=`date -u -d \"30 minutes\" '+%Y-%m-%dT%H:%MZ'`",
"sas=`az storage container generate-sas -n USD{CONTAINER_NAME} --account-name USD{STORAGE_ACCOUNT_NAME} --https-only --permissions dlrw --expiry USDend -o tsv`",
"az storage blob copy start --account-name USD{STORAGE_ACCOUNT_NAME} --sas-token \"USDsas\" --source-uri \"USD{RHCOS_VHD_ORIGIN_URL}\" --destination-blob \"USD{BLOB_NAME}\" --destination-container USD{CONTAINER_NAME}",
"az storage blob show -c USD{CONTAINER_NAME} -n USD{BLOB_NAME} --account-name USD{STORAGE_ACCOUNT_NAME} | jq .properties.copy",
"{ \"completionTime\": null, \"destinationSnapshot\": null, \"id\": \"1fd97630-03ca-489a-8c4e-cfe839c9627d\", \"incrementalCopy\": null, \"progress\": \"17179869696/17179869696\", \"source\": \"https://rhcos.blob.core.windows.net/imagebucket/rhcos-411.86.202207130959-0-azure.aarch64.vhd\", \"status\": \"success\", 1 \"statusDescription\": null }",
"az sig create --resource-group USD{RESOURCE_GROUP} --gallery-name USD{GALLERY_NAME}",
"az sig image-definition create --resource-group USD{RESOURCE_GROUP} --gallery-name USD{GALLERY_NAME} --gallery-image-definition rhcos-arm64 --publisher RedHat --offer arm --sku arm64 --os-type linux --architecture Arm64 --hyper-v-generation V2",
"RHCOS_VHD_URL=USD(az storage blob url --account-name USD{STORAGE_ACCOUNT_NAME} -c USD{CONTAINER_NAME} -n \"USD{BLOB_NAME}\" -o tsv)",
"az sig image-version create --resource-group USD{RESOURCE_GROUP} --gallery-name USD{GALLERY_NAME} --gallery-image-definition rhcos-arm64 --gallery-image-version 1.0.0 --os-vhd-storage-account USD{STORAGE_ACCOUNT_NAME} --os-vhd-uri USD{RHCOS_VHD_URL}",
"az sig image-version show -r USDGALLERY_NAME -g USDRESOURCE_GROUP -i rhcos-arm64 -e 1.0.0",
"/resourceGroups/USD{RESOURCE_GROUP}/providers/Microsoft.Compute/galleries/USD{GALLERY_NAME}/images/rhcos-arm64/versions/1.0.0",
"az login",
"az storage account create -n USD{STORAGE_ACCOUNT_NAME} -g USD{RESOURCE_GROUP} -l westus --sku Standard_LRS 1",
"az storage container create -n USD{CONTAINER_NAME} --account-name USD{STORAGE_ACCOUNT_NAME}",
"RHCOS_VHD_ORIGIN_URL=USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.x86_64.\"rhel-coreos-extensions\".\"azure-disk\".url')",
"BLOB_NAME=rhcos-USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.x86_64.\"rhel-coreos-extensions\".\"azure-disk\".release')-azure.x86_64.vhd",
"end=`date -u -d \"30 minutes\" '+%Y-%m-%dT%H:%MZ'`",
"sas=`az storage container generate-sas -n USD{CONTAINER_NAME} --account-name USD{STORAGE_ACCOUNT_NAME} --https-only --permissions dlrw --expiry USDend -o tsv`",
"az storage blob copy start --account-name USD{STORAGE_ACCOUNT_NAME} --sas-token \"USDsas\" --source-uri \"USD{RHCOS_VHD_ORIGIN_URL}\" --destination-blob \"USD{BLOB_NAME}\" --destination-container USD{CONTAINER_NAME}",
"az storage blob show -c USD{CONTAINER_NAME} -n USD{BLOB_NAME} --account-name USD{STORAGE_ACCOUNT_NAME} | jq .properties.copy",
"{ \"completionTime\": null, \"destinationSnapshot\": null, \"id\": \"1fd97630-03ca-489a-8c4e-cfe839c9627d\", \"incrementalCopy\": null, \"progress\": \"17179869696/17179869696\", \"source\": \"https://rhcos.blob.core.windows.net/imagebucket/rhcos-411.86.202207130959-0-azure.aarch64.vhd\", \"status\": \"success\", 1 \"statusDescription\": null }",
"az sig create --resource-group USD{RESOURCE_GROUP} --gallery-name USD{GALLERY_NAME}",
"az sig image-definition create --resource-group USD{RESOURCE_GROUP} --gallery-name USD{GALLERY_NAME} --gallery-image-definition rhcos-x86_64 --publisher RedHat --offer x86_64 --sku x86_64 --os-type linux --architecture x64 --hyper-v-generation V2",
"RHCOS_VHD_URL=USD(az storage blob url --account-name USD{STORAGE_ACCOUNT_NAME} -c USD{CONTAINER_NAME} -n \"USD{BLOB_NAME}\" -o tsv)",
"az sig image-version create --resource-group USD{RESOURCE_GROUP} --gallery-name USD{GALLERY_NAME} --gallery-image-definition rhcos-arm64 --gallery-image-version 1.0.0 --os-vhd-storage-account USD{STORAGE_ACCOUNT_NAME} --os-vhd-uri USD{RHCOS_VHD_URL}",
"az sig image-version show -r USDGALLERY_NAME -g USDRESOURCE_GROUP -i rhcos-x86_64 -e 1.0.0",
"/resourceGroups/USD{RESOURCE_GROUP}/providers/Microsoft.Compute/galleries/USD{GALLERY_NAME}/images/rhcos-x86_64/versions/1.0.0",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker name: <infrastructure_id>-machine-set-0 namespace: openshift-machine-api spec: replicas: 2 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-machine-set-0 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: <infrastructure_id>-machine-set-0 spec: lifecycleHooks: {} metadata: {} providerSpec: value: acceleratedNetworking: true apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: offer: \"\" publisher: \"\" resourceID: /resourceGroups/USD{RESOURCE_GROUP}/providers/Microsoft.Compute/galleries/USD{GALLERY_NAME}/images/rhcos-arm64/versions/1.0.0 1 sku: \"\" version: \"\" kind: AzureMachineProviderSpec location: <region> managedIdentity: <infrastructure_id>-identity networkResourceGroup: <infrastructure_id>-rg osDisk: diskSettings: {} diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: <infrastructure_id> resourceGroup: <infrastructure_id>-rg subnet: <infrastructure_id>-worker-subnet userDataSecret: name: worker-user-data vmSize: Standard_D4ps_v5 2 vnet: <infrastructure_id>-vnet zone: \"<zone>\"",
"oc create -f <file_name> 1",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE <infrastructure_id>-machine-set-0 2 2 2 2 10m",
"oc get nodes",
"oc adm release info -o jsonpath=\"{ .metadata.metadata}\"",
"{ \"release.openshift.io/architecture\": \"multi\", \"url\": \"https://access.redhat.com/errata/<errata_version>\" }",
"{ \"url\": \"https://access.redhat.com/errata/<errata_version>\" }",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-aws-machine-set-0 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> 5 machine.openshift.io/cluster-api-machine-type: <role> 6 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> 7 spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" providerSpec: value: ami: id: ami-02a574449d4f4d280 8 apiVersion: awsproviderconfig.openshift.io/v1beta1 blockDevices: - ebs: iops: 0 volumeSize: 120 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: <infrastructure_id>-worker-profile 9 instanceType: m6g.xlarge 10 kind: AWSMachineProviderConfig placement: availabilityZone: us-east-1a 11 region: <region> 12 securityGroups: - filters: - name: tag:Name values: - <infrastructure_id>-node 13 subnet: filters: - name: tag:Name values: - <infrastructure_id>-subnet-private-<zone> tags: - name: kubernetes.io/cluster/<infrastructure_id> 14 value: owned - name: <custom_tag_name> value: <custom_tag_value> userDataSecret: name: worker-user-data",
"oc get -o jsonpath=\"{.status.infrastructureName}{'\\n'}\" infrastructure cluster",
"oc get configmap/coreos-bootimages -n openshift-machine-config-operator -o jsonpath='{.data.stream}' | jq -r '.architectures.<arch>.images.aws.regions.\"<region>\".image'",
"oc create -f <file_name> 1",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE <infrastructure_id>-aws-machine-set-0 2 2 2 2 10m",
"oc get nodes",
"oc adm release info -o jsonpath=\"{ .metadata.metadata}\"",
"{ \"release.openshift.io/architecture\": \"multi\", \"url\": \"https://access.redhat.com/errata/<errata_version>\" }",
"{ \"url\": \"https://access.redhat.com/errata/<errata_version>\" }",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-w-a namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" providerSpec: value: apiVersion: gcpprovider.openshift.io/v1beta1 canIPForward: false credentialsSecret: name: gcp-cloud-credentials deletionProtection: false disks: - autoDelete: true boot: true image: <path_to_image> 3 labels: null sizeGb: 128 type: pd-ssd gcpMetadata: 4 - key: <custom_metadata_key> value: <custom_metadata_value> kind: GCPMachineProviderSpec machineType: n1-standard-4 5 metadata: creationTimestamp: null networkInterfaces: - network: <infrastructure_id>-network subnetwork: <infrastructure_id>-worker-subnet projectID: <project_name> 6 region: us-central1 7 serviceAccounts: - email: <infrastructure_id>-w@<project_name>.iam.gserviceaccount.com scopes: - https://www.googleapis.com/auth/cloud-platform tags: - <infrastructure_id>-worker userDataSecret: name: worker-user-data zone: us-central1-a",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc get configmap/coreos-bootimages -n openshift-machine-config-operator -o jsonpath='{.data.stream}' | jq -r '.architectures.aarch64.images.gcp'",
"\"gcp\": { \"release\": \"415.92.202309142014-0\", \"project\": \"rhcos-cloud\", \"name\": \"rhcos-415-92-202309142014-0-gcp-aarch64\" }",
"projects/<project>/global/images/<image_name>",
"oc create -f <file_name> 1",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE <infrastructure_id>-gcp-machine-set-0 2 2 2 2 10m",
"oc get nodes",
"oc adm release info -o jsonpath=\"{ .metadata.metadata}\"",
"{ \"release.openshift.io/architecture\": \"multi\", \"url\": \"https://access.redhat.com/errata/<errata_version>\" }",
"{ \"url\": \"https://access.redhat.com/errata/<errata_version>\" }",
"oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign",
"curl -k http://<HTTP_server>/worker.ign",
"RHCOS_VHD_ORIGIN_URL=USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.<architecture>.artifacts.metal.formats.iso.disk.location')",
"sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2",
"sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b",
"DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img 2",
"kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign 1 2 initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img 3 boot",
"menuentry 'Install CoreOS' { linux rhcos-<version>-live-kernel-<architecture> coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign 1 2 initrd rhcos-<version>-live-initramfs.<architecture>.img 3 }",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.30.3 master-1 Ready master 63m v1.30.3 master-2 Ready master 64m v1.30.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.30.3 master-1 Ready master 73m v1.30.3 master-2 Ready master 74m v1.30.3 worker-0 Ready worker 11m v1.30.3 worker-1 Ready worker 11m v1.30.3",
"oc adm release info -o jsonpath=\"{ .metadata.metadata}\"",
"{ \"release.openshift.io/architecture\": \"multi\", \"url\": \"https://access.redhat.com/errata/<errata_version>\" }",
"{ \"url\": \"https://access.redhat.com/errata/<errata_version>\" }",
"oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign",
"curl -k http://<http_server>/worker.ign",
"curl -LO USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.kernel.location')",
"curl -LO USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.initramfs.location')",
"curl -LO USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.rootfs.location')",
"cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/dasda coreos.inst.ignition_url=http://<http_server>/worker.ign coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 rd.dasd=0.0.3490 zfcp.allow_lun_scan=0",
"cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/sda coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.ignition_url=http://<http_server>/worker.ign ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 zfcp.allow_lun_scan=0 rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.1987,0x50050763071bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763071bc5e3,0x4008400B00000000",
"ipl c",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.30.3 master-1 Ready master 63m v1.30.3 master-2 Ready master 64m v1.30.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.30.3 master-1 Ready master 73m v1.30.3 master-2 Ready master 74m v1.30.3 worker-0 Ready worker 11m v1.30.3 worker-1 Ready worker 11m v1.30.3",
"oc adm release info -o jsonpath=\"{ .metadata.metadata}\"",
"{ \"release.openshift.io/architecture\": \"multi\", \"url\": \"https://access.redhat.com/errata/<errata_version>\" }",
"{ \"url\": \"https://access.redhat.com/errata/<errata_version>\" }",
"oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign",
"curl -k http://<http_server>/worker.ign",
"curl -LO USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.kernel.location')",
"curl -LO USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.initramfs.location')",
"curl -LO USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.rootfs.location')",
"cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/dasda coreos.inst.ignition_url=http://<http_server>/worker.ign coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 rd.dasd=0.0.3490 zfcp.allow_lun_scan=0",
"cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/sda coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.ignition_url=http://<http_server>/worker.ign ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 zfcp.allow_lun_scan=0 rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.1987,0x50050763071bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763071bc5e3,0x4008400B00000000",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.30.3 master-1 Ready master 63m v1.30.3 master-2 Ready master 64m v1.30.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.30.3 master-1 Ready master 73m v1.30.3 master-2 Ready master 74m v1.30.3 worker-0 Ready worker 11m v1.30.3 worker-1 Ready worker 11m v1.30.3",
"oc adm release info -o jsonpath=\"{ .metadata.metadata}\"",
"{ \"release.openshift.io/architecture\": \"multi\", \"url\": \"https://access.redhat.com/errata/<errata_version>\" }",
"{ \"url\": \"https://access.redhat.com/errata/<errata_version>\" }",
"oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign",
"curl -k http://<HTTP_server>/worker.ign",
"curl -LO USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.kernel.location')",
"curl -LO USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.initramfs.location')",
"curl -LO USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.rootfs.location')",
"virt-install --connect qemu:///system --name <vm_name> --autostart --os-variant rhel9.4 \\ 1 --cpu host --vcpus <vcpus> --memory <memory_mb> --disk <vm_name>.qcow2,size=<image_size> --network network=<virt_network_parm> --location <media_location>,kernel=<rhcos_kernel>,initrd=<rhcos_initrd> \\ 2 --extra-args \"rd.neednet=1\" --extra-args \"coreos.inst.install_dev=/dev/vda\" --extra-args \"coreos.inst.ignition_url=http://<http_server>/worker.ign \" \\ 3 --extra-args \"coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img\" \\ 4 --extra-args \"ip=<ip>::<gateway>:<netmask>:<hostname>::none\" \\ 5 --extra-args \"nameserver=<dns>\" --extra-args \"console=ttysclp0\" --noautoconsole --wait",
"osinfo-query os -f short-id",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.30.3 master-1 Ready master 63m v1.30.3 master-2 Ready master 64m v1.30.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.30.3 master-1 Ready master 73m v1.30.3 master-2 Ready master 74m v1.30.3 worker-0 Ready worker 11m v1.30.3 worker-1 Ready worker 11m v1.30.3",
"oc adm release info -o jsonpath=\"{ .metadata.metadata}\"",
"{ \"release.openshift.io/architecture\": \"multi\", \"url\": \"https://access.redhat.com/errata/<errata_version>\" }",
"{ \"url\": \"https://access.redhat.com/errata/<errata_version>\" }",
"oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign",
"curl -k http://<HTTP_server>/worker.ign",
"RHCOS_VHD_ORIGIN_URL=USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.<architecture>.artifacts.metal.formats.iso.disk.location')",
"sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2",
"sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b",
"DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img 2",
"kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign 1 2 initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img 3 boot",
"menuentry 'Install CoreOS' { linux rhcos-<version>-live-kernel-<architecture> coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign 1 2 initrd rhcos-<version>-live-initramfs.<architecture>.img 3 }",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.30.3 master-1 Ready master 63m v1.30.3 master-2 Ready master 64m v1.30.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes -o wide",
"NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME worker-0-ppc64le Ready worker 42d v1.30.3 192.168.200.21 <none> Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.ppc64le cri-o://1.30.3-3.rhaos4.15.gitb36169e.el9 worker-1-ppc64le Ready worker 42d v1.30.3 192.168.200.20 <none> Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.ppc64le cri-o://1.30.3-3.rhaos4.15.gitb36169e.el9 master-0-x86 Ready control-plane,master 75d v1.30.3 10.248.0.38 10.248.0.38 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.30.3-3.rhaos4.15.gitb36169e.el9 master-1-x86 Ready control-plane,master 75d v1.30.3 10.248.0.39 10.248.0.39 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.30.3-3.rhaos4.15.gitb36169e.el9 master-2-x86 Ready control-plane,master 75d v1.30.3 10.248.0.40 10.248.0.40 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.30.3-3.rhaos4.15.gitb36169e.el9 worker-0-x86 Ready worker 75d v1.30.3 10.248.0.43 10.248.0.43 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.30.3-3.rhaos4.15.gitb36169e.el9 worker-1-x86 Ready worker 75d v1.30.3 10.248.0.44 10.248.0.44 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.30.3-3.rhaos4.15.gitb36169e.el9",
"apiVersion: apps/v1 kind: Deployment metadata: # spec: # template: # spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/arch operator: In values: 1 - amd64 - arm64",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: # spec: # template: # spec: # taints: - effect: NoSchedule key: multiarch.openshift.io/arch value: arm64",
"oc adm taint nodes <node-name> multiarch.openshift.io/arch=arm64:NoSchedule",
"oc annotate namespace my-namespace 'scheduler.alpha.kubernetes.io/defaultTolerations'='[{\"operator\": \"Exists\", \"effect\": \"NoSchedule\", \"key\": \"multiarch.openshift.io/arch\"}]'",
"apiVersion: apps/v1 kind: Deployment metadata: # spec: # template: # spec: tolerations: - key: \"multiarch.openshift.io/arch\" value: \"arm64\" operator: \"Equal\" effect: \"NoSchedule\"",
"apiVersion: apps/v1 kind: Deployment metadata: # spec: # template: # spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/arch operator: In values: - amd64 - arm64 tolerations: - key: \"multiarch.openshift.io/arch\" value: \"arm64\" operator: \"Equal\" effect: \"NoSchedule\"",
"oc label node <node_name> <label>",
"oc label node worker-arm64-01 node-role.kubernetes.io/worker-64k-pages=",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-64k-pages spec: machineConfigSelector: matchExpressions: - key: machineconfiguration.openshift.io/role operator: In values: - worker - worker-64k-pages nodeSelector: matchLabels: node-role.kubernetes.io/worker-64k-pages: \"\" kubernetes.io/arch: arm64",
"oc create -f <filename>.yaml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"worker-64k-pages\" 1 name: 99-worker-64kpages spec: kernelType: 64k-pages 2",
"oc get mcp",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-9d55ac9a91127c36314e1efe7d77fbf8 True False False 3 3 3 0 361d worker rendered-worker-e7b61751c4a5b7ff995d64b967c421ff True False False 7 7 7 0 361d worker-64k-pages rendered-worker-64k-pages-e7b61751c4a5b7ff995d64b967c421ff True False False 2 2 2 0 35m",
"oc patch is/cli-artifacts -n openshift -p '{\"spec\":{\"tags\":[{\"name\":\"latest\",\"importPolicy\":{\"importMode\":\"PreserveOriginal\"}}]}}'",
"oc get istag cli-artifacts:latest -n openshift -oyaml",
"dockerImageManifests: - architecture: amd64 digest: sha256:16d4c96c52923a9968fbfa69425ec703aff711f1db822e4e9788bf5d2bee5d77 manifestSize: 1252 mediaType: application/vnd.docker.distribution.manifest.v2+json os: linux - architecture: arm64 digest: sha256:6ec8ad0d897bcdf727531f7d0b716931728999492709d19d8b09f0d90d57f626 manifestSize: 1252 mediaType: application/vnd.docker.distribution.manifest.v2+json os: linux - architecture: ppc64le digest: sha256:65949e3a80349cdc42acd8c5b34cde6ebc3241eae8daaeea458498fedb359a6a manifestSize: 1252 mediaType: application/vnd.docker.distribution.manifest.v2+json os: linux - architecture: s390x digest: sha256:75f4fa21224b5d5d511bea8f92dfa8e1c00231e5c81ab95e83c3013d245d1719 manifestSize: 1252 mediaType: application/vnd.docker.distribution.manifest.v2+json os: linux",
"oc create ns openshift-multiarch-tuning-operator",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-multiarch-tuning-operator namespace: openshift-multiarch-tuning-operator spec: {}",
"oc create -f <file_name> 1",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-multiarch-tuning-operator namespace: openshift-multiarch-tuning-operator spec: channel: stable name: multiarch-tuning-operator source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Automatic startingCSV: multiarch-tuning-operator.<version>",
"oc create -f <file_name> 1",
"oc get csv -n openshift-multiarch-tuning-operator",
"NAME DISPLAY VERSION REPLACES PHASE multiarch-tuning-operator.<version> Multiarch Tuning Operator <version> multiarch-tuning-operator.1.0.0 Succeeded",
"oc get operatorgroup -n openshift-multiarch-tuning-operator",
"NAME AGE openshift-multiarch-tuning-operator-q8zbb 133m",
"oc get subscription -n openshift-multiarch-tuning-operator",
"NAME PACKAGE SOURCE CHANNEL multiarch-tuning-operator multiarch-tuning-operator redhat-operators stable",
"apiVersion: multiarch.openshift.io/v1beta1 kind: ClusterPodPlacementConfig metadata: name: cluster 1 spec: logVerbosityLevel: Normal 2 namespaceSelector: 3 matchExpressions: - key: multiarch.openshift.io/exclude-pod-placement operator: DoesNotExist plugins: 4 nodeAffinityScoring: 5 enabled: true 6 platforms: 7 - architecture: amd64 8 weight: 100 9 - architecture: arm64 weight: 50",
"namespaceSelector: matchExpressions: - key: multiarch.openshift.io/include-pod-placement operator: Exists",
"apiVersion: multiarch.openshift.io/v1beta1 kind: ClusterPodPlacementConfig metadata: name: cluster spec: logVerbosityLevel: Normal namespaceSelector: matchExpressions: - key: multiarch.openshift.io/exclude-pod-placement operator: DoesNotExist plugins: nodeAffinityScoring: enabled: true platforms: - architecture: amd64 weight: 100 - architecture: arm64 weight: 50",
"oc create -f <file_name> 1",
"oc get clusterpodplacementconfig",
"NAME AGE cluster 29s",
"oc delete clusterpodplacementconfig cluster",
"oc get clusterpodplacementconfig",
"No resources found",
"oc get subscription.operators.coreos.com -n <namespace> 1",
"NAME PACKAGE SOURCE CHANNEL openshift-multiarch-tuning-operator multiarch-tuning-operator redhat-operators stable",
"oc get subscription.operators.coreos.com <subscription_name> -n <namespace> -o yaml | grep currentCSV 1",
"currentCSV: multiarch-tuning-operator.<version>",
"oc delete subscription.operators.coreos.com <subscription_name> -n <namespace> 1",
"subscription.operators.coreos.com \"openshift-multiarch-tuning-operator\" deleted",
"oc delete clusterserviceversion <currentCSV_value> -n <namespace> 1",
"clusterserviceversion.operators.coreos.com \"multiarch-tuning-operator.<version>\" deleted",
"oc get csv -n <namespace> 1",
"No resources found in openshift-multiarch-tuning-operator namespace."
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/postinstallation_configuration/configuring-multi-architecture-compute-machines-on-an-openshift-cluster |
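The worker procedures above repeat the same pattern: approve the pending certificate signing requests, then confirm the new machines reach Ready. A minimal sketch of that loop, reusing the exact oc get csr and oc adm certificate approve one-liners shown above, follows; it assumes a logged-in oc session with cluster-admin rights, and the iteration count and sleep interval are arbitrary illustrative values.

#!/usr/bin/env bash
# Approve every CSR that has no status yet (still Pending), the same way
# the one-liners in the procedure above do, until the new workers join.
set -uo pipefail

for _ in $(seq 1 30); do
  oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' \
    | xargs --no-run-if-empty oc adm certificate approve
  sleep 20
done

# Confirm that the new workers are listed and report Ready.
oc get nodes

Two rounds of CSRs are issued for each new machine (the node-bootstrapper requests and then the node client certificates), so letting a loop like this run until oc get nodes shows the workers as Ready saves approving each round by hand.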
7.168. pulseaudio | 7.168. pulseaudio 7.168.1. RHBA-2015:0655 - pulseaudio bug fix update Updated pulseaudio packages that fix several bugs are now available for Red Hat Enterprise Linux 6. PulseAudio is a sound server for Linux and other Unix-like operating systems. It is intended to be an improved drop-in replacement for the Enlightened Sound Daemon (ESOUND). Bug Fixes BZ# 812444 Previously, the pulseaudio(1) man page did not mention the PulseAudio cookie file. As a consequence, if a user wanted to connect to the audio server but was logged in with a different user and cookie, the connection failed, and it was not clear from the documentation what the user must do. With this update, the man page has been improved, and the necessary steps can be found there. BZ# 1111375 Prior to this update, certain applications that require lower audio latency produced low-quality sound when using the PulseAudio "combine" module. With this update, the "combine" module uses automatically adjusted audio latency instead of fixed high audio latency. As a result, sound quality is no longer affected when using low-latency applications with the "combine" module. BZ# 1110950 Previously, the following warning message was displayed during the booting process when using PulseAudio : udevd[PID]: GOTO 'pulseaudio_check_usb' has no matching label in: '/lib/udev/rules.d/90-pulseaudio.rules' The invalid parameter that caused this problem has been removed from PulseAudio udev rules, and the warning message no longer appears. Users of pulseaudio are advised to upgrade to these updated packages, which fix these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-pulseaudio |
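The pulseaudio(1) man page fix described above concerns the PulseAudio authentication cookie: a client running as a different user needs a copy of the daemon owner's cookie before the server accepts the connection. A hedged sketch of one way to set that up follows; the ~/.pulse-cookie path is the usual location for the PulseAudio version shipped with RHEL 6, the user names alice and bob are hypothetical, and the PULSE_SERVER address is only needed when the daemon accepts TCP connections.

# As root, give bob a private copy of alice's PulseAudio cookie.
install -m 600 -o bob ~alice/.pulse-cookie /home/bob/alice-pulse-cookie

# As bob, point the client libraries at the copied cookie before connecting.
export PULSE_COOKIE=/home/bob/alice-pulse-cookie
export PULSE_SERVER=127.0.0.1   # illustrative; omit when using the default local socket
pactl info                      # should now authenticate against alice's daemon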
Machine APIs | Machine APIs OpenShift Container Platform 4.16 Reference guide for machine APIs Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/machine_apis/index |
7.67. glibc | 7.67. glibc 7.67.1. RHBA-2013:0279 - glibc bug fix update Updated glibc packages that fix multiple bugs are now available for Red Hat Enterprise Linux 6. The glibc packages provide the standard C and standard math libraries, which are used by multiple programs on the system. These libraries are required for the Linux system to function correctly. Bug Fixes BZ#804686 Prior to this update, a logic error caused the DNS code of glibc to incorrectly handle rejected responses from DNS servers. As a consequence, additional servers in the /etc/resolv.conf file could not be searched after one server responded with a REJECT. This update modifies the logic in the DNS. Now, glibc cycles through the servers listed in /etc/resolv.conf even if one returns a REJECT response. BZ#806404 Prior to this update, the nss/getnssent.c file contained an unchecked malloc call and an incorrect loop test. As a consequence, glibc could abort unexpectedly. This update modifies the malloc call and the loop test. BZ# 809726 Prior to this update, locale data for the characters in the range a-z were incorrect in the Finnish locale. As a consequence, some characters in the range a-z failed to print correctly in the Finnish locale. This update modifies the underlying code to provide the correct output for these characters. Now, characters in the Finnish locale print as expected. BZ#823909 If a file or a string was in the IBM-930 encoding, and contained the invalid multibyte character "0xffff", attempting to use iconv() (or the iconv command) to convert that file or string to another encoding, such as UTF-8, resulted in a segmentation fault. Now, the conversion code for the IBM-930 encoding recognizes this invalid character and calls an error handler, rather than causing a segmentation fault. BZ#826149 Prior to this update, the fnmatch() function failed with the return value -1 when the wildcard character "*" was part of the pattern argument and the file name argument contained an invalid multibyte encoding. This update modifies the fnmatch() code to recognize this case. Now, the invalid characters are treated as not matching and then the process proceeds. BZ#827362 Prior to this update, the internal FILE offset was set incorrectly in wide character streams. As a consequence, the offset returned by ftell was incorrect. In some cases, this could result in over-writing data. This update modifies the ftell code to correctly set the internal FILE offset field for wide characters. Now, ftell and fseek handle the offset as expected. BZ#829222 Prior to this update, the /etc/rpc file was not set as a configuration file in the glibc build. As a consequence, updating glibc caused the /etc/rpc file to be replaced without warning or creating a backup copy. This update correctly marks /etc/rpc as a configuration file. Now, the existing /etc/rpc file is left in place, and the bundled version can be installed in /etc/rpc.rpmnew . BZ#830127 Prior to this update, the vfprintf command returned the wrong error codes when encountering an overflow. As a consequence, applications which checked return codes from vfprintf could get unexpected values. This update modifies the error codes for overflow situations. BZ#832516 Prior to this update, the newlocale flag relied entirely on failure of an underlying open() call to set the errno variable for an incorrect locale name. 
As a consequence, the newlocale() function did not set the errno variable to an appropriate value when failing, if it has already been asked about the same incorrect locale name. This update modifies the logic in the loadlocale call so that subsequent attempts to load a non-existent locale more than once always set the errno variable appropriately. BZ# 832694 Prior to this update, the ESTALE error message referred only to NFS file systems. As a consequence, users were confused when non- NFS file systems triggered this error. This update modifies the error message to apply the error message to all file systems that can trigger this error. BZ#835090 Prior to this update, an internal array of name servers was only partially initialized when the /etc/resolv.conf file contained IPV6 name servers. As a consequence, applications could, depending on the exact contents of a nearby structure, abort. This update modifies the underlying code to handle IPV6 name servers listed in /etc/resolv.conf . BZ# 837695 Prior to this update, a buffer in the resolver code for glibc was too small to handle results for certain DNS queries. As a consequence, the query had to be repeated after a larger buffer was allocated and wasted time and network bandwidth. This update enlarges the buffer to handle the larger DNS results. BZ#837918 Prior to this update, the logic for the functions exp , exp2 , pow , sin , tan , and rint was erroneous. As a consequence, these functions could fail when running them in the non-default rounding mode. With this update, the functions return correct results across all 4 different rounding modes. BZ# 841787 Prior to this update, glibc incorrectly handled the options rotate option in the /etc/resolv.conf file if this file also contained one or more IPv6 name servers. As a consequence, DNS queries could unexpectedly fail, particularly when multiple queries were issued by a single process. This update modifies the internalization of the listed servers from /etc/resolv.conf into internal structures of glibc , as well as the sorting and rotation of those structures to implement the options rotate capability. Now, DNS names are resolved correctly in glibc . BZ#846342 Prior to this update, certain user-defined 32 bit executables could issue calls to the memcpy() function with overlapping arguments. As a consequence, the applications invoked undefined behavior and could fail. With this update, users with 32 bit applications which issue the memcpy function with overlapping arguments can create the /etc/sysconfig/32bit_ssse3_memcpy_via_32bit_ssse3_memmove . If this file exists, glibc redirects all calls to the SSSE3 memcpy copiers to the SSSE3 memmove copier, which is tolerant of overlapping arguments. Important We strongly encourage customers to identify and fix these problems in their source code. Overlapping arguments to memcpy() is a clear violation of the ANSI/ISO standards and Red Hat does not provide binary compatibility for applications which violate these standards. BZ#847932 Prior to this update, the strtod() , strtof() , and strtold() functions to convert a string to a numeric representation in glibc contained multiple integer overflow flaws. This caused stack-based buffer overflows. As a consequence, these functions could cause an application to abort or, under certain circumstances, execute arbitrary code. This update modifies the underlying code to avoid these faults. BZ#848082 Prior to this update, the setlocale() function failed to detect memory allocation problems. 
As a consequence, the setlocale() function eventually core dumped, due to NULL pointers or uninitialized strings. This update modifies the setlocale code to insure that memory allocation succeeded. Now, the setlocale() function no longer core dumps. BZ#849651 Prior to this update, the expf() function was considerably slowed down when saving and restoring the FPU state. This update adds a hand optimized assembler implementation of the expf() function for Intel 64 and AMD64 platforms. Now, the expf() function is considerably faster. BZ# 852445 Prior to this update, the PowerPC specific pthread_once code did not correctly publish changes it made. As a consequence, the changes were not visible to other threads at the right time. This update adds release barriers to the appropriate thread code to ensure correct synchronization of data between multiple threads. BZ# 861167 This update adds the MADV_DONTDUMP and MADV_DODUMP macros to the mman.h file to compile code that uses these macros. BZ#863453 Prior to this update, the nscd daemon attempted to free a pointer that was not provided by the malloc() function, due to an error in the memory management in glibc . As a consequence, nscd could terminate unexpectedly, when handling groups with a large number of members. This update ensures that memory allocated by the pool allocator is no longer passed to free . Now, the pool allocator's garbage collector reclaims the memory. As a result, nscd no longer crashes on groups with a large number of members. BZ# 864322 Prior to this update, the IPTOS_CLASS definition referenced the wrong object. As a consequence, applications that referenced the IPTOS_CLASS definition from the ip.h file did not build or failed to operate as expected. This update modifies the definition to reference the right object and applications that reference to the IPTOS_CLASS definition. Users of glibc are advised to upgrade to these updated packages, which fix these bugs ... 7.67.2. RHBA-2013:1179 - glibc bug fix update Updated glibc packages that fix one bug are now available for Red Hat Enterprise Linux 6. The glibc packages provide the standard C libraries (libc), POSIX thread libraries (libpthread), standard math libraries (libm), and the Name Server Caching Daemon (nscd) used by multiple programs on the system. Without these libraries, the Linux system cannot function correctly. Bug Fix BZ# 989558 The C library security framework was unable to handle dynamically loaded character conversion routines when loaded at specific virtual addresses. This resulted in an unexpected termination with a segmentation fault when trying to use the dynamically loaded character conversion routine. This update enhances the C library security framework to handle dynamically loaded character conversion routines at any virtual memory address, and crashes no longer occur in the described scenario. Users of glibc are advised to upgrade to these updated packages, which fix this bug. 7.67.3. RHBA-2013:1046 - glibc bug fix update Updated glibc packages that fix two bugs are now available for Red Hat Enterprise Linux 6. The glibc packages provide the standard C and standard math libraries used by multiple programs on the system. Without these libraries, the Linux system cannot function correctly. 
Bug Fixes BZ# 964044 A fix to prevent logic errors in various mathematical functions, including exp, exp2, expf, exp2f, pow, sin, tan, and rint, caused by inconsistent results when the functions were used with the non-default rounding mode, creates performance regressions for certain inputs. The performance regressions have been analyzed and the core routines have been optimized to bring performance back to reasonable levels. BZ# 970992 A program that opens and uses dynamic libraries which use thread-local storage variables may terminate unexpectedly with a segmentation fault when it is being audited by a module that also uses thread-local storage. This update modifies the dynamic linker to detect such a condition, and crashes no longer occur in the described scenario. Users of glibc are advised to upgrade to these updated packages, which fix these bugs. 7.67.4. RHBA-2013:1421 - glibc bug fix update Updated glibc packages that fix one bug are now available for Red Hat Enterprise Linux 6. The glibc packages provide the standard C libraries (libc), POSIX thread libraries (libpthread), standard math libraries (libm), and the name service cache daemon (nscd) used by multiple programs on the system. Without these libraries, the Linux system cannot function correctly. Bug Fix BZ# 1001050 A defect in the name service cache daemon (nscd) caused cached DNS queries, under certain conditions, to return only IPv4 addresses when querying for an address using the AF_UNSPEC address family, even though IPv4 and IPv6 results existed. The defect has been corrected and nscd correctly returns both IPv4 and IPv6 results if they both exist. Users of glibc are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/glibc |
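BZ#846342 above gates the overlapping-memcpy() workaround on the presence of a flag file. A short sketch of enabling and later removing it follows; the file path is taken directly from the erratum text, while the service name used for the restart is a hypothetical stand-in for whichever 32-bit application is affected.

# Run as root. Enable the workaround: the empty flag file is all glibc checks for.
touch /etc/sysconfig/32bit_ssse3_memcpy_via_32bit_ssse3_memmove

# Restart the affected 32-bit application so it picks up the flag at start.
service legacy-app restart

# Once the overlapping memcpy() calls are fixed in the source, remove the flag again.
rm -f /etc/sysconfig/32bit_ssse3_memcpy_via_32bit_ssse3_memmove

As the erratum stresses, this is a compatibility shim rather than a fix: overlapping arguments to memcpy() remain a standards violation and should be corrected in the application.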
10.4. Defining Role-Based Access Controls | 10.4. Defining Role-Based Access Controls Role-based access control grants a very different kind of authority to users compared to self-service and delegation access controls. Role-based access controls are fundamentally administrative, providing the ability to modify entries. There are three parts to role-based access controls: the permission , the privilege and the role . A privilege consists of one or more permissions, and a role consists of one or more privileges. A permission defines a specific operation or set of operations (such as read, write, add, or delete) and the target entries within the IdM LDAP directory to which those operations apply. Permissions are building blocks; they can be assigned to multiple privileges as needed. With IdM permissions, you can control which users have access to which objects and even which attributes of these objects. IdM enables you to whitelist or blacklist individual attributes or change the entire visibility of a specific IdM function, such as users, groups, or sudo, to all anonymous users, all authenticated users, or just a certain group of privileged users. This flexible approach to permissions is useful in scenarios when, for example, the administrator wants to limit access of users or groups only to the specific sections these users or groups need to access and to make the other sections completely hidden to them. A privilege is a group of permissions that can be applied to a role. For example, a permission can be created to add, edit, and delete automount locations. Then that permission can be combined with another permission relating to managing FTP services, and they can be used to create a single privilege that relates to managing filesystems. Note A privilege, in the context of Red Hat Identity Management, has a very specific meaning of an atomic unit of access control on which permissions and then roles are created. Privilege escalation as a concept of regular users temporarily gaining additional privileges does not exist in Red Hat Identity Management. Privileges are assigned to users by using Role-Based Access Controls (RBAC). Users either have the role that grants access, or they do not. Apart from users, privileges are also assigned to user groups, hosts, host groups and network services. This practice permits a fine-grained control of operations by a set of users on a set of hosts via specific network services. A role is a list of privileges which users specified for the role possess. Important Roles are used to classify permitted actions. They are not used as a tool to implement privilege separation or to protect from privilege escalation. It is possible to create entirely new permissions, as well as to create new privileges based on existing permissions or new permissions. Red Hat Identity Management provides the following range of pre-defined roles. Table 10.1. 
Predefined Roles in Red Hat Identity Management Role Privilege Description Helpdesk Modify Users and Reset passwords, Modify Group membership Responsible for performing simple user administration tasks IT Security Specialist Netgroups Administrators, HBAC Administrator, Sudo Administrator Responsible for managing security policy such as host-based access controls, sudo rules IT Specialist Host Administrators, Host Group Administrators, Service Administrators, Automount Administrators Responsible for managing hosts Security Architect Delegation Administrator, Replication Administrators, Write IPA Configuration, Password Policy Administrator Responsible for managing the Identity Management environment, creating trusts, creating replication agreements User Administrator User Administrators, Group Administrators, Stage User Administrators Responsible for creating users and groups 10.4.1. Roles 10.4.1.1. Creating Roles in the Web UI Open the IPA Server tab in the top menu, and select the Role-Based Access Control subtab. Click the Add link at the top of the list of the role-based access control instructions. Figure 10.6. Adding a New Role Enter the role name and a description. Figure 10.7. Form for Adding a Role Click the Add and Edit button to save the new role and go to the configuration page. At the top of the Users tab, or in the Users Groups tab when adding groups, click Add . Figure 10.8. Adding Users Select the users on the left and use the > button to move them to the Prospective column. Figure 10.9. Selecting Users At the top of the Privileges tab, click Add . Figure 10.10. Adding Privileges Select the privileges on the left and use the > button to move them to the Prospective column. Figure 10.11. Selecting Privileges Click the Add button to save. 10.4.1.2. Creating Roles in the Command Line Add the new role: Add the required privileges to the role: Add the required groups to the role. In this case, we are adding only a single group, useradmins , which already exists. 10.4.2. Permissions 10.4.2.1. Creating New Permissions from the Web UI Open the IPA Server tab in the top menu, and select the Role-Based Access Control subtab. Select the Permissions task link. Figure 10.12. Permissions Task Click the Add button at the top of the list of the permissions. Figure 10.13. Adding a New Permission Define the properties for the new permission in the form that shows up. Figure 10.14. Form for Adding a Permission Click the Add button under the form to save the permission. You can specify the following permission properties: Enter the name of the new permission. Select the appropriate Bind rule type : permission is the default permission type, granting access through privileges and roles all specifies that the permission applies to all authenticated users anonymous specifies that the permission applies to all users, including unauthenticated users Note It is not possible to add permissions with a non-default bind rule type to privileges. You also cannot set a permission that is already present in a privilege to a non-default bind rule type. Choose the rights that the permission grants in Granted rights . Define the method to identify the target entries for the permission: Type specifies an entry type, such as user, host, or service. If you choose a value for the Type setting, a list of all possible attributes which will be accessible through this ACI for that entry type appears under Effective Attributes . Defining Type sets Subtree and Target DN to one of the predefined values. 
Subtree specifies a subtree entry; every entry beneath this subtree entry is then targeted. Provide an existing subtree entry, as Subtree does not accept wildcards or non-existent domain names (DNs). For example: Extra target filter uses an LDAP filter to identify which entries the permission applies to. The filter can be any valid LDAP filter, for example: IdM automatically checks the validity of the given filter. If you enter an invalid filter, IdM warns you about this after you attempt to save the permission. Target DN specifies the domain name (DN) and accepts wildcards. For example: Member of group sets the target filter to members of the given group. After you fill out the filter settings and click Add , IdM validates the filter. If all the permission settings are correct, IdM will perform the search. If some of the permissions settings are incorrect, IdM will display a message informing you about which setting is set incorrectly. If you set Type , choose the Effective attributes from the list of available ACI attributes. If you did not use Type , add the attributes manually by writing them into the Effective attributes field. Add a single attribute at a time; to add multiple attributes, click Add to add another input field. Important If you do not set any attributes for the permission, then all attributes are included by default. 10.4.2.2. Creating New Permissions from the Command Line To add a new permission, issue the ipa permission-add command. Specify the properties of the permission by supplying the corresponding options: Supply the name of the permission. For example: --bindtype specifies the bind rule type. This options accepts the all , anonymous , and permission arguments. For example: If you do not use --bindtype , the type is automatically set to the default permission value. Note It is not possible to add permissions with a non-default bind rule type to privileges. You also cannot set a permission that is already present in a privilege to a non-default bind rule type. --permissions lists the rights granted by the permission. You can set multiple attributes by using multiple --permissions options or by listing the options in a comma-separated list inside curly braces. For example: --attrs gives the list of attributes over which the permission is granted. You can set multiple attributes by using multiple --attrs options or by listing the options in a comma-separated list inside curly braces. For example: The attributes provided with --attrs must exist and be allowed attributes for the given object type, otherwise the command fails with schema syntax errors. --type defines the entry object type, such as user, host, or service. Each type has its own set of allowed attributes. For example: --subtree gives a subtree entry; the filter then targets every entry beneath this subtree entry. Provide an existing subtree entry; --subtree does not accept wildcards or non-existent domain names (DNs). Include a DN within the directory. Because IdM uses a simplified, flat directory tree structure, --subtree can be used to target some types of entries, like automount locations, which are containers or parent entries for other configuration. For example: The --type and --subtree options are mutually exclusive. --filter uses an LDAP filter to identify which entries the permission applies to. IdM automatically checks the validity of the given filter. 
The filter can be any valid LDAP filter, for example: --memberof sets the target filter to members of the given group after checking that the group exists. For example: --targetgroup sets target to the specified user group after checking that the group exists. The Target DN setting, available in the web UI, is not available on the command line. Note For information about modifying and deleting permissions, run the ipa permission-mod --help and ipa permission-del --help commands. 10.4.2.3. Default Managed Permissions Managed permissions are permissions that come preinstalled with Identity Management. They behave like other permissions created by the user, with the following differences: You cannot modify their name, location, and target attributes. You cannot delete them. They have three sets of attributes: default attributes, which are managed by IdM and the user cannot modify them included attributes, which are additional attributes added by the user; to add an included attribute to a managed permission, specify the attribute by supplying the --includedattrs option with the ipa permission-mod command excluded attributes, which are attributes removed by the user; to add an excluded attribute to a managed permission, specify the attribute by supplying the --excludedattrs option with the ipa permission-mod command A managed permission applies to all attributes that appear in the default and included attribute sets but not in the excluded set. If you use the --attrs option when modifying a managed permission, the included and excluded attribute sets automatically adjust, so that only the attributes supplied with --attrs are enabled. Note While you cannot delete a managed permission, setting its bind type to permission and removing the managed permission from all privileges effectively disables it. Names of all managed permissions start with System: , for example System: Add Sudo rule or System: Modify Services . Earlier versions of IdM used a different scheme for default permissions, which, for example, forbade the user from modifying the default permissions and the user could only assign them to privileges. Most of these default permissions have been turned into managed permissions, however, the following permissions still use the previous scheme: Add Automember Rebuild Membership Task Add Replication Agreements Certificate Remove Hold Get Certificates status from the CA Modify DNA Range Modify Replication Agreements Remove Replication Agreements Request Certificate Request Certificates from a different host Retrieve Certificates from the CA Revoke Certificate Write IPA Configuration If you attempt to modify a managed permission from the web UI, the attributes that you cannot modify will be disabled. Figure 10.15. Disabled Attributes If you attempt to modify a managed permission from the command line, the system will not allow you to change the attributes that you cannot modify. For example, attempting to change a default System: Modify Users permission to apply to groups fails: You can, however, make the System: Modify Users permission not to apply to the GECOS attribute: 10.4.2.4. Permissions in Earlier Versions of Identity Management Earlier versions of Identity Management handled permissions differently, for example: The global IdM ACI granted read access to all users of the server, even anonymous ones - that is, not authenticated - users. Only write, add, and delete permission types were available.
The read permission was available too, but it was of little practical value because all users, including unauthenticated ones, had read access by default. The current version of Identity Management contains options for setting permissions which are much more fine-grained: The global IdM ACI does not grant read access to unauthenticated users. It is now possible to, for example, add both a filter and a subtree in the same permission. It is possible to add search and compare rights. The new way of handling permissions has significantly improved the IdM capabilities for controlling user or group access, while retaining backward compatibility with the earlier versions. Upgrading from an earlier version of IdM deletes the global IdM ACI on all servers and replaces it with managed permissions . Permissions created in the previous way are automatically converted to the current style whenever you modify them. If you do not attempt to change them, the permissions of the previous type stay unconverted. Once a permission uses the current style, it can never downgrade to the previous style. Note It is still possible to assign permissions to privileges on servers running an earlier version of IdM. The ipa permission-show and ipa permission-find commands recognize both the current permissions and the permissions of the previous style. While the outputs from both of these commands display permissions in the current style, the permissions themselves remain unchanged; the commands upgrade the permission entries before outputting the data only in memory, without committing the changes to LDAP. Permissions with both the previous and the current characteristics have effect on all servers - those running previous versions of IdM, as well as those running the current IdM version. However, you cannot create or modify permissions with the current characteristics on servers running previous versions of IdM. 10.4.3. Privileges 10.4.3.1. Creating New Privileges from the Web UI Open the IPA Server tab in the top menu, and select the Role-Based Access Control subtab. Select the Privileges task link. Figure 10.16. Privileges Task Click the Add link at the top of the list of the privileges. Figure 10.17. Adding a New Privilege Enter the name and a description of the privilege. Figure 10.18. Form for Adding a Privilege Click the Add and Edit button to go to the privilege configuration page to add permissions. Select the Permissions tab. Click Add at the top of the list of the permissions to add permission to the privilege. Figure 10.19. Adding Permissions Click the check box by the names of the permissions to add, and use the > button to move the permissions to the Prospective column. Figure 10.20. Selecting Permissions Click the Add button to save. 10.4.3.2. Creating New Privileges from the Command Line Privilege entries are created using the privilege-add command, and then permissions are added to the privilege group using the privilege-add-permission command. Create the privilege entry. Assign the required permissions. For example: | [
"kinit admin ipa role-add --desc=\"User Administrator\" useradmin ------------------------ Added role \"useradmin\" ------------------------ Role name: useradmin Description: User Administrator",
"ipa role-add-privilege --privileges=\"User Administrators\" useradmin Role name: useradmin Description: User Administrator Privileges: user administrators ---------------------------- Number of privileges added 1 ----------------------------",
"ipa role-add-member --groups=useradmins useradmin Role name: useradmin Description: User Administrator Member groups: useradmins Privileges: user administrators ------------------------- Number of members added 1 -------------------------",
"cn=automount,dc=example,dc=com",
"(!(objectclass=posixgroup))",
"uid=*,cn=users,cn=accounts,dc=com",
"ipa permission-add \"dns admin permission\"",
"--bindtype=all",
"--permissions=read --permissions=write --permissions={read,write}",
"--attrs=description --attrs=automountKey --attrs={description,automountKey}",
"ipa permission-add \"manage service\" --permissions=all --type=service --attrs=krbprincipalkey --attrs=krbprincipalname --attrs=managedby",
"ipa permission-add \"manage automount locations\" --subtree=\"ldap://ldap.example.com:389/cn=automount,dc=example,dc=com\" --permissions=write --attrs=automountmapname --attrs=automountkey --attrs=automountInformation",
"ipa permission-add \"manage Windows groups\" --filter=\"(!(objectclass=posixgroup))\" --permissions=write --attrs=description",
"ipa permission-add ManageHost --permissions=\"write\" --subtree=cn=computers,cn=accounts,dc=testrelm,dc=com --attr=nshostlocation --memberof=admins",
"ipa permission-mod 'System: Modify Users' --type=group ipa: ERROR: invalid 'ipapermlocation': not modifiable on managed permissions",
"ipa permission-mod 'System: Modify Users' --excludedattrs=gecos ------------------------------------------ Modified permission \"System: Modify Users\"",
"[jsmith@server ~]USD ipa privilege-add \"managing filesystems\" --desc=\"for filesystems\"",
"[jsmith@server ~]USD ipa privilege-add-permission \"managing filesystems\" --permissions=\"managing automount\" --permissions=\"managing ftp services\""
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/defining-roles |
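The permission, privilege, and role commands listed above are normally run as one sequence. A sketch of chaining them together follows, reusing names from this section where possible; the role name fsadmin and the member group fsadmins are hypothetical (the group must already exist in IdM), and the automount subtree reuses the example DN shown earlier.

kinit admin

# 1. Permission: write access to automount locations (as in 10.4.2.2).
ipa permission-add "manage automount locations" \
  --subtree="cn=automount,dc=example,dc=com" \
  --permissions=write \
  --attrs=automountmapname --attrs=automountkey --attrs=automountInformation

# 2. Privilege: group the permission (as in 10.4.3.2).
ipa privilege-add "managing filesystems" --desc="for filesystems"
ipa privilege-add-permission "managing filesystems" \
  --permissions="manage automount locations"

# 3. Role: hand the privilege to a group of administrators (as in 10.4.1.2).
ipa role-add fsadmin --desc="Filesystem Administrator"
ipa role-add-privilege fsadmin --privileges="managing filesystems"
ipa role-add-member fsadmin --groups=fsadmins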
Chapter 7. Ingress Operator in OpenShift Container Platform | Chapter 7. Ingress Operator in OpenShift Container Platform 7.1. OpenShift Container Platform Ingress Operator When you create your OpenShift Container Platform cluster, pods and services running on the cluster are each allocated their own IP addresses. The IP addresses are accessible to other pods and services running nearby but are not accessible to outside clients. The Ingress Operator implements the IngressController API and is the component responsible for enabling external access to OpenShift Container Platform cluster services. The Ingress Operator makes it possible for external clients to access your service by deploying and managing one or more HAProxy-based Ingress Controllers to handle routing. You can use the Ingress Operator to route traffic by specifying OpenShift Container Platform Route and Kubernetes Ingress resources. Configurations within the Ingress Controller, such as the ability to define endpointPublishingStrategy type and internal load balancing, provide ways to publish Ingress Controller endpoints. 7.2. The Ingress configuration asset The installation program generates an asset with an Ingress resource in the config.openshift.io API group, cluster-ingress-02-config.yml . YAML Definition of the Ingress resource apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: domain: apps.openshiftdemos.com The installation program stores this asset in the cluster-ingress-02-config.yml file in the manifests/ directory. This Ingress resource defines the cluster-wide configuration for Ingress. This Ingress configuration is used as follows: The Ingress Operator uses the domain from the cluster Ingress configuration as the domain for the default Ingress Controller. The OpenShift API Server Operator uses the domain from the cluster Ingress configuration. This domain is also used when generating a default host for a Route resource that does not specify an explicit host. 7.3. Ingress Controller configuration parameters The IngressController custom resource (CR) includes optional configuration parameters that you can configure to meet specific needs for your organization. Parameter Description domain domain is a DNS name serviced by the Ingress Controller and is used to configure multiple features: For the LoadBalancerService endpoint publishing strategy, domain is used to configure DNS records. See endpointPublishingStrategy . When using a generated default certificate, the certificate is valid for domain and its subdomains . See defaultCertificate . The value is published to individual Route statuses so that users know where to target external DNS records. The domain value must be unique among all Ingress Controllers and cannot be updated. If empty, the default value is ingress.config.openshift.io/cluster .spec.domain . replicas replicas is the number of Ingress Controller replicas. If not set, the default value is 2 . endpointPublishingStrategy endpointPublishingStrategy is used to publish the Ingress Controller endpoints to other networks, enable load balancer integrations, and provide access to other systems. For cloud environments, use the loadBalancer field to configure the endpoint publishing strategy for your Ingress Controller. 
On GCP, AWS, and Azure you can configure the following endpointPublishingStrategy fields: loadBalancer.scope loadBalancer.allowedSourceRanges If not set, the default value is based on infrastructure.config.openshift.io/cluster .status.platform : Amazon Web Services (AWS): LoadBalancerService (with External scope) Azure: LoadBalancerService (with External scope) Google Cloud Platform (GCP): LoadBalancerService (with External scope) For most platforms, the endpointPublishingStrategy value can be updated. On GCP, you can configure the following endpointPublishingStrategy fields: loadBalancer.scope loadbalancer.providerParameters.gcp.clientAccess For non-cloud environments, such as a bare-metal platform, use the NodePortService , HostNetwork , or Private fields to configure the endpoint publishing strategy for your Ingress Controller. If you do not set a value in one of these fields, the default value is based on binding ports specified in the .status.platform value in the IngressController CR. If you need to update the endpointPublishingStrategy value after your cluster is deployed, you can configure the following endpointPublishingStrategy fields: hostNetwork.protocol nodePort.protocol private.protocol defaultCertificate The defaultCertificate value is a reference to a secret that contains the default certificate that is served by the Ingress Controller. When Routes do not specify their own certificate, defaultCertificate is used. The secret must contain the following keys and data: * tls.crt : certificate file contents * tls.key : key file contents If not set, a wildcard certificate is automatically generated and used. The certificate is valid for the Ingress Controller domain and subdomains , and the generated certificate's CA is automatically integrated with the cluster's trust store. The in-use certificate, whether generated or user-specified, is automatically integrated with OpenShift Container Platform built-in OAuth server. namespaceSelector namespaceSelector is used to filter the set of namespaces serviced by the Ingress Controller. This is useful for implementing shards. routeSelector routeSelector is used to filter the set of Routes serviced by the Ingress Controller. This is useful for implementing shards. nodePlacement nodePlacement enables explicit control over the scheduling of the Ingress Controller. If not set, the defaults values are used. Note The nodePlacement parameter includes two parts, nodeSelector and tolerations . For example: nodePlacement: nodeSelector: matchLabels: kubernetes.io/os: linux tolerations: - effect: NoSchedule operator: Exists tlsSecurityProfile tlsSecurityProfile specifies settings for TLS connections for Ingress Controllers. If not set, the default value is based on the apiservers.config.openshift.io/cluster resource. When using the Old , Intermediate , and Modern profile types, the effective profile configuration is subject to change between releases. For example, given a specification to use the Intermediate profile deployed on release X.Y.Z , an upgrade to release X.Y.Z+1 may cause a new profile configuration to be applied to the Ingress Controller, resulting in a rollout. The minimum TLS version for Ingress Controllers is 1.1 , and the maximum TLS version is 1.3 . Note Ciphers and the minimum TLS version of the configured security profile are reflected in the TLSProfile status. Important The Ingress Operator converts the TLS 1.0 of an Old or Custom profile to 1.1 . 
clientTLS clientTLS authenticates client access to the cluster and services; as a result, mutual TLS authentication is enabled. If not set, then client TLS is not enabled. clientTLS has the required subfields, spec.clientTLS.clientCertificatePolicy and spec.clientTLS.ClientCA . The ClientCertificatePolicy subfield accepts one of the two values: Required or Optional . The ClientCA subfield specifies a config map that is in the openshift-config namespace. The config map should contain a CA certificate bundle. The AllowedSubjectPatterns is an optional value that specifies a list of regular expressions, which are matched against the distinguished name on a valid client certificate to filter requests. The regular expressions must use PCRE syntax. At least one pattern must match a client certificate's distinguished name; otherwise, the Ingress Controller rejects the certificate and denies the connection. If not specified, the Ingress Controller does not reject certificates based on the distinguished name. routeAdmission routeAdmission defines a policy for handling new route claims, such as allowing or denying claims across namespaces. namespaceOwnership describes how hostname claims across namespaces should be handled. The default is Strict . Strict : does not allow routes to claim the same hostname across namespaces. InterNamespaceAllowed : allows routes to claim different paths of the same hostname across namespaces. wildcardPolicy describes how routes with wildcard policies are handled by the Ingress Controller. WildcardsAllowed : Indicates routes with any wildcard policy are admitted by the Ingress Controller. WildcardsDisallowed : Indicates only routes with a wildcard policy of None are admitted by the Ingress Controller. Updating wildcardPolicy from WildcardsAllowed to WildcardsDisallowed causes admitted routes with a wildcard policy of Subdomain to stop working. These routes must be recreated to a wildcard policy of None to be readmitted by the Ingress Controller. WildcardsDisallowed is the default setting. IngressControllerLogging logging defines parameters for what is logged where. If this field is empty, operational logs are enabled but access logs are disabled. access describes how client requests are logged. If this field is empty, access logging is disabled. destination describes a destination for log messages. type is the type of destination for logs: Container specifies that logs should go to a sidecar container. The Ingress Operator configures the container, named logs , on the Ingress Controller pod and configures the Ingress Controller to write logs to the container. The expectation is that the administrator configures a custom logging solution that reads logs from this container. Using container logs means that logs may be dropped if the rate of logs exceeds the container runtime capacity or the custom logging solution capacity. Syslog specifies that logs are sent to a Syslog endpoint. The administrator must specify an endpoint that can receive Syslog messages. The expectation is that the administrator has configured a custom Syslog instance. container describes parameters for the Container logging destination type. Currently there are no parameters for container logging, so this field must be empty. syslog describes parameters for the Syslog logging destination type: address is the IP address of the syslog endpoint that receives log messages. port is the UDP port number of the syslog endpoint that receives log messages. maxLength is the maximum length of the syslog message. 
It must be between 480 and 4096 bytes. If this field is empty, the maximum length is set to the default value of 1024 bytes. facility specifies the syslog facility of log messages. If this field is empty, the facility is local1 . Otherwise, it must specify a valid syslog facility: kern , user , mail , daemon , auth , syslog , lpr , news , uucp , cron , auth2 , ftp , ntp , audit , alert , cron2 , local0 , local1 , local2 , local3 , local4 , local5 , local6 , or local7 . httpLogFormat specifies the format of the log message for an HTTP request. If this field is empty, log messages use the implementation's default HTTP log format. For HAProxy's default HTTP log format, see the HAProxy documentation . httpHeaders httpHeaders defines the policy for HTTP headers. By setting the forwardedHeaderPolicy for the IngressControllerHTTPHeaders , you specify when and how the Ingress Controller sets the Forwarded , X-Forwarded-For , X-Forwarded-Host , X-Forwarded-Port , X-Forwarded-Proto , and X-Forwarded-Proto-Version HTTP headers. By default, the policy is set to Append . Append specifies that the Ingress Controller appends the headers, preserving any existing headers. Replace specifies that the Ingress Controller sets the headers, removing any existing headers. IfNone specifies that the Ingress Controller sets the headers if they are not already set. Never specifies that the Ingress Controller never sets the headers, preserving any existing headers. By setting headerNameCaseAdjustments , you can specify case adjustments that can be applied to HTTP header names. Each adjustment is specified as an HTTP header name with the desired capitalization. For example, specifying X-Forwarded-For indicates that the x-forwarded-for HTTP header should be adjusted to have the specified capitalization. These adjustments are only applied to cleartext, edge-terminated, and re-encrypt routes, and only when using HTTP/1. For request headers, these adjustments are applied only for routes that have the haproxy.router.openshift.io/h1-adjust-case=true annotation. For response headers, these adjustments are applied to all HTTP responses. If this field is empty, no request headers are adjusted. httpCompression httpCompression defines the policy for HTTP traffic compression. mimeTypes defines a list of MIME types to which compression should be applied. For example, text/css; charset=utf-8 , text/html , text/* , image/svg+xml , application/octet-stream , X-custom/customsub , using the format pattern type/subtype[;attribute=value] . The types are: application, image, message, multipart, text, video, or a custom type prefaced by X- , such as X-custom/customsub . To see the full notation for MIME types and subtypes, see RFC1341 . httpErrorCodePages httpErrorCodePages specifies custom HTTP error code response pages. By default, an IngressController uses error pages built into the IngressController image. httpCaptureCookies httpCaptureCookies specifies HTTP cookies that you want to capture in access logs. If the httpCaptureCookies field is empty, the access logs do not capture the cookies. For any cookie that you want to capture, the following parameters must be in your IngressController configuration: name specifies the name of the cookie. maxLength specifies the maximum length of the cookie. matchType specifies if the field name of the cookie exactly matches the capture cookie setting or is a prefix of the capture cookie setting. The matchType field uses the Exact and Prefix parameters.
For example: httpCaptureCookies: - matchType: Exact maxLength: 128 name: MYCOOKIE httpCaptureHeaders httpCaptureHeaders specifies the HTTP headers that you want to capture in the access logs. If the httpCaptureHeaders field is empty, the access logs do not capture the headers. httpCaptureHeaders contains two lists of headers to capture in the access logs. The two lists of header fields are request and response . In both lists, the name field must specify the header name and the maxlength field must specify the maximum length of the header. For example: httpCaptureHeaders: request: - maxLength: 256 name: Connection - maxLength: 128 name: User-Agent response: - maxLength: 256 name: Content-Type - maxLength: 256 name: Content-Length tuningOptions tuningOptions specifies options for tuning the performance of Ingress Controller pods. clientFinTimeout specifies how long a connection is held open while waiting for the client response to the server closing the connection. The default timeout is 1s . clientTimeout specifies how long a connection is held open while waiting for a client response. The default timeout is 30s . headerBufferBytes specifies how much memory is reserved, in bytes, for Ingress Controller connection sessions. This value must be at least 16384 if HTTP/2 is enabled for the Ingress Controller. If not set, the default value is 32768 bytes. Setting this field is not recommended because headerBufferBytes values that are too small can break the Ingress Controller, and headerBufferBytes values that are too large could cause the Ingress Controller to use significantly more memory than necessary. headerBufferMaxRewriteBytes specifies how much memory should be reserved, in bytes, from headerBufferBytes for HTTP header rewriting and appending for Ingress Controller connection sessions. The minimum value for headerBufferMaxRewriteBytes is 4096 . headerBufferBytes must be greater than headerBufferMaxRewriteBytes for incoming HTTP requests. If not set, the default value is 8192 bytes. Setting this field is not recommended because headerBufferMaxRewriteBytes values that are too small can break the Ingress Controller and headerBufferMaxRewriteBytes values that are too large could cause the Ingress Controller to use significantly more memory than necessary. healthCheckInterval specifies how long the router waits between health checks. The default is 5s . serverFinTimeout specifies how long a connection is held open while waiting for the server response to the client that is closing the connection. The default timeout is 1s . serverTimeout specifies how long a connection is held open while waiting for a server response. The default timeout is 30s . threadCount specifies the number of threads to create per HAProxy process. Creating more threads allows each Ingress Controller pod to handle more connections, at the cost of more system resources being used. HAProxy supports up to 64 threads. If this field is empty, the Ingress Controller uses the default value of 4 threads. The default value can change in future releases. Setting this field is not recommended because increasing the number of HAProxy threads allows Ingress Controller pods to use more CPU time under load, which can prevent other pods from receiving the CPU resources they need. Reducing the number of threads can cause the Ingress Controller to perform poorly. tlsInspectDelay specifies how long the router can hold data to find a matching route.
Setting this value too short can cause the router to fall back to the default certificate for edge-terminated, re-encrypt, or passthrough routes, even when using a better matched certificate. The default inspect delay is 5s . tunnelTimeout specifies how long a tunnel connection, including websockets, remains open while the tunnel is idle. The default timeout is 1h . maxConnections specifies the maximum number of simultaneous connections that can be established per HAProxy process. Increasing this value allows each ingress controller pod to handle more connections at the cost of additional system resources. Permitted values are 0 , -1 , any value within the range of 2000 to 2000000 , or the field can be left empty. If this field is left empty or has the value 0 , the Ingress Controller will use the default value of 50000 . This value is subject to change in future releases. If the field has the value of -1 , then HAProxy will dynamically compute a maximum value based on the available ulimits in the running container. This process results in a large computed value that will incur significant memory usage compared to the current default value of 50000 . If the field has a value that is greater than the current operating system limit, the HAProxy process will not start. If you choose a discrete value and the router pod is migrated to a new node, it is possible the new node does not have an identical ulimit configured. In such cases, the pod fails to start. If you have nodes with different ulimits configured, and you choose a discrete value, it is recommended to use the value of -1 for this field so that the maximum number of connections is calculated at runtime. logEmptyRequests logEmptyRequests specifies whether connections for which no request is received are logged. These empty requests come from load balancer health probes or web browser speculative connections (preconnect) and logging these requests can be undesirable. However, these requests can be caused by network errors, in which case logging empty requests can be useful for diagnosing the errors. These requests can be caused by port scans, and logging empty requests can aid in detecting intrusion attempts. Allowed values for this field are Log and Ignore . The default value is Log . The LoggingPolicy type accepts one of two values: Log : Setting this value to Log indicates that an event should be logged. Ignore : Setting this value to Ignore sets the dontlognull option in the HAProxy configuration. HTTPEmptyRequestsPolicy HTTPEmptyRequestsPolicy describes how HTTP connections are handled if the connection times out before a request is received. Allowed values for this field are Respond and Ignore . The default value is Respond . The HTTPEmptyRequestsPolicy type accepts one of two values: Respond : If the field is set to Respond , the Ingress Controller sends an HTTP 400 or 408 response, logs the connection if access logging is enabled, and counts the connection in the appropriate metrics. Ignore : Setting this option to Ignore adds the http-ignore-probes parameter in the HAProxy configuration. If the field is set to Ignore , the Ingress Controller closes the connection without sending a response, logging the connection, or incrementing metrics. These connections come from load balancer health probes or web browser speculative connections (preconnect) and can be safely ignored. However, these requests can be caused by network errors, so setting this field to Ignore can impede detection and diagnosis of problems.
These requests can be caused by port scans, in which case logging empty requests can aid in detecting intrusion attempts. 7.3.1. Ingress Controller TLS security profiles TLS security profiles provide a way for servers to regulate which ciphers a connecting client can use when connecting to the server. 7.3.1.1. Understanding TLS security profiles You can use a TLS (Transport Layer Security) security profile to define which TLS ciphers are required by various OpenShift Container Platform components. The OpenShift Container Platform TLS security profiles are based on Mozilla recommended configurations . You can specify one of the following TLS security profiles for each component: Table 7.1. TLS security profiles Profile Description Old This profile is intended for use with legacy clients or libraries. The profile is based on the Old backward compatibility recommended configuration. The Old profile requires a minimum TLS version of 1.0. Note For the Ingress Controller, the minimum TLS version is converted from 1.0 to 1.1. Intermediate This profile is the recommended configuration for the majority of clients. It is the default TLS security profile for the Ingress Controller, kubelet, and control plane. The profile is based on the Intermediate compatibility recommended configuration. The Intermediate profile requires a minimum TLS version of 1.2. Modern This profile is intended for use with modern clients that have no need for backwards compatibility. This profile is based on the Modern compatibility recommended configuration. The Modern profile requires a minimum TLS version of 1.3. Custom This profile allows you to define the TLS version and ciphers to use. Warning Use caution when using a Custom profile, because invalid configurations can cause problems. Note When using one of the predefined profile types, the effective profile configuration is subject to change between releases. For example, given a specification to use the Intermediate profile deployed on release X.Y.Z, an upgrade to release X.Y.Z+1 might cause a new profile configuration to be applied, resulting in a rollout. 7.3.1.2. Configuring the TLS security profile for the Ingress Controller To configure a TLS security profile for an Ingress Controller, edit the IngressController custom resource (CR) to specify a predefined or custom TLS security profile. If a TLS security profile is not configured, the default value is based on the TLS security profile set for the API server. Sample IngressController CR that configures the Old TLS security profile apiVersion: operator.openshift.io/v1 kind: IngressController ... spec: tlsSecurityProfile: old: {} type: Old ... The TLS security profile defines the minimum TLS version and the TLS ciphers for TLS connections for Ingress Controllers. You can see the ciphers and the minimum TLS version of the configured TLS security profile in the IngressController custom resource (CR) under Status.Tls Profile and the configured TLS security profile under Spec.Tls Security Profile . For the Custom TLS security profile, the specific ciphers and minimum TLS version are listed under both parameters. Note The HAProxy Ingress Controller image supports TLS 1.3 and the Modern profile. The Ingress Operator also converts the TLS 1.0 of an Old or Custom profile to 1.1 . Prerequisites You have access to the cluster as a user with the cluster-admin role. 
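Because an unset tlsSecurityProfile falls back to the profile configured on the API server, it can be useful to check that value before you edit the Ingress Controller. The following command is a minimal sketch of such a check, assuming you have read access to the cluster-scoped apiservers.config.openshift.io/cluster resource; empty output means the API server is using its default, which corresponds to the Intermediate profile:

USD oc get apiserver.config.openshift.io/cluster -o jsonpath='{.spec.tlsSecurityProfile}'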
Procedure Edit the IngressController CR in the openshift-ingress-operator project to configure the TLS security profile: USD oc edit IngressController default -n openshift-ingress-operator Add the spec.tlsSecurityProfile field: Sample IngressController CR for a Custom profile apiVersion: operator.openshift.io/v1 kind: IngressController ... spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11 ... 1 Specify the TLS security profile type ( Old , Intermediate , or Custom ). The default is Intermediate . 2 Specify the appropriate field for the selected type: old: {} intermediate: {} custom: 3 For the custom type, specify a list of TLS ciphers and minimum accepted TLS version. Save the file to apply the changes. Verification Verify that the profile is set in the IngressController CR: USD oc describe IngressController default -n openshift-ingress-operator Example output Name: default Namespace: openshift-ingress-operator Labels: <none> Annotations: <none> API Version: operator.openshift.io/v1 Kind: IngressController ... Spec: ... Tls Security Profile: Custom: Ciphers: ECDHE-ECDSA-CHACHA20-POLY1305 ECDHE-RSA-CHACHA20-POLY1305 ECDHE-RSA-AES128-GCM-SHA256 ECDHE-ECDSA-AES128-GCM-SHA256 Min TLS Version: VersionTLS11 Type: Custom ... 7.3.1.3. Configuring mutual TLS authentication You can configure the Ingress Controller to enable mutual TLS (mTLS) authentication by setting a spec.clientTLS value. The clientTLS value configures the Ingress Controller to verify client certificates. This configuration includes setting a clientCA value, which is a reference to a config map. The config map contains the PEM-encoded CA certificate bundle that is used to verify a client's certificate. Optionally, you can also configure a list of certificate subject filters. If the clientCA value specifies an X509v3 certificate revocation list (CRL) distribution point, the Ingress Operator downloads and manages a CRL config map based on the HTTP URI X509v3 CRL Distribution Point specified in each provided certificate. The Ingress Controller uses this config map during mTLS/TLS negotiation. Requests that do not provide valid certificates are rejected. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have a PEM-encoded CA certificate bundle. If your CA bundle references a CRL distribution point, you must have also included the end-entity or leaf certificate to the client CA bundle. This certificate must have included an HTTP URI under CRL Distribution Points , as described in RFC 5280. For example: Issuer: C=US, O=Example Inc, CN=Example Global G2 TLS RSA SHA256 2020 CA1 Subject: SOME SIGNED CERT X509v3 CRL Distribution Points: Full Name: URI:http://crl.example.com/example.crl Procedure In the openshift-config namespace, create a config map from your CA bundle: USD oc create configmap \ router-ca-certs-default \ --from-file=ca-bundle.pem=client-ca.crt \ 1 -n openshift-config 1 The config map data key must be ca-bundle.pem , and the data value must be a CA certificate in PEM format. 
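Optionally, before moving on, you can confirm that the config map contains the expected ca-bundle.pem key. This is a hypothetical spot check rather than part of the documented procedure, and it assumes the config map name used in the previous step:

USD oc get configmap router-ca-certs-default -n openshift-config -o jsonpath='{.data.ca-bundle\.pem}' | head -n 3

The output should show the first lines of your PEM-encoded CA certificate bundle.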
Edit the IngressController resource in the openshift-ingress-operator project: USD oc edit IngressController default -n openshift-ingress-operator Add the spec.clientTLS field and subfields to configure mutual TLS: Sample IngressController CR for a clientTLS profile that specifies filtering patterns apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: clientTLS: clientCertificatePolicy: Required clientCA: name: router-ca-certs-default allowedSubjectPatterns: - "^/CN=example.com/ST=NC/C=US/O=Security/OU=OpenShiftUSD" Optional: You can get the Distinguished Name (DN) for allowedSubjectPatterns from a valid client certificate, for example by inspecting the certificate subject with openssl. 7.4. View the default Ingress Controller The Ingress Operator is a core feature of OpenShift Container Platform and is enabled out of the box. Every new OpenShift Container Platform installation has an ingresscontroller named default. It can be supplemented with additional Ingress Controllers. If the default ingresscontroller is deleted, the Ingress Operator will automatically recreate it within a minute. Procedure View the default Ingress Controller: USD oc describe --namespace=openshift-ingress-operator ingresscontroller/default 7.5. View Ingress Operator status You can view and inspect the status of your Ingress Operator. Procedure View your Ingress Operator status: USD oc describe clusteroperators/ingress 7.6. View Ingress Controller logs You can view your Ingress Controller logs. Procedure View your Ingress Controller logs: USD oc logs --namespace=openshift-ingress-operator deployments/ingress-operator -c <container_name> 7.7. View Ingress Controller status You can view the status of a particular Ingress Controller. Procedure View the status of an Ingress Controller: USD oc describe --namespace=openshift-ingress-operator ingresscontroller/<name> 7.8. Configuring the Ingress Controller 7.8.1. Setting a custom default certificate As an administrator, you can configure an Ingress Controller to use a custom certificate by creating a Secret resource and editing the IngressController custom resource (CR). Prerequisites You must have a certificate/key pair in PEM-encoded files, where the certificate is signed by a trusted certificate authority or by a private trusted certificate authority that you configured in a custom PKI. Your certificate meets the following requirements: The certificate is valid for the ingress domain. The certificate uses the subjectAltName extension to specify a wildcard domain, such as *.apps.ocp4.example.com . You must have an IngressController CR. You may use the default one: USD oc --namespace openshift-ingress-operator get ingresscontrollers Example output NAME AGE default 10m Note If you have intermediate certificates, they must be included in the tls.crt file of the secret containing a custom default certificate. Order matters when specifying a certificate; list your intermediate certificate(s) after any server certificate(s). Procedure The following assumes that the custom certificate and key pair are in the tls.crt and tls.key files in the current working directory. Substitute the actual path names for tls.crt and tls.key . You may also substitute another name for custom-certs-default when creating the Secret resource and referencing it in the IngressController CR. Note This action will cause the Ingress Controller to be redeployed, using a rolling deployment strategy.
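Before you create the secret, you might want to confirm that the certificate meets the wildcard and validity requirements listed in the prerequisites. The following openssl commands are a sketch of such a check; they assume OpenSSL 1.1.1 or later for the -ext option and the tls.crt file name used in this procedure:

USD openssl x509 -in tls.crt -noout -subject -enddate
USD openssl x509 -in tls.crt -noout -ext subjectAltName

The subjectAltName output should include a wildcard entry for your ingress domain, for example DNS:*.apps.ocp4.example.com.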
Create a Secret resource containing the custom certificate in the openshift-ingress namespace using the tls.crt and tls.key files. USD oc --namespace openshift-ingress create secret tls custom-certs-default --cert=tls.crt --key=tls.key Update the IngressController CR to reference the new certificate secret: USD oc patch --type=merge --namespace openshift-ingress-operator ingresscontrollers/default \ --patch '{"spec":{"defaultCertificate":{"name":"custom-certs-default"}}}' Verify the update was effective: USD echo Q |\ openssl s_client -connect console-openshift-console.apps.<domain>:443 -showcerts 2>/dev/null |\ openssl x509 -noout -subject -issuer -enddate where: <domain> Specifies the base domain name for your cluster. Example output subject=C = US, ST = NC, L = Raleigh, O = RH, OU = OCP4, CN = *.apps.example.com issuer=C = US, ST = NC, L = Raleigh, O = RH, OU = OCP4, CN = example.com notAfter=May 10 08:32:45 2022 GM Tip You can alternatively apply the following YAML to set a custom default certificate: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: defaultCertificate: name: custom-certs-default The certificate secret name should match the value used to update the CR. Once the IngressController CR has been modified, the Ingress Operator updates the Ingress Controller's deployment to use the custom certificate. 7.8.2. Removing a custom default certificate As an administrator, you can remove a custom certificate that you configured an Ingress Controller to use. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). You previously configured a custom default certificate for the Ingress Controller. Procedure To remove the custom certificate and restore the certificate that ships with OpenShift Container Platform, enter the following command: USD oc patch -n openshift-ingress-operator ingresscontrollers/default \ --type json -p USD'- op: remove\n path: /spec/defaultCertificate' There can be a delay while the cluster reconciles the new certificate configuration. Verification To confirm that the original cluster certificate is restored, enter the following command: USD echo Q | \ openssl s_client -connect console-openshift-console.apps.<domain>:443 -showcerts 2>/dev/null | \ openssl x509 -noout -subject -issuer -enddate where: <domain> Specifies the base domain name for your cluster. Example output subject=CN = *.apps.<domain> issuer=CN = ingress-operator@1620633373 notAfter=May 10 10:44:36 2023 GMT 7.8.3. Autoscaling an Ingress Controller You can automatically scale an Ingress Controller to dynamically meet routing performance or availability requirements, such as the requirement to increase throughput. The following procedure provides an example for scaling up the default Ingress Controller. Prerequisites You have the OpenShift CLI ( oc ) installed. You have access to an OpenShift Container Platform cluster as a user with the cluster-admin role. You installed the Custom Metrics Autoscaler Operator and an associated KEDA Controller. You can install the Operator by using OperatorHub on the web console. After you install the Operator, you can create an instance of KedaController . 
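For reference, the KedaController instance mentioned in the prerequisites is usually created through the web console after you install the Custom Metrics Autoscaler Operator, but it can also be applied as YAML. The following is a minimal sketch only; it assumes the Operator's default openshift-keda namespace and the conventional instance name keda, so verify the exact API group and fields against the Operator documentation:

apiVersion: keda.sh/v1alpha1
kind: KedaController
metadata:
  name: keda          # conventional instance name; assumption, confirm for your Operator version
  namespace: openshift-keda   # default install namespace; assumption
spec: {}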
Procedure Create a service account to authenticate with Thanos by running the following command: USD oc create -n openshift-ingress-operator serviceaccount thanos && oc describe -n openshift-ingress-operator serviceaccount thanos Example output Name: thanos Namespace: openshift-ingress-operator Labels: <none> Annotations: <none> Image pull secrets: thanos-dockercfg-kfvf2 Mountable secrets: thanos-dockercfg-kfvf2 Tokens: thanos-token-c422q Events: <none> Manually create the service account secret token with the following command: USD oc apply -f - <<EOF apiVersion: v1 kind: Secret metadata: name: thanos-token namespace: openshift-ingress-operator annotations: kubernetes.io/service-account.name: thanos type: kubernetes.io/service-account-token EOF Define a TriggerAuthentication object within the openshift-ingress-operator namespace by using the service account's token. Define the secret variable that contains the secret by running the following command: USD secret=USD(oc get secret -n openshift-ingress-operator | grep thanos-token | head -n 1 | awk '{ print USD1 }') Create the TriggerAuthentication object and pass the value of the secret variable to the TOKEN parameter: USD oc process TOKEN="USDsecret" -f - <<EOF | oc apply -n openshift-ingress-operator -f - apiVersion: template.openshift.io/v1 kind: Template parameters: - name: TOKEN objects: - apiVersion: keda.sh/v1alpha1 kind: TriggerAuthentication metadata: name: keda-trigger-auth-prometheus spec: secretTargetRef: - parameter: bearerToken name: \USD{TOKEN} key: token - parameter: ca name: \USD{TOKEN} key: ca.crt EOF Create and apply a role for reading metrics from Thanos: Create a new role, thanos-metrics-reader.yaml , that reads metrics from pods and nodes: thanos-metrics-reader.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: thanos-metrics-reader namespace: openshift-ingress-operator rules: - apiGroups: - "" resources: - pods - nodes verbs: - get - apiGroups: - metrics.k8s.io resources: - pods - nodes verbs: - get - list - watch - apiGroups: - "" resources: - namespaces verbs: - get Apply the new role by running the following command: USD oc apply -f thanos-metrics-reader.yaml Add the new role to the service account by entering the following commands: USD oc adm policy -n openshift-ingress-operator add-role-to-user thanos-metrics-reader -z thanos --role-namespace=openshift-ingress-operator USD oc adm policy -n openshift-ingress-operator add-cluster-role-to-user cluster-monitoring-view -z thanos Note The argument add-cluster-role-to-user is only required if you use cross-namespace queries. The following step uses a query from the kube-metrics namespace which requires this argument. 
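If the autoscaler later reports authentication or authorization errors when querying Thanos, a quick sanity check is to list the role and bindings that the previous steps created. These listing commands are a hypothetical troubleshooting aid rather than part of the documented procedure, and the exact binding names can vary:

USD oc get role,rolebinding -n openshift-ingress-operator | grep thanos
USD oc get clusterrolebinding | grep cluster-monitoring-view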
Create a new ScaledObject YAML file, ingress-autoscaler.yaml , that targets the default Ingress Controller deployment: Example ScaledObject definition apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: ingress-scaler namespace: openshift-ingress-operator spec: scaleTargetRef: 1 apiVersion: operator.openshift.io/v1 kind: IngressController name: default envSourceContainerName: ingress-operator minReplicaCount: 1 maxReplicaCount: 20 2 cooldownPeriod: 1 pollingInterval: 1 triggers: - type: prometheus metricType: AverageValue metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9091 3 namespace: openshift-ingress-operator 4 metricName: 'kube-node-role' threshold: '1' query: 'sum(kube_node_role{role="worker",service="kube-state-metrics"})' 5 authModes: "bearer" authenticationRef: name: keda-trigger-auth-prometheus 1 The custom resource that you are targeting. In this case, the Ingress Controller. 2 Optional: The maximum number of replicas. If you omit this field, the default maximum is set to 100 replicas. 3 The Thanos service endpoint in the openshift-monitoring namespace. 4 The Ingress Operator namespace. 5 This expression evaluates to the number of worker nodes present in the deployed cluster. Important If you are using cross-namespace queries, you must target port 9091 and not port 9092 in the serverAddress field. You also must have elevated privileges to read metrics from this port. Apply the custom resource by running the following command: USD oc apply -f ingress-autoscaler.yaml Verification Verify that the default Ingress Controller is scaled out to match the value returned by the kube-state-metrics query by running the following commands: Use the grep command to search the Ingress Controller YAML file for replicas: USD oc get -n openshift-ingress-operator ingresscontroller/default -o yaml | grep replicas: Example output replicas: 3 Get the pods in the openshift-ingress project: USD oc get pods -n openshift-ingress Example output NAME READY STATUS RESTARTS AGE router-default-7b5df44ff-l9pmm 2/2 Running 0 17h router-default-7b5df44ff-s5sl5 2/2 Running 0 3d22h router-default-7b5df44ff-wwsth 2/2 Running 0 66s Additional resources Installing the custom metrics autoscaler Enabling monitoring for user-defined projects Understanding custom metrics autoscaler trigger authentications Configuring the custom metrics autoscaler to use OpenShift Container Platform monitoring Understanding how to add custom metrics autoscalers 7.8.4. Scaling an Ingress Controller Manually scale an Ingress Controller to meet routing performance or availability requirements, such as the requirement to increase throughput. oc commands are used to scale the IngressController resource. The following procedure provides an example for scaling up the default IngressController . Note Scaling is not an immediate action, as it takes time to create the desired number of replicas. Procedure View the current number of available replicas for the default IngressController : USD oc get -n openshift-ingress-operator ingresscontrollers/default -o jsonpath='{USD.status.availableReplicas}' Example output 2 Scale the default IngressController to the desired number of replicas using the oc patch command.
The following example scales the default IngressController to 3 replicas: USD oc patch -n openshift-ingress-operator ingresscontroller/default --patch '{"spec":{"replicas": 3}}' --type=merge Example output ingresscontroller.operator.openshift.io/default patched Verify that the default IngressController scaled to the number of replicas that you specified: USD oc get -n openshift-ingress-operator ingresscontrollers/default -o jsonpath='{USD.status.availableReplicas}' Example output 3 Tip You can alternatively apply the following YAML to scale an Ingress Controller to three replicas: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 3 1 1 If you need a different amount of replicas, change the replicas value. 7.8.5. Configuring Ingress access logging You can configure the Ingress Controller to enable access logs. If you have clusters that do not receive much traffic, then you can log to a sidecar. If you have high traffic clusters, to avoid exceeding the capacity of the logging stack or to integrate with a logging infrastructure outside of OpenShift Container Platform, you can forward logs to a custom syslog endpoint. You can also specify the format for access logs. Container logging is useful to enable access logs on low-traffic clusters when there is no existing Syslog logging infrastructure, or for short-term use while diagnosing problems with the Ingress Controller. Syslog is needed for high-traffic clusters where access logs could exceed the OpenShift Logging stack's capacity, or for environments where any logging solution needs to integrate with an existing Syslog logging infrastructure. The Syslog use-cases can overlap. Prerequisites Log in as a user with cluster-admin privileges. Procedure Configure Ingress access logging to a sidecar. To configure Ingress access logging, you must specify a destination using spec.logging.access.destination . To specify logging to a sidecar container, you must specify Container spec.logging.access.destination.type . The following example is an Ingress Controller definition that logs to a Container destination: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 2 logging: access: destination: type: Container When you configure the Ingress Controller to log to a sidecar, the operator creates a container named logs inside the Ingress Controller Pod: USD oc -n openshift-ingress logs deployment.apps/router-default -c logs Example output 2020-05-11T19:11:50.135710+00:00 router-default-57dfc6cd95-bpmk6 router-default-57dfc6cd95-bpmk6 haproxy[108]: 174.19.21.82:39654 [11/May/2020:19:11:50.133] public be_http:hello-openshift:hello-openshift/pod:hello-openshift:hello-openshift:10.128.2.12:8080 0/0/1/0/1 200 142 - - --NI 1/1/0/0/0 0/0 "GET / HTTP/1.1" Configure Ingress access logging to a Syslog endpoint. To configure Ingress access logging, you must specify a destination using spec.logging.access.destination . To specify logging to a Syslog endpoint destination, you must specify Syslog for spec.logging.access.destination.type . If the destination type is Syslog , you must also specify a destination endpoint using spec.logging.access.destination.syslog.endpoint and you can specify a facility using spec.logging.access.destination.syslog.facility . 
The following example is an Ingress Controller definition that logs to a Syslog destination: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 2 logging: access: destination: type: Syslog syslog: address: 1.2.3.4 port: 10514 Note The syslog destination port must be UDP. Configure Ingress access logging with a specific log format. You can specify spec.logging.access.httpLogFormat to customize the log format. The following example is an Ingress Controller definition that logs to a syslog endpoint with IP address 1.2.3.4 and port 10514: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 2 logging: access: destination: type: Syslog syslog: address: 1.2.3.4 port: 10514 httpLogFormat: '%ci:%cp [%t] %ft %b/%s %B %bq %HM %HU %HV' Disable Ingress access logging. To disable Ingress access logging, leave spec.logging or spec.logging.access empty: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 2 logging: access: null 7.8.6. Setting Ingress Controller thread count A cluster administrator can set the thread count to increase the amount of incoming connections a cluster can handle. You can patch an existing Ingress Controller to increase the amount of threads. Prerequisites The following assumes that you already created an Ingress Controller. Procedure Update the Ingress Controller to increase the number of threads: USD oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge -p '{"spec":{"tuningOptions": {"threadCount": 8}}}' Note If you have a node that is capable of running large amounts of resources, you can configure spec.nodePlacement.nodeSelector with labels that match the capacity of the intended node, and configure spec.tuningOptions.threadCount to an appropriately high value. 7.8.7. Configuring an Ingress Controller to use an internal load balancer When creating an Ingress Controller on cloud platforms, the Ingress Controller is published by a public cloud load balancer by default. As an administrator, you can create an Ingress Controller that uses an internal cloud load balancer. Warning If your cloud provider is Microsoft Azure, you must have at least one public load balancer that points to your nodes. If you do not, all of your nodes will lose egress connectivity to the internet. Important If you want to change the scope for an IngressController , you can change the .spec.endpointPublishingStrategy.loadBalancer.scope parameter after the custom resource (CR) is created. Figure 7.1. Diagram of LoadBalancer The preceding graphic shows the following concepts pertaining to OpenShift Container Platform Ingress LoadBalancerService endpoint publishing strategy: You can load balance externally, using the cloud provider load balancer, or internally, using the OpenShift Ingress Controller Load Balancer. You can use the single IP address of the load balancer and more familiar ports, such as 8080 and 4200 as shown on the cluster depicted in the graphic. Traffic from the external load balancer is directed at the pods, and managed by the load balancer, as depicted in the instance of a down node. See the Kubernetes Services documentation for implementation details. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. 
Procedure Create an IngressController custom resource (CR) in a file named <name>-ingress-controller.yaml , such as in the following example: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: namespace: openshift-ingress-operator name: <name> 1 spec: domain: <domain> 2 endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: Internal 3 1 Replace <name> with a name for the IngressController object. 2 Specify the domain for the application published by the controller. 3 Specify a value of Internal to use an internal load balancer. Create the Ingress Controller defined in the step by running the following command: USD oc create -f <name>-ingress-controller.yaml 1 1 Replace <name> with the name of the IngressController object. Optional: Confirm that the Ingress Controller was created by running the following command: USD oc --all-namespaces=true get ingresscontrollers 7.8.8. Configuring global access for an Ingress Controller on GCP An Ingress Controller created on GCP with an internal load balancer generates an internal IP address for the service. A cluster administrator can specify the global access option, which enables clients in any region within the same VPC network and compute region as the load balancer, to reach the workloads running on your cluster. For more information, see the GCP documentation for global access . Prerequisites You deployed an OpenShift Container Platform cluster on GCP infrastructure. You configured an Ingress Controller to use an internal load balancer. You installed the OpenShift CLI ( oc ). Procedure Configure the Ingress Controller resource to allow global access. Note You can also create an Ingress Controller and specify the global access option. Configure the Ingress Controller resource: USD oc -n openshift-ingress-operator edit ingresscontroller/default Edit the YAML file: Sample clientAccess configuration to Global spec: endpointPublishingStrategy: loadBalancer: providerParameters: gcp: clientAccess: Global 1 type: GCP scope: Internal type: LoadBalancerService 1 Set gcp.clientAccess to Global . Save the file to apply the changes. Run the following command to verify that the service allows global access: USD oc -n openshift-ingress edit svc/router-default -o yaml The output shows that global access is enabled for GCP with the annotation, networking.gke.io/internal-load-balancer-allow-global-access . 7.8.9. Setting the Ingress Controller health check interval A cluster administrator can set the health check interval to define how long the router waits between two consecutive health checks. This value is applied globally as a default for all routes. The default value is 5 seconds. Prerequisites The following assumes that you already created an Ingress Controller. Procedure Update the Ingress Controller to change the interval between back end health checks: USD oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge -p '{"spec":{"tuningOptions": {"healthCheckInterval": "8s"}}}' Note To override the healthCheckInterval for a single route, use the route annotation router.openshift.io/haproxy.health.check.interval 7.8.10. Configuring the default Ingress Controller for your cluster to be internal You can configure the default Ingress Controller for your cluster to be internal by deleting and recreating it. Warning If your cloud provider is Microsoft Azure, you must have at least one public load balancer that points to your nodes. 
If you do not, all of your nodes will lose egress connectivity to the internet. Important If you want to change the scope for an IngressController , you can change the .spec.endpointPublishingStrategy.loadBalancer.scope parameter after the custom resource (CR) is created. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Configure the default Ingress Controller for your cluster to be internal by deleting and recreating it. USD oc replace --force --wait --filename - <<EOF apiVersion: operator.openshift.io/v1 kind: IngressController metadata: namespace: openshift-ingress-operator name: default spec: endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: Internal EOF 7.8.11. Configuring the route admission policy Administrators and application developers can run applications in multiple namespaces with the same domain name. This is for organizations where multiple teams develop microservices that are exposed on the same hostname. Warning Allowing claims across namespaces should only be enabled for clusters with trust between namespaces, otherwise a malicious user could take over a hostname. For this reason, the default admission policy disallows hostname claims across namespaces. Prerequisites Cluster administrator privileges. Procedure Edit the .spec.routeAdmission field of the ingresscontroller resource variable using the following command: USD oc -n openshift-ingress-operator patch ingresscontroller/default --patch '{"spec":{"routeAdmission":{"namespaceOwnership":"InterNamespaceAllowed"}}}' --type=merge Sample Ingress Controller configuration spec: routeAdmission: namespaceOwnership: InterNamespaceAllowed ... Tip You can alternatively apply the following YAML to configure the route admission policy: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: routeAdmission: namespaceOwnership: InterNamespaceAllowed 7.8.12. Using wildcard routes The HAProxy Ingress Controller has support for wildcard routes. The Ingress Operator uses wildcardPolicy to configure the ROUTER_ALLOW_WILDCARD_ROUTES environment variable of the Ingress Controller. The default behavior of the Ingress Controller is to admit routes with a wildcard policy of None , which is backwards compatible with existing IngressController resources. Procedure Configure the wildcard policy. Use the following command to edit the IngressController resource: USD oc edit IngressController Under spec , set the wildcardPolicy field to WildcardsDisallowed or WildcardsAllowed : spec: routeAdmission: wildcardPolicy: WildcardsDisallowed # or WildcardsAllowed 7.8.13. Using X-Forwarded headers You configure the HAProxy Ingress Controller to specify a policy for how to handle HTTP headers including Forwarded and X-Forwarded-For . The Ingress Operator uses the HTTPHeaders field to configure the ROUTER_SET_FORWARDED_HEADERS environment variable of the Ingress Controller. Procedure Configure the HTTPHeaders field for the Ingress Controller. 
Use the following command to edit the IngressController resource: USD oc edit IngressController Under spec , set the HTTPHeaders policy field to Append , Replace , IfNone , or Never : apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: httpHeaders: forwardedHeaderPolicy: Append Example use cases As a cluster administrator, you can: Configure an external proxy that injects the X-Forwarded-For header into each request before forwarding it to an Ingress Controller. To configure the Ingress Controller to pass the header through unmodified, you specify the never policy. The Ingress Controller then never sets the headers, and applications receive only the headers that the external proxy provides. Configure the Ingress Controller to pass the X-Forwarded-For header that your external proxy sets on external cluster requests through unmodified. To configure the Ingress Controller to set the X-Forwarded-For header on internal cluster requests, which do not go through the external proxy, specify the if-none policy. If an HTTP request already has the header set through the external proxy, then the Ingress Controller preserves it. If the header is absent because the request did not come through the proxy, then the Ingress Controller adds the header. As an application developer, you can: Configure an application-specific external proxy that injects the X-Forwarded-For header. To configure an Ingress Controller to pass the header through unmodified for an application's Route, without affecting the policy for other Routes, add an annotation haproxy.router.openshift.io/set-forwarded-headers: if-none or haproxy.router.openshift.io/set-forwarded-headers: never on the Route for the application. Note You can set the haproxy.router.openshift.io/set-forwarded-headers annotation on a per route basis, independent from the globally set value for the Ingress Controller. 7.8.14. Enabling HTTP/2 Ingress connectivity You can enable transparent end-to-end HTTP/2 connectivity in HAProxy. It allows application owners to make use of HTTP/2 protocol capabilities, including single connection, header compression, binary streams, and more. You can enable HTTP/2 connectivity for an individual Ingress Controller or for the entire cluster. To enable the use of HTTP/2 for the connection from the client to HAProxy, a route must specify a custom certificate. A route that uses the default certificate cannot use HTTP/2. This restriction is necessary to avoid problems from connection coalescing, where the client re-uses a connection for different routes that use the same certificate. The connection from HAProxy to the application pod can use HTTP/2 only for re-encrypt routes and not for edge-terminated or insecure routes. This restriction is because HAProxy uses Application-Level Protocol Negotiation (ALPN), which is a TLS extension, to negotiate the use of HTTP/2 with the back-end. The implication is that end-to-end HTTP/2 is possible with passthrough and re-encrypt and not with insecure or edge-terminated routes. Warning Using WebSockets with a re-encrypt route and with HTTP/2 enabled on an Ingress Controller requires WebSocket support over HTTP/2. WebSockets over HTTP/2 is a feature of HAProxy 2.4, which is unsupported in OpenShift Container Platform at this time. Important For non-passthrough routes, the Ingress Controller negotiates its connection to the application independently of the connection from the client. 
This means a client may connect to the Ingress Controller and negotiate HTTP/1.1, and the Ingress Controller may then connect to the application, negotiate HTTP/2, and forward the request from the client HTTP/1.1 connection using the HTTP/2 connection to the application. This poses a problem if the client subsequently tries to upgrade its connection from HTTP/1.1 to the WebSocket protocol, because the Ingress Controller cannot forward WebSocket to HTTP/2 and cannot upgrade its HTTP/2 connection to WebSocket. Consequently, if you have an application that is intended to accept WebSocket connections, it must not allow negotiating the HTTP/2 protocol or else clients will fail to upgrade to the WebSocket protocol. Procedure Enable HTTP/2 on a single Ingress Controller. To enable HTTP/2 on an Ingress Controller, enter the oc annotate command: USD oc -n openshift-ingress-operator annotate ingresscontrollers/<ingresscontroller_name> ingress.operator.openshift.io/default-enable-http2=true Replace <ingresscontroller_name> with the name of the Ingress Controller to annotate. Enable HTTP/2 on the entire cluster. To enable HTTP/2 for the entire cluster, enter the oc annotate command: USD oc annotate ingresses.config/cluster ingress.operator.openshift.io/default-enable-http2=true Tip You can alternatively apply the following YAML to add the annotation: apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster annotations: ingress.operator.openshift.io/default-enable-http2: "true" 7.8.15. Configuring the PROXY protocol for an Ingress Controller A cluster administrator can configure the PROXY protocol when an Ingress Controller uses either the HostNetwork or NodePortService endpoint publishing strategy types. The PROXY protocol enables the load balancer to preserve the original client addresses for connections that the Ingress Controller receives. The original client addresses are useful for logging, filtering, and injecting HTTP headers. In the default configuration, the connections that the Ingress Controller receives only contain the source address that is associated with the load balancer. This feature is not supported in cloud deployments. This restriction is because when OpenShift Container Platform runs in a cloud platform, and an IngressController specifies that a service load balancer should be used, the Ingress Operator configures the load balancer service and enables the PROXY protocol based on the platform requirement for preserving source addresses. Important You must configure both OpenShift Container Platform and the external load balancer to either use the PROXY protocol or to use TCP. Warning The PROXY protocol is unsupported for the default Ingress Controller with installer-provisioned clusters on non-cloud platforms that use a Keepalived Ingress VIP. Prerequisites You created an Ingress Controller. 
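Because the PROXY protocol applies only to the HostNetwork and NodePortService endpoint publishing strategy types, it can help to confirm which strategy your Ingress Controller uses before you edit it. The following command is a sketch of such a check, assuming the default Ingress Controller and the strategy type reported in the controller status:

USD oc -n openshift-ingress-operator get ingresscontroller/default -o jsonpath='{.status.endpointPublishingStrategy.type}'

If the command prints LoadBalancerService, this procedure does not apply to that controller.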
Procedure Edit the Ingress Controller resource: USD oc -n openshift-ingress-operator edit ingresscontroller/default Set the PROXY configuration: If your Ingress Controller uses the hostNetwork endpoint publishing strategy type, set the spec.endpointPublishingStrategy.hostNetwork.protocol subfield to PROXY : Sample hostNetwork configuration to PROXY spec: endpointPublishingStrategy: hostNetwork: protocol: PROXY type: HostNetwork If your Ingress Controller uses the NodePortService endpoint publishing strategy type, set the spec.endpointPublishingStrategy.nodePort.protocol subfield to PROXY : Sample nodePort configuration to PROXY spec: endpointPublishingStrategy: nodePort: protocol: PROXY type: NodePortService 7.8.16. Specifying an alternative cluster domain using the appsDomain option As a cluster administrator, you can specify an alternative to the default cluster domain for user-created routes by configuring the appsDomain field. The appsDomain field is an optional domain for OpenShift Container Platform to use instead of the default, which is specified in the domain field. If you specify an alternative domain, it overrides the default cluster domain for the purpose of determining the default host for a new route. For example, you can use the DNS domain for your company as the default domain for routes and ingresses for applications running on your cluster. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc command line interface. Procedure Configure the appsDomain field by specifying an alternative default domain for user-created routes. Edit the ingress cluster resource: USD oc edit ingresses.config/cluster -o yaml Edit the YAML file: Sample appsDomain configuration to test.example.com apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: domain: apps.example.com 1 appsDomain: <test.example.com> 2 1 Specifies the default domain. You cannot modify the default domain after installation. 2 Optional: Domain for OpenShift Container Platform infrastructure to use for application routes. Instead of the default prefix, apps , you can use an alternative prefix like test . Verify that an existing route contains the domain name specified in the appsDomain field by exposing the route and verifying the route domain change: Note Wait for the openshift-apiserver to finish rolling updates before exposing the route. Expose the route: USD oc expose service hello-openshift route.route.openshift.io/hello-openshift exposed Example output: USD oc get routes NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD hello-openshift hello_openshift-<my_project>.test.example.com hello-openshift 8080-tcp None 7.8.17. Converting HTTP header case HAProxy lowercases HTTP header names by default; for example, changing Host: xyz.com to host: xyz.com . If legacy applications are sensitive to the capitalization of HTTP header names, use the Ingress Controller spec.httpHeaders.headerNameCaseAdjustments API field for a solution to accommodate legacy applications until they can be fixed. Important OpenShift Container Platform includes HAProxy 2.2. If you want to update to this version of the web-based load balancer, ensure that you add the spec.httpHeaders.headerNameCaseAdjustments section to your cluster's configuration file. As a cluster administrator, you can convert the HTTP header case by entering the oc patch command or by setting the HeaderNameCaseAdjustments field in the Ingress Controller YAML file. Prerequisites You have installed the OpenShift CLI ( oc ).
You have access to the cluster as a user with the cluster-admin role. Procedure Capitalize an HTTP header by using the oc patch command. Change the HTTP header from host to Host by running the following command: USD oc -n openshift-ingress-operator patch ingresscontrollers/default --type=merge --patch='{"spec":{"httpHeaders":{"headerNameCaseAdjustments":["Host"]}}}' Create a Route resource YAML file so that the annotation can be applied to the application. Example of a route named my-application apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/h1-adjust-case: true 1 name: <application_name> namespace: <application_name> # ... 1 Set haproxy.router.openshift.io/h1-adjust-case so that the Ingress Controller can adjust the host request header as specified. Specify adjustments by configuring the HeaderNameCaseAdjustments field in the Ingress Controller YAML configuration file. The following example Ingress Controller YAML file adjusts the host header to Host for HTTP/1 requests to appropriately annotated routes: Example Ingress Controller YAML apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: httpHeaders: headerNameCaseAdjustments: - Host The following example route enables HTTP response header name case adjustments by using the haproxy.router.openshift.io/h1-adjust-case annotation: Example route YAML apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/h1-adjust-case: true 1 name: my-application namespace: my-application spec: to: kind: Service name: my-application 1 Set haproxy.router.openshift.io/h1-adjust-case to true. 7.8.18. Using router compression You can configure the HAProxy Ingress Controller to specify router compression globally for specific MIME types. You can use the mimeTypes variable to define the formats of MIME types to which compression is applied. The types are: application, image, message, multipart, text, video, or a custom type prefaced by "X-". To see the full notation for MIME types and subtypes, see RFC1341 . Note Memory allocated for compression can affect the max connections. Additionally, compression of large buffers can cause latency, as can heavy regex or long lists of regex. Not all MIME types benefit from compression, but HAProxy still uses resources to try to compress if instructed to. Generally, text formats, such as html, css, and js, benefit from compression, but formats that are already compressed, such as image, audio, and video, benefit little in exchange for the time and resources spent on compression. Procedure Configure the httpCompression field for the Ingress Controller. Use the following command to edit the IngressController resource: USD oc edit -n openshift-ingress-operator ingresscontrollers/default Under spec , set the httpCompression policy field to mimeTypes and specify a list of MIME types that should have compression applied: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: httpCompression: mimeTypes: - "text/html" - "text/css; charset=utf-8" - "application/json" ... 7.8.19. Exposing router metrics You can expose the HAProxy router metrics by default in Prometheus format on the default stats port, 1936. External metrics collection and aggregation systems, such as Prometheus, can access the HAProxy router metrics.
You can view the HAProxy router metrics in a browser in the HTML and comma separated values (CSV) format. Prerequisites You configured your firewall to access the default stats port, 1936. Procedure Get the router pod name by running the following command: USD oc get pods -n openshift-ingress Example output NAME READY STATUS RESTARTS AGE router-default-76bfffb66c-46qwp 1/1 Running 0 11h Get the router's username and password, which the router pod stores in the /var/lib/haproxy/conf/metrics-auth/statsUsername and /var/lib/haproxy/conf/metrics-auth/statsPassword files: Get the username by running the following command: USD oc rsh <router_pod_name> cat metrics-auth/statsUsername Get the password by running the following command: USD oc rsh <router_pod_name> cat metrics-auth/statsPassword Get the router IP and metrics certificates by running the following command: USD oc describe pod <router_pod> Get the raw statistics in Prometheus format by running the following command: USD curl -u <user>:<password> http://<router_IP>:<stats_port>/metrics Access the metrics securely by running the following command: USD curl -u user:password https://<router_IP>:<stats_port>/metrics -k Access the default stats port, 1936, by running the following command: USD curl -u <user>:<password> http://<router_IP>:<stats_port>/metrics Example 7.1. Example output ... # HELP haproxy_backend_connections_total Total number of connections. # TYPE haproxy_backend_connections_total gauge haproxy_backend_connections_total{backend="http",namespace="default",route="hello-route"} 0 haproxy_backend_connections_total{backend="http",namespace="default",route="hello-route-alt"} 0 haproxy_backend_connections_total{backend="http",namespace="default",route="hello-route01"} 0 ... # HELP haproxy_exporter_server_threshold Number of servers tracked and the current threshold value. # TYPE haproxy_exporter_server_threshold gauge haproxy_exporter_server_threshold{type="current"} 11 haproxy_exporter_server_threshold{type="limit"} 500 ... # HELP haproxy_frontend_bytes_in_total Current total of incoming bytes. # TYPE haproxy_frontend_bytes_in_total gauge haproxy_frontend_bytes_in_total{frontend="fe_no_sni"} 0 haproxy_frontend_bytes_in_total{frontend="fe_sni"} 0 haproxy_frontend_bytes_in_total{frontend="public"} 119070 ... # HELP haproxy_server_bytes_in_total Current total of incoming bytes. # TYPE haproxy_server_bytes_in_total gauge haproxy_server_bytes_in_total{namespace="",pod="",route="",server="fe_no_sni",service=""} 0 haproxy_server_bytes_in_total{namespace="",pod="",route="",server="fe_sni",service=""} 0 haproxy_server_bytes_in_total{namespace="default",pod="docker-registry-5-nk5fz",route="docker-registry",server="10.130.0.89:5000",service="docker-registry"} 0 haproxy_server_bytes_in_total{namespace="default",pod="hello-rc-vkjqx",route="hello-route",server="10.130.0.90:8080",service="hello-svc-1"} 0 ... Launch the stats window by entering the following URL in a browser: http://<user>:<password>@<router_IP>:<stats_port> Optional: Get the stats in CSV format by entering the following URL in a browser: http://<user>:<password>@<router_ip>:1936/metrics;csv 7.8.20. Customizing HAProxy error code response pages As a cluster administrator, you can specify a custom error code response page for either 503, 404, or both error pages. The HAProxy router serves a 503 error page when the application pod is not running or a 404 error page when the requested URL does not exist. 
For example, if you customize the 503 error code response page, then the page is served when the application pod is not running, and the default 404 error code HTTP response page is served by the HAProxy router for an incorrect route or a non-existing route. Custom error code response pages are specified in a config map and then patched to the Ingress Controller. The config map keys have two available file names as follows: error-page-503.http and error-page-404.http . Custom HTTP error code response pages must follow the HAProxy HTTP error page configuration guidelines . Here is an example of the default OpenShift Container Platform HAProxy router http 503 error code response page . You can use the default content as a template for creating your own custom page. By default, the HAProxy router serves only a 503 error page when the application is not running or when the route is incorrect or non-existent. This default behavior is the same as the behavior on OpenShift Container Platform 4.8 and earlier. If you do not provide a config map for the customization of an HTTP error code response, the router serves the default 404 or 503 error code response page. Note If you use the OpenShift Container Platform default 503 error code page as a template for your customizations, the headers in the file require an editor that can use CRLF line endings. Procedure Create a config map named my-custom-error-code-pages in the openshift-config namespace: USD oc -n openshift-config create configmap my-custom-error-code-pages \ --from-file=error-page-503.http \ --from-file=error-page-404.http Important If you do not specify the correct format for the custom error code response page, a router pod outage occurs. To resolve this outage, you must delete or correct the config map and delete the affected router pods so they can be recreated with the correct information. Patch the Ingress Controller to reference the my-custom-error-code-pages config map by name: USD oc patch -n openshift-ingress-operator ingresscontroller/default --patch '{"spec":{"httpErrorCodePages":{"name":"my-custom-error-code-pages"}}}' --type=merge The Ingress Operator copies the my-custom-error-code-pages config map from the openshift-config namespace to the openshift-ingress namespace. The Operator names the config map according to the pattern, <your_ingresscontroller_name>-errorpages , in the openshift-ingress namespace. Display the copy: USD oc get cm default-errorpages -n openshift-ingress Example output 1 The example config map name is default-errorpages because the default Ingress Controller custom resource (CR) was patched. Confirm that the config map containing the custom error response page mounts on the router volume where the config map key is the filename that has the custom HTTP error code response: For the 503 custom HTTP error code response: USD oc -n openshift-ingress rsh <router_pod> cat /var/lib/haproxy/conf/error_code_pages/error-page-503.http For the 404 custom HTTP error code response: USD oc -n openshift-ingress rsh <router_pod> cat /var/lib/haproxy/conf/error_code_pages/error-page-404.http Verification Verify your custom error code HTTP response: Create a test project and application: USD oc new-project test-ingress USD oc new-app django-psql-example For the 503 custom HTTP error code response: Stop all the pods for the application.
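One way to stop the application pods is to scale the application's workload down to zero replicas. The exact resource kind and name depend on what the template created (the django-psql-example template has historically produced a DeploymentConfig), so the following command is only an illustration:

$ oc -n test-ingress scale deploymentconfig/django-psql-example --replicas=0   # use deployment/<name> if the app is backed by a Deployment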
Run the following curl command or visit the route hostname in the browser: USD curl -vk <route_hostname> For the 404 custom HTTP error code response: Visit a non-existent route or an incorrect route. Run the following curl command or visit the route hostname in the browser: USD curl -vk <route_hostname> Check that the errorfile attribute is properly set in the haproxy.config file: USD oc -n openshift-ingress rsh <router> cat /var/lib/haproxy/conf/haproxy.config | grep errorfile 7.8.21. Setting the Ingress Controller maximum connections A cluster administrator can set the maximum number of simultaneous connections for OpenShift router deployments. You can patch an existing Ingress Controller to increase the maximum number of connections. Prerequisites The following assumes that you already created an Ingress Controller. Procedure Update the Ingress Controller to change the maximum number of connections for HAProxy: USD oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge -p '{"spec":{"tuningOptions": {"maxConnections": 7500}}}' Warning If you set the spec.tuningOptions.maxConnections value greater than the current operating system limit, the HAProxy process will not start. See the table in the "Ingress Controller configuration parameters" section for more information about this parameter. 7.9. Additional resources Configuring a custom PKI | [
"apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: domain: apps.openshiftdemos.com",
"nodePlacement: nodeSelector: matchLabels: kubernetes.io/os: linux tolerations: - effect: NoSchedule operator: Exists",
"httpCaptureCookies: - matchType: Exact maxLength: 128 name: MYCOOKIE",
"httpCaptureHeaders: request: - maxLength: 256 name: Connection - maxLength: 128 name: User-Agent response: - maxLength: 256 name: Content-Type - maxLength: 256 name: Content-Length",
"apiVersion: operator.openshift.io/v1 kind: IngressController spec: tlsSecurityProfile: old: {} type: Old",
"oc edit IngressController default -n openshift-ingress-operator",
"apiVersion: operator.openshift.io/v1 kind: IngressController spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11",
"oc describe IngressController default -n openshift-ingress-operator",
"Name: default Namespace: openshift-ingress-operator Labels: <none> Annotations: <none> API Version: operator.openshift.io/v1 Kind: IngressController Spec: Tls Security Profile: Custom: Ciphers: ECDHE-ECDSA-CHACHA20-POLY1305 ECDHE-RSA-CHACHA20-POLY1305 ECDHE-RSA-AES128-GCM-SHA256 ECDHE-ECDSA-AES128-GCM-SHA256 Min TLS Version: VersionTLS11 Type: Custom",
"Issuer: C=US, O=Example Inc, CN=Example Global G2 TLS RSA SHA256 2020 CA1 Subject: SOME SIGNED CERT X509v3 CRL Distribution Points: Full Name: URI:http://crl.example.com/example.crl",
"oc create configmap router-ca-certs-default --from-file=ca-bundle.pem=client-ca.crt \\ 1 -n openshift-config",
"oc edit IngressController default -n openshift-ingress-operator",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: clientTLS: clientCertificatePolicy: Required clientCA: name: router-ca-certs-default allowedSubjectPatterns: - \"^/CN=example.com/ST=NC/C=US/O=Security/OU=OpenShiftUSD\"",
"openssl x509 -in custom-cert.pem -noout -subject subject= /CN=example.com/ST=NC/C=US/O=Security/OU=OpenShift",
"oc describe --namespace=openshift-ingress-operator ingresscontroller/default",
"oc describe clusteroperators/ingress",
"oc logs --namespace=openshift-ingress-operator deployments/ingress-operator -c <container_name>",
"oc describe --namespace=openshift-ingress-operator ingresscontroller/<name>",
"oc --namespace openshift-ingress-operator get ingresscontrollers",
"NAME AGE default 10m",
"oc --namespace openshift-ingress create secret tls custom-certs-default --cert=tls.crt --key=tls.key",
"oc patch --type=merge --namespace openshift-ingress-operator ingresscontrollers/default --patch '{\"spec\":{\"defaultCertificate\":{\"name\":\"custom-certs-default\"}}}'",
"echo Q | openssl s_client -connect console-openshift-console.apps.<domain>:443 -showcerts 2>/dev/null | openssl x509 -noout -subject -issuer -enddate",
"subject=C = US, ST = NC, L = Raleigh, O = RH, OU = OCP4, CN = *.apps.example.com issuer=C = US, ST = NC, L = Raleigh, O = RH, OU = OCP4, CN = example.com notAfter=May 10 08:32:45 2022 GM",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: defaultCertificate: name: custom-certs-default",
"oc patch -n openshift-ingress-operator ingresscontrollers/default --type json -p USD'- op: remove\\n path: /spec/defaultCertificate'",
"echo Q | openssl s_client -connect console-openshift-console.apps.<domain>:443 -showcerts 2>/dev/null | openssl x509 -noout -subject -issuer -enddate",
"subject=CN = *.apps.<domain> issuer=CN = ingress-operator@1620633373 notAfter=May 10 10:44:36 2023 GMT",
"oc create -n openshift-ingress-operator serviceaccount thanos && oc describe -n openshift-ingress-operator serviceaccount thanos",
"Name: thanos Namespace: openshift-ingress-operator Labels: <none> Annotations: <none> Image pull secrets: thanos-dockercfg-kfvf2 Mountable secrets: thanos-dockercfg-kfvf2 Tokens: thanos-token-c422q Events: <none>",
"oc apply -f - <<EOF apiVersion: v1 kind: Secret metadata: name: thanos-token namespace: openshift-ingress-operator annotations: kubernetes.io/service-account.name: thanos type: kubernetes.io/service-account-token EOF",
"secret=USD(oc get secret -n openshift-ingress-operator | grep thanos-token | head -n 1 | awk '{ print USD1 }')",
"oc process TOKEN=\"USDsecret\" -f - <<EOF | oc apply -n openshift-ingress-operator -f - apiVersion: template.openshift.io/v1 kind: Template parameters: - name: TOKEN objects: - apiVersion: keda.sh/v1alpha1 kind: TriggerAuthentication metadata: name: keda-trigger-auth-prometheus spec: secretTargetRef: - parameter: bearerToken name: \\USD{TOKEN} key: token - parameter: ca name: \\USD{TOKEN} key: ca.crt EOF",
"apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: thanos-metrics-reader namespace: openshift-ingress-operator rules: - apiGroups: - \"\" resources: - pods - nodes verbs: - get - apiGroups: - metrics.k8s.io resources: - pods - nodes verbs: - get - list - watch - apiGroups: - \"\" resources: - namespaces verbs: - get",
"oc apply -f thanos-metrics-reader.yaml",
"oc adm policy -n openshift-ingress-operator add-role-to-user thanos-metrics-reader -z thanos --role-namespace=openshift-ingress-operator",
"oc adm policy -n openshift-ingress-operator add-cluster-role-to-user cluster-monitoring-view -z thanos",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: ingress-scaler namespace: openshift-ingress-operator spec: scaleTargetRef: 1 apiVersion: operator.openshift.io/v1 kind: IngressController name: default envSourceContainerName: ingress-operator minReplicaCount: 1 maxReplicaCount: 20 2 cooldownPeriod: 1 pollingInterval: 1 triggers: - type: prometheus metricType: AverageValue metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9091 3 namespace: openshift-ingress-operator 4 metricName: 'kube-node-role' threshold: '1' query: 'sum(kube_node_role{role=\"worker\",service=\"kube-state-metrics\"})' 5 authModes: \"bearer\" authenticationRef: name: keda-trigger-auth-prometheus",
"oc apply -f ingress-autoscaler.yaml",
"oc get -n openshift-ingress-operator ingresscontroller/default -o yaml | grep replicas:",
"replicas: 3",
"oc get pods -n openshift-ingress",
"NAME READY STATUS RESTARTS AGE router-default-7b5df44ff-l9pmm 2/2 Running 0 17h router-default-7b5df44ff-s5sl5 2/2 Running 0 3d22h router-default-7b5df44ff-wwsth 2/2 Running 0 66s",
"oc get -n openshift-ingress-operator ingresscontrollers/default -o jsonpath='{USD.status.availableReplicas}'",
"2",
"oc patch -n openshift-ingress-operator ingresscontroller/default --patch '{\"spec\":{\"replicas\": 3}}' --type=merge",
"ingresscontroller.operator.openshift.io/default patched",
"oc get -n openshift-ingress-operator ingresscontrollers/default -o jsonpath='{USD.status.availableReplicas}'",
"3",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 3 1",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 2 logging: access: destination: type: Container",
"oc -n openshift-ingress logs deployment.apps/router-default -c logs",
"2020-05-11T19:11:50.135710+00:00 router-default-57dfc6cd95-bpmk6 router-default-57dfc6cd95-bpmk6 haproxy[108]: 174.19.21.82:39654 [11/May/2020:19:11:50.133] public be_http:hello-openshift:hello-openshift/pod:hello-openshift:hello-openshift:10.128.2.12:8080 0/0/1/0/1 200 142 - - --NI 1/1/0/0/0 0/0 \"GET / HTTP/1.1\"",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 2 logging: access: destination: type: Syslog syslog: address: 1.2.3.4 port: 10514",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 2 logging: access: destination: type: Syslog syslog: address: 1.2.3.4 port: 10514 httpLogFormat: '%ci:%cp [%t] %ft %b/%s %B %bq %HM %HU %HV'",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 2 logging: access: null",
"oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge -p '{\"spec\":{\"tuningOptions\": {\"threadCount\": 8}}}'",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: namespace: openshift-ingress-operator name: <name> 1 spec: domain: <domain> 2 endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: Internal 3",
"oc create -f <name>-ingress-controller.yaml 1",
"oc --all-namespaces=true get ingresscontrollers",
"oc -n openshift-ingress-operator edit ingresscontroller/default",
"spec: endpointPublishingStrategy: loadBalancer: providerParameters: gcp: clientAccess: Global 1 type: GCP scope: Internal type: LoadBalancerService",
"oc -n openshift-ingress edit svc/router-default -o yaml",
"oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge -p '{\"spec\":{\"tuningOptions\": {\"healthCheckInterval\": \"8s\"}}}'",
"oc replace --force --wait --filename - <<EOF apiVersion: operator.openshift.io/v1 kind: IngressController metadata: namespace: openshift-ingress-operator name: default spec: endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: Internal EOF",
"oc -n openshift-ingress-operator patch ingresscontroller/default --patch '{\"spec\":{\"routeAdmission\":{\"namespaceOwnership\":\"InterNamespaceAllowed\"}}}' --type=merge",
"spec: routeAdmission: namespaceOwnership: InterNamespaceAllowed",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: routeAdmission: namespaceOwnership: InterNamespaceAllowed",
"oc edit IngressController",
"spec: routeAdmission: wildcardPolicy: WildcardsDisallowed # or WildcardsAllowed",
"oc edit IngressController",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: httpHeaders: forwardedHeaderPolicy: Append",
"oc -n openshift-ingress-operator annotate ingresscontrollers/<ingresscontroller_name> ingress.operator.openshift.io/default-enable-http2=true",
"oc annotate ingresses.config/cluster ingress.operator.openshift.io/default-enable-http2=true",
"apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster annotations: ingress.operator.openshift.io/default-enable-http2: \"true\"",
"oc -n openshift-ingress-operator edit ingresscontroller/default",
"spec: endpointPublishingStrategy: hostNetwork: protocol: PROXY type: HostNetwork",
"spec: endpointPublishingStrategy: nodePort: protocol: PROXY type: NodePortService",
"oc edit ingresses.config/cluster -o yaml",
"apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: domain: apps.example.com 1 appsDomain: <test.example.com> 2",
"oc expose service hello-openshift route.route.openshift.io/hello-openshift exposed",
"oc get routes NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD hello-openshift hello_openshift-<my_project>.test.example.com hello-openshift 8080-tcp None",
"oc -n openshift-ingress-operator patch ingresscontrollers/default --type=merge --patch='{\"spec\":{\"httpHeaders\":{\"headerNameCaseAdjustments\":[\"Host\"]}}}'",
"apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/h1-adjust-case: true 1 name: <application_name> namespace: <application_name>",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: httpHeaders: headerNameCaseAdjustments: - Host",
"apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/h1-adjust-case: true 1 name: my-application namespace: my-application spec: to: kind: Service name: my-application",
"oc edit -n openshift-ingress-operator ingresscontrollers/default",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: httpCompression: mimeTypes: - \"text/html\" - \"text/css; charset=utf-8\" - \"application/json\"",
"oc get pods -n openshift-ingress",
"NAME READY STATUS RESTARTS AGE router-default-76bfffb66c-46qwp 1/1 Running 0 11h",
"oc rsh <router_pod_name> cat metrics-auth/statsUsername",
"oc rsh <router_pod_name> cat metrics-auth/statsPassword",
"oc describe pod <router_pod>",
"curl -u <user>:<password> http://<router_IP>:<stats_port>/metrics",
"curl -u user:password https://<router_IP>:<stats_port>/metrics -k",
"curl -u <user>:<password> http://<router_IP>:<stats_port>/metrics",
"HELP haproxy_backend_connections_total Total number of connections. TYPE haproxy_backend_connections_total gauge haproxy_backend_connections_total{backend=\"http\",namespace=\"default\",route=\"hello-route\"} 0 haproxy_backend_connections_total{backend=\"http\",namespace=\"default\",route=\"hello-route-alt\"} 0 haproxy_backend_connections_total{backend=\"http\",namespace=\"default\",route=\"hello-route01\"} 0 HELP haproxy_exporter_server_threshold Number of servers tracked and the current threshold value. TYPE haproxy_exporter_server_threshold gauge haproxy_exporter_server_threshold{type=\"current\"} 11 haproxy_exporter_server_threshold{type=\"limit\"} 500 HELP haproxy_frontend_bytes_in_total Current total of incoming bytes. TYPE haproxy_frontend_bytes_in_total gauge haproxy_frontend_bytes_in_total{frontend=\"fe_no_sni\"} 0 haproxy_frontend_bytes_in_total{frontend=\"fe_sni\"} 0 haproxy_frontend_bytes_in_total{frontend=\"public\"} 119070 HELP haproxy_server_bytes_in_total Current total of incoming bytes. TYPE haproxy_server_bytes_in_total gauge haproxy_server_bytes_in_total{namespace=\"\",pod=\"\",route=\"\",server=\"fe_no_sni\",service=\"\"} 0 haproxy_server_bytes_in_total{namespace=\"\",pod=\"\",route=\"\",server=\"fe_sni\",service=\"\"} 0 haproxy_server_bytes_in_total{namespace=\"default\",pod=\"docker-registry-5-nk5fz\",route=\"docker-registry\",server=\"10.130.0.89:5000\",service=\"docker-registry\"} 0 haproxy_server_bytes_in_total{namespace=\"default\",pod=\"hello-rc-vkjqx\",route=\"hello-route\",server=\"10.130.0.90:8080\",service=\"hello-svc-1\"} 0",
"http://<user>:<password>@<router_IP>:<stats_port>",
"http://<user>:<password>@<router_ip>:1936/metrics;csv",
"oc -n openshift-config create configmap my-custom-error-code-pages --from-file=error-page-503.http --from-file=error-page-404.http",
"oc patch -n openshift-ingress-operator ingresscontroller/default --patch '{\"spec\":{\"httpErrorCodePages\":{\"name\":\"my-custom-error-code-pages\"}}}' --type=merge",
"oc get cm default-errorpages -n openshift-ingress",
"NAME DATA AGE default-errorpages 2 25s 1",
"oc -n openshift-ingress rsh <router_pod> cat /var/lib/haproxy/conf/error_code_pages/error-page-503.http",
"oc -n openshift-ingress rsh <router_pod> cat /var/lib/haproxy/conf/error_code_pages/error-page-404.http",
"oc new-project test-ingress",
"oc new-app django-psql-example",
"curl -vk <route_hostname>",
"curl -vk <route_hostname>",
"oc -n openshift-ingress rsh <router> cat /var/lib/haproxy/conf/haproxy.config | grep errorfile",
"oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge -p '{\"spec\":{\"tuningOptions\": {\"maxConnections\": 7500}}}'"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/networking/configuring-ingress |
Chapter 44. Networking | Chapter 44. Networking Cisco usNIC driver Cisco Unified Communication Manager (UCM) servers have an optional feature to provide a Cisco proprietary User Space Network Interface Controller (usNIC), which allows performing Remote Direct Memory Access (RDMA)-like operations for user-space applications. The libusnic_verbs driver, which is supported as a Technology Preview, makes it possible to use usNIC devices via standard InfiniBand RDMA programming based on the Verbs API. (BZ#916384) Cisco VIC kernel driver The Cisco VIC Infiniband kernel driver, which is supported as a Technology Preview, allows the use of Remote Directory Memory Access (RDMA)-like semantics on proprietary Cisco architectures. (BZ#916382) Trusted Network Connect Trusted Network Connect, supported as a Technology Preview, is used with existing network access control (NAC) solutions, such as TLS, 802.1X, or IPsec to integrate endpoint posture assessment; that is, collecting an endpoint's system information (such as operating system configuration settings, installed packages, and others, termed as integrity measurements). Trusted Network Connect is used to verify these measurements against network access policies before allowing the endpoint to access the network. (BZ#755087) SR-IOV functionality in the qlcnic driver Support for Single-Root I/O virtualization (SR-IOV) has been added to the qlcnic driver as a Technology Preview. Support for this functionality will be provided directly by QLogic, and customers are encouraged to provide feedback to QLogic and Red Hat. Other functionality in the qlcnic driver remains fully supported. (BZ#1259547) New packages: libnftnl , nftables As a Technology Preview, this update adds the nftables and libnftl packages. The nftables packages provide a packet-filtering tool, with numerous improvements in convenience, features, and performance over packet-filtering tools. It is the designated successor to the iptables , ip6tables , arptables , and ebtables utilities. The libnftnl packages provide a library for low-level interaction with nftables Netlink's API over the libmnl library. (BZ# 1332585 , BZ#1332581) | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.3_release_notes/technology_previews_networking |
Chapter 4. Testing XML rules | Chapter 4. Testing XML rules After you have created an XML rule, you should create a test rule to ensure that it works. 4.1. Creating a test rule Test rules are created using a process similar to the process for creating an XML rule, with the following differences: Test rules should be placed in a tests/ directory beneath the rule to be tested. Any data, such as test classes, should be placed in a data/ directory beneath the tests/ directory. Test rules should use the .windup.test.xml extension. These rules use the structure defined in the Test XML Rule Structure. In addition, it is recommended to create a test rule that follows the name of the rule it tests. For example, if a rule were created with a filename of proprietary-rule.mtr.xml , the test rule should be called proprietary-rule.windup.test.xml . 4.1.1. Test XML rule structure All test XML rules are defined as elements within ruletests which contain one or more rulesets . For more details, see the MTR XML rule schema . A ruletest is a group of one or more tests that targets a specific area of migration. This is the basic structure of the <ruletest> element. <ruletest id="<RULE_TOPIC>-test"> : Defines this as a unique MTR ruletest and gives it a unique ruletest id. <testDataPath> : Defines the path to any data, such as classes or files, used for testing. <sourceMode> : Indicates if the passed in data only contains source files. If an archive, such as an EAR, WAR, or JAR, is in use, then this should be set to false . Defaults to true . <rulePath> : The path to the rule to be tested. This should end in the name of the rule to test. <ruleset> : Rulesets containing the logic of the test cases. These are identical to the ones defined in Rulesets. 4.1.2. Test XML rule syntax In addition to the tags in the standard XML rule syntax, the following when conditions are commonly used for creating test rules: <not> <iterable-filter> <classification-exists> <hint-exists> In addition to the tags in the standard perform action syntax, the following perform conditions are commonly used as actions in test rules: <fail> 4.1.2.1. <not> syntax Summary The <not> element is the standard logical not operator, and is commonly used to perform a <fail> if the condition is not met. The following is an example of a test rule that fails if only a specific message exists at the end of the analysis. <ruletest xmlns="http://windup.jboss.org/schema/jboss-ruleset" id="proprietary-servlet-test" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://windup.jboss.org/schema/jboss-ruleset http://windup.jboss.org/schema/jboss-ruleset/windup-jboss-ruleset.xsd"> <testDataPath>data/</testDataPath> <rulePath>../proprietary-servlet.windup.xml</rulePath> <ruleset> <rules> <rule id="proprietary-servlet-01000-test"> <when> <!-- The `<not>` will perform a logical _not_ operator on the elements within. --> <not> <!-- The defined `<iterable-filter>` has a size of `1`. This rule will only match on a single instance of the defined hint. --> <iterable-filter size="1"> <hint-exists message="Replace the proprietary @ProprietaryServlet annotation with the Java EE 7 standard @WebServlet annotation*" /> </iterable-filter> </not> </when> <!-- This `<perform>` element is only executed if the `<when>` condition is false. This ensures that it only executes if there is not a single instance of the defined hint. --> <perform> <fail message="Hint for @ProprietaryServlet was not found!" 
/> </perform> </rule> </rules> </ruleset> </ruletest> The <not> element has no unique attributes or child elements. 4.1.2.2. <iterable-filter> syntax Summary The <iterable-filter> element counts the number of times a condition is verified. For additional information, see the IterableFilter class. The following is an example that looks for four instances of the specified message. <ruletest xmlns="http://windup.jboss.org/schema/jboss-ruleset" id="proprietary-servlet-test" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://windup.jboss.org/schema/jboss-ruleset http://windup.jboss.org/schema/jboss-ruleset/windup-jboss-ruleset.xsd"> <testDataPath>data/</testDataPath> <rulePath>../proprietary-servlet.mtr.xml</rulePath> <ruleset> <rules> <rule id="proprietary-servlet-03000-test"> <when> <!-- The `<not>` will perform a logical _not_ operator on the elements within. --> <not> <!-- The defined `<iterable-filter>` has a size of `4`. This rule will only match on four instances of the defined hint. --> <iterable-filter size="4"> <hint-exists message="Replace the proprietary @ProprietaryInitParam annotation with the Java EE 7 standard @WebInitParam annotation*" /> </iterable-filter> </not> </when> <!-- This `<perform>` element is only executed if the `<when>` condition is false. In this configuration, it only executes if there are not four instances of the defined hint. --> <perform> <fail message="Hint for @ProprietaryInitParam was not found!" /> </perform> </rule> </rules> </ruleset> </ruletest> The <iterable-filter> element has no unique child elements. <iterable-filter> element attributes Attribute Name Type Description size integer The number of times to be verified. 4.1.2.3. <classification-exists> syntax The <classification-exists> element determines if a specific classification title has been included in the analysis. For additional information, see the ClassificationExists class. Important When testing for a message that contains special characters, such as [ or ' , you must escape each special character with a backslash ( \ ) to correctly match. The following is an example that searches for a specific classification title. <ruletest xmlns="http://windup.jboss.org/schema/jboss-ruleset" id="proprietary-servlet-test" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://windup.jboss.org/schema/jboss-ruleset http://windup.jboss.org/schema/jboss-ruleset/windup-jboss-ruleset.xsd"> <testDataPath>data/</testDataPath> <rulePath>../weblogic.mtr.xml</rulePath> <ruleset> <rules> <rule id="weblogic-01000-test"> <when> <!-- The `<not>` will perform a logical _not_ operator on the elements within. --> <not> <!-- The defined `<classification-exists>` is attempting to match on the defined title. This classification would have been generated by a matching `<classification title="WebLogic scheduled job" .../>` rule. --> <classification-exists classification="WebLogic scheduled job" /> </not> </when> <!-- This `<perform>` element is only executed if the `<when>` condition is false. In this configuration, it only executes if there is not a matching classification. --> <perform> <fail message="Triggerable not found" /> </perform> </rule> </rules> </ruleset> </ruletest> The <classification-exists> has no unique child elements. <classification-exists> element attributes Attribute Name Type Description classification String The <classification> title to search for. in String An optional argument that restricts matching to files that contain the defined filename. 
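For example, to restrict the check to matches recorded against a particular file, you can combine the classification title with the optional in attribute. The filename in the following snippet is purely illustrative:

<classification-exists classification="WebLogic scheduled job" in="WebLogicScheduledJob.java" />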
4.1.2.4. <hint-exists> syntax The <hint-exists> element determines if a specific hint has been included in the analysis. It searches for any instances of the defined message, and is typically used to search for the beginning or a specific class inside of a <message> element. For additional information, see the HintExists class. Important When testing for a message that contains special characters, such as [ or ' , you must escape each special character with a backslash ( \ ) to correctly match. The following is an example that searches for a specific hint. <ruletest xmlns="http://windup.jboss.org/schema/jboss-ruleset" id="proprietary-servlet-test" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://windup.jboss.org/schema/jboss-ruleset http://windup.jboss.org/schema/jboss-ruleset/windup-jboss-ruleset.xsd"> <testDataPath>data/</testDataPath> <rulePath>../weblogic.windup.xml</rulePath> <ruleset> <rules> <rule id="weblogic-eap7-05000-test"> <when> <!-- The `<not>` will perform a logical _not_ operator on the elements within. --> <not> <!-- The defined `<hint-exists>` is attempting to match on the defined message. This message would have been generated by a matching `<message>` element on the `<hint>` condition. --> <hint-exists message="Replace with the Java EE standard method .*javax\.transaction\.TransactionManager\.resume\(Transaction tx\).*" /> </not> </when> <!-- This `<perform>` element is only executed if the `<when>` condition is false. In this configuration, it only executes if there is not a matching hint. --> <perform> <fail message="Note to replace with standard TransactionManager.resume is missing!" /> </perform> </rule> </rules> </ruleset> </ruletest> The <hint-exists> element has no unique child elements. <hint-exists> element attributes Attribute Name Type Description message String The <hint> message to search for. in String An optional argument that restricts matching to InLineHintModels that reference the given filename. 4.1.2.5. <fail> syntax The <fail> element reports the execution as a failure and displays the associated message. It is commonly used in conjunction with the <not> condition to display a message only if the conditions are not met. The <fail> element has no unique child elements. <fail> element attributes Attribute Name Type Description message String The message to be displayed. 4.2. Manually testing an XML rule You can run an XML rule against your application file to test it: USD <MTR_HOME>/mta-cli [--sourceMode] --input <INPUT_ARCHIVE_OR_FOLDER> --output <OUTPUT_REPORT_DIRECTORY> --target <TARGET_TECHNOLOGY> --packages <PACKAGE_1> <PACKAGE_2> <PACKAGE_N> You should see the following result: More examples of how to run MTR are located in the Migration Toolkit for Runtimes CLI Guide . 4.3. Testing the rules by using JUnit Once a test rule has been created, it can be analyzed as part of a JUnit test to confirm that the rule meets all criteria for execution. The WindupRulesMultipleTests class in the MTR rules repository is designed to test multiple rules simultaneously, and provides feedback on any missing requirements. Prerequisites Fork and clone the MTR XML rules. The location of this repository will be referred to as <RULESETS_REPO>. Create a test XML rule. Creating the JUnit test configuration The following instructions detail creating a JUnit test using Eclipse. When using a different IDE, it is recommended to consult your IDE's documentation for instructions on creating a JUnit test. 
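If you prefer not to use an IDE, the same test class can usually be run from the command line with Maven from the root of the windup-rulesets repository. The rule name below is a placeholder, and the exact invocation may vary between project versions:

$ cd </path/to/RULESETS_REPO>
$ mvn test -Dtest=WindupRulesMultipleTests -DrunTestsMatching=<RULE_NAME>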
Import the MTR rulesets repository into your IDE. Copy the custom rules, along with the corresponding tests and data, into </path/to/RULESETS_REPO>/rules-reviewed/<RULE_NAME>/ . This should create the following directory structure. Directory structure Select Run from the top menu bar. Select Run Configurations... from the drop down that appears. Right-click JUnit from the options on the left side and select New . Enter the following: Name : A name for your JUnit test, such as WindupRulesMultipleTests . Project : Ensure this is set to windup-rulesets . Test class : Set this to org.jboss.windup.rules.tests.WindupRulesMultipleTests . Select the Arguments tab, and add the -DrunTestsMatching=<RULE_NAME> VM argument. For instance, if your rule name was community-rules , then you would add -DrunTestsMatching=community-rules as seen in the following image. Click Run in the bottom right corner to begin the test. When the execution completes, the results are available for analysis. If all tests passed, then the test rule is correctly formatted. If all tests did not pass, it is recommended to address each of the issues raised in the test failures. 4.4. About validation reports Validation reports provide details about test rules and failures and contain the following sections: Summary This section contains the total number of tests run and reports the number of errors and failures. It displays the total success rate and the time taken, in seconds, for the report to be generated. Package List This section contains the number of tests executed for each package and reports the number of errors and failures. It displays the success rate and the time taken, in seconds, for each package to be analyzed. A single package named org.jboss.windup.rules.tests is displayed unless additional test cases have been defined. Test Cases This section describes the test cases. Each failure includes a Details section that can be expanded to show the stack trace for the assertion, including a human-readable line indicating the source of the error. 4.4.1. Creating a validation report You can create a validation report for your custom rules. Prerequisites You must fork and clone the MTR XML rules. You must have one or more test XML rules to validate. Procedure Navigate to the local windup-rulesets repository. Create a directory for your custom rules and tests: windup-rulesets/rules-reviewed/myTests . Copy your custom rules and tests to the windup-rulesets/rules-reviewed/<myTests> directory. Run the following command from the root directory of the windup-rulesets repository: 1 Specify the directory containing your custom rules and tests. If you omit the -DrunTestsMatching argument, the validation report will include all the tests and take much longer to generate. 2 Specify your report name. The validation report is created in the windup-rulesets/target/site/ repository. 4.4.2. Validation report error messages Validation reports contain errors encountered while running the rules and tests. The following table contains error messages and how to resolve the errors. Table 4.1. Validation report error messages Error message Description Resolution No test file matching rule This error occurs when a rule file exists without a corresponding test file. Create a test file for the existing rule. Test rule Ids <RULE_NAME> not found! This error is thrown when a rule exists without a corresponding ruletest. Create a test for the existing rule. 
XML parse fail on file <FILE_NAME> The syntax in the XML file is invalid and cannot be parsed successfully by the rule validator. Correct the invalid syntax. Test file path from <testDataPath> tag has not been found. Expected path to test file is: <RULE_DATA_PATH> No files are found in the path defined in the <testDataPath> tag within the test rule. Create the path defined in the <testDataPath> tag, and ensure all necessary data files are located within this directory. The rule with id="<RULE_ID>" has not been executed. The rule with the provided id has not been executed during this validation. Ensure that a test data file exists that matches the conditions defined in the specified rule. | [
"<ruletest xmlns=\"http://windup.jboss.org/schema/jboss-ruleset\" id=\"proprietary-servlet-test\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://windup.jboss.org/schema/jboss-ruleset http://windup.jboss.org/schema/jboss-ruleset/windup-jboss-ruleset.xsd\"> <testDataPath>data/</testDataPath> <rulePath>../proprietary-servlet.windup.xml</rulePath> <ruleset> <rules> <rule id=\"proprietary-servlet-01000-test\"> <when> <!-- The `<not>` will perform a logical _not_ operator on the elements within. --> <not> <!-- The defined `<iterable-filter>` has a size of `1`. This rule will only match on a single instance of the defined hint. --> <iterable-filter size=\"1\"> <hint-exists message=\"Replace the proprietary @ProprietaryServlet annotation with the Java EE 7 standard @WebServlet annotation*\" /> </iterable-filter> </not> </when> <!-- This `<perform>` element is only executed if the previous `<when>` condition is false. This ensures that it only executes if there is not a single instance of the defined hint. --> <perform> <fail message=\"Hint for @ProprietaryServlet was not found!\" /> </perform> </rule> </rules> </ruleset> </ruletest>",
"<ruletest xmlns=\"http://windup.jboss.org/schema/jboss-ruleset\" id=\"proprietary-servlet-test\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://windup.jboss.org/schema/jboss-ruleset http://windup.jboss.org/schema/jboss-ruleset/windup-jboss-ruleset.xsd\"> <testDataPath>data/</testDataPath> <rulePath>../proprietary-servlet.mtr.xml</rulePath> <ruleset> <rules> <rule id=\"proprietary-servlet-03000-test\"> <when> <!-- The `<not>` will perform a logical _not_ operator on the elements within. --> <not> <!-- The defined `<iterable-filter>` has a size of `4`. This rule will only match on four instances of the defined hint. --> <iterable-filter size=\"4\"> <hint-exists message=\"Replace the proprietary @ProprietaryInitParam annotation with the Java EE 7 standard @WebInitParam annotation*\" /> </iterable-filter> </not> </when> <!-- This `<perform>` element is only executed if the previous `<when>` condition is false. In this configuration, it only executes if there are not four instances of the defined hint. --> <perform> <fail message=\"Hint for @ProprietaryInitParam was not found!\" /> </perform> </rule> </rules> </ruleset> </ruletest>",
"<ruletest xmlns=\"http://windup.jboss.org/schema/jboss-ruleset\" id=\"proprietary-servlet-test\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://windup.jboss.org/schema/jboss-ruleset http://windup.jboss.org/schema/jboss-ruleset/windup-jboss-ruleset.xsd\"> <testDataPath>data/</testDataPath> <rulePath>../weblogic.mtr.xml</rulePath> <ruleset> <rules> <rule id=\"weblogic-01000-test\"> <when> <!-- The `<not>` will perform a logical _not_ operator on the elements within. --> <not> <!-- The defined `<classification-exists>` is attempting to match on the defined title. This classification would have been generated by a matching `<classification title=\"WebLogic scheduled job\" .../>` rule. --> <classification-exists classification=\"WebLogic scheduled job\" /> </not> </when> <!-- This `<perform>` element is only executed if the previous `<when>` condition is false. In this configuration, it only executes if there is not a matching classification. --> <perform> <fail message=\"Triggerable not found\" /> </perform> </rule> </rules> </ruleset> </ruletest>",
"<ruletest xmlns=\"http://windup.jboss.org/schema/jboss-ruleset\" id=\"proprietary-servlet-test\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://windup.jboss.org/schema/jboss-ruleset http://windup.jboss.org/schema/jboss-ruleset/windup-jboss-ruleset.xsd\"> <testDataPath>data/</testDataPath> <rulePath>../weblogic.windup.xml</rulePath> <ruleset> <rules> <rule id=\"weblogic-eap7-05000-test\"> <when> <!-- The `<not>` will perform a logical _not_ operator on the elements within. --> <not> <!-- The defined `<hint-exists>` is attempting to match on the defined message. This message would have been generated by a matching `<message>` element on the `<hint>` condition. --> <hint-exists message=\"Replace with the Java EE standard method .*javax\\.transaction\\.TransactionManager\\.resume\\(Transaction tx\\).*\" /> </not> </when> <!-- This `<perform>` element is only executed if the previous `<when>` condition is false. In this configuration, it only executes if there is not a matching hint. --> <perform> <fail message=\"Note to replace with standard TransactionManager.resume is missing!\" /> </perform> </rule> </rules> </ruleset> </ruletest>",
"<MTR_HOME>/mta-cli [--sourceMode] --input <INPUT_ARCHIVE_OR_FOLDER> --output <OUTPUT_REPORT_DIRECTORY> --target <TARGET_TECHNOLOGY> --packages <PACKAGE_1> <PACKAGE_2> <PACKAGE_N>",
"Report created: <OUTPUT_REPORT_DIRECTORY>/index.html Access it at this URL: file:///<OUTPUT_REPORT_DIRECTORY>/index.html",
"├── *rules-reviewed/* _(Root directory of the rules found within the project)_ │ ├── *<RULE_NAME>/* _(Directory to contain the newly developed rule and tests)_ │ │ ├── *<RULE_NAME>.windup.xml* _(Custom rule)_ │ │ ├── *tests/* _(Directory that contains any test rules and data)_ │ │ │ ├── *<RULE_NAME>.windup.test.xml* _(Test rule)_ │ │ │ └── *data/* _(Optional directory to contain test rule data)_",
"mvn -Dtest=WindupRulesMultipleTests -DrunTestsMatching=<myTests> clean <myReport>:report 1 2"
] | https://docs.redhat.com/en/documentation/migration_toolkit_for_runtimes/1.2/html/rules_development_guide/testing-rules_rules-development-guide-mtr |
Chapter 1. Device Mapper Multipathing | Chapter 1. Device Mapper Multipathing Device mapper multipathing (DM-Multipath) allows you to configure multiple I/O paths between server nodes and storage arrays into a single device. These I/O paths are physical SAN connections that can include separate cables, switches, and controllers. Multipathing aggregates the I/O paths, creating a new device that consists of the aggregated paths. This chapter provides a summary of the features of DM-Multipath that are new for the initial release of Red Hat Enterprise Linux 6. Following that, this chapter provides a high-level overview of DM Multipath and its components, as well as an overview of DM-Multipath setup. 1.1. New and Changed Features This section lists new and changed features of DM-Multipath that are included with the initial and subsequent releases of Red Hat Enterprise Linux 6. 1.1.1. New and Changed Features for Red Hat Enterprise Linux 6.0 Red Hat Enterprise Linux 6.0 includes the following documentation and feature updates and changes. For the Red Hat Enterprise Linux 6 release, the initial DM-Multipath setup procedure for a basic failover configuration has changed. You can now create the DM-Multipath configuration file and enable DM-Multipath with the mpathconf configuration utility, which can also load the device-mapper-multipath module, start the multipathd daemon, and set chkconfig to start the daemon automatically on reboot. For information on the new setup procedure, see Section 3.1, "Setting Up DM-Multipath" . For more information on the mpathconf command, see the mpathconf (5) man page. The Red Hat Enterprise Linux 6 release provides a new mode for setting up multipath devices, which you set with the find_multipaths configuration file parameter. In releases of Red Hat Enterprise Linux, multipath always tried to create a multipath device for every path that was not explicitly blacklisted. In Red Hat Enterprise Linux 6, however, if the find_multipaths configuration parameter is set to yes , then multipath will create a device only if one of three conditions are met: There are at least two non-blacklisted paths with the same WWID. The user manually forces the device creation, by specifying a device with the multipath command. A path has the same WWID as a multipath device that was previously created (even if that multipath device does not currently exist). For instructions on the procedure to follow if you have previously created multipath devices when the find_multipaths parameter was not set, see Section 4.2, "Configuration File Blacklist" . This feature should allow most users to have multipath automatically choose the correct paths to make into multipath devices, without having to edit the blacklist. For information on the find_multipaths configuration parameter, see Section 4.3, "Configuration File Defaults" . The Red Hat Enterprise Linux 6 release provides two new path selector algorithms which determine which path to use for the I/O operation: queue-length and service-time . The queue-length algorithm looks at the amount of outstanding I/O to the paths to determine which path to use . The service-time algorithm looks at the amount of outstanding I/O and the relative throughput of the paths to determine which path to use . For more information on the path selector parameters in the configuration file, see Chapter 4, The DM-Multipath Configuration File . In the Red Hat Enterprise Linux 6 release, priority functions are no longer callout programs. 
Instead they are dynamic shared objects like the path checker functions. The prio_callout parameter has been replaced by the prio parameter. For descriptions of the supported prio functions, see Chapter 4, The DM-Multipath Configuration File . In Red Hat Enterprise Linux 6, the multipath command output has changed format. For information on the multipath command output, see Section 5.7, "Multipath Command Output" . In the Red Hat Enterprise Linux 6 release, the location of the multipath bindings file is /etc/multipath/bindings . The Red Hat Enterprise Linux 6 release provides three new defaults parameters in the multipath.conf file: checker_timeout , fast_io_fail_tmo , and dev_loss_tmo . For information on these parameters, see Chapter 4, The DM-Multipath Configuration File . When the user_friendly_names option in the multipath configuration file is set to yes , the name of a multipath device is of the form mpath n . For the Red Hat Enterprise Linux 6 release, n is an alphabetic character, so that the name of a multipath device might be mpatha or mpathb . In releases, n was an integer. 1.1.2. New and Changed Features for Red Hat Enterprise Linux 6.1 Red Hat Enterprise Linux 6.1 includes the following documentation and feature updates and changes. This document now contains a new chapter, Section 5.3, "Moving root File Systems from a Single Path Device to a Multipath Device" . This document now contains a new chapter, Section 5.4, "Moving swap File Systems from a Single Path Device to a Multipath Device" . 1.1.3. New and Changed Features for Red Hat Enterprise Linux 6.2 Red Hat Enterprise Linux 6.2 includes the following documentation and feature updates and changes. The Red Hat Enterprise Linux 6.2 release provides a new multipath.conf parameter, rr_min_io_rq , in the defaults , devices , and multipaths sections of the multipath.conf file. The rr_min_io parameter no longer has an effect in Red Hat Enterprise Linux 6.2. For information on the rr_min_io_rq parameter, see Chapter 4, The DM-Multipath Configuration File . The dev_loss_tmo configuration file parameter can now be set to infinity, which sets the actual sysfs variable to 2147483647 seconds, or 68 years. For information on this parameter, see Chapter 4, The DM-Multipath Configuration File . The procedure described in Section 5.3, "Moving root File Systems from a Single Path Device to a Multipath Device" has been updated. 1.1.4. New and Changed Features for Red Hat Enterprise Linux 6.3 Red Hat Enterprise Linux 6.3 includes the following documentation and feature updates and changes. The default value of the queue_without_daemon configuration file parameter is now set to no by default. The default value of the max_fds configuration file parameter is now set to max by default. The user_friendly_names configuration file parameter is now configurable in the defaults , multipaths , and devices sections of the multipath.conf configuration file. The defaults section of the multipath.conf configuration file supports a new hwtable_regex_match parameter. For information on the configuration file parameters, see Chapter 4, The DM-Multipath Configuration File . 1.1.5. New and Changed Features for Red Hat Enterprise Linux 6.4 Red Hat Enterprise Linux 6.4 includes the following documentation and feature updates and changes. The defaults section and the devices section of the multipath.conf configuration file support a new retain_attached_hardware_handler parameter and a new detect_prio parameter. 
For information on the configuration file parameters, see Chapter 4, The DM-Multipath Configuration File . This document contains a new section, Section 3.4, "Setting Up Multipathing in the initramfs File System" . 1.1.6. New and Changed Features for Red Hat Enterprise Linux 6.5 Red Hat Enterprise Linux 6.5 includes the following documentation and feature updates and changes. The defaults section of the multipath.conf configuration file supports a new replace_wwid_whitespace and a new reload_readwrite parameter. The defaults section of the multipath.conf file is documented in Table 4.1, "Multipath Configuration Defaults" . 1.1.7. New and Changed Features for Red Hat Enterprise Linux 6.6 Red Hat Enterprise Linux 6.6 includes the following documentation and feature updates and changes. The defaults section of the multipath.conf configuration file supports a new force_sync parameter. The defaults section of the multipath.conf file is documented in Table 4.1, "Multipath Configuration Defaults" . The multipath supports a -w and a -W , as described in Table 4.1, "Multipath Configuration Defaults" . 1.1.8. New and Changed Features for Red Hat Enterprise Linux 6.7 Red Hat Enterprise Linux 6.7 includes the following documentation and feature updates and changes. This document includes a new section, Section 5.1, "Automatic Configuration File Generation with Multipath Helper" . The Multipath Helper application gives you options to create multipath configurations with custom aliases, device blacklists, and settings for the characteristics of individual multipath devices. The defaults section of the multipath.conf configuration file supports a new config_dir parameter. The defaults section of the multipath.conf file is documented in Table 4.1, "Multipath Configuration Defaults" . The defaults , devices , and multipaths sections of the multipath.conf configuration file now support the delay_watch_checks and delay_wait_checks configuration parameters. For information on the configuration parameters, see Chapter 4, The DM-Multipath Configuration File . 1.1.9. New and Changed Features for Red Hat Enterprise Linux 6.8 Red Hat Enterprise Linux 6.8 includes the following documentation and feature updates and changes. The prio configuration parameter now supports the prio "alua exclusive_pref_bit" setting, which will cause multipath to create a path group that contains only the path with the pref bit set and will give that path group the highest priority. For information on the configuration parameters, see Chapter 4, The DM-Multipath Configuration File . As of Red Hat Enterprise Linux release 6.8, the multipathd command supports new format commands that show the status of multipath devices and paths in "raw" format versions. For information on the multipathd command, see Section 5.12, "The multipathd Interactive Console and the multipathd Command" . 1.1.10. New and Changed Features for Red Hat Enterprise Linux 6.9 Red Hat Enterprise Linux 6.9 includes the following documentation and feature updates and changes. The defaults , devices , and multipaths sections of the multipath.conf configuration file now support the skip_kpartx and max_sectors_kb configuration parameters. For information on the configuration parameters, see Chapter 4, The DM-Multipath Configuration File . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/dm_multipath/MPIO_Overview |
Appendix A. VDSM Service and Hooks | Appendix A. VDSM Service and Hooks The VDSM service is used by the Red Hat Virtualization Manager to manage Red Hat Virtualization Hosts (RHVH) and Red Hat Enterprise Linux hosts. VDSM manages and monitors the host's storage, memory, and network resources. It also coordinates virtual machine creation, statistics gathering, log collection and other host administration tasks. VDSM is run as a daemon on each host managed by Red Hat Virtualization Manager. It answers XML-RPC calls from clients. The Red Hat Virtualization Manager functions as a VDSM client. VDSM is extensible via hooks. Hooks are scripts executed on the host when key events occur. When a supported event occurs, VDSM runs any executable hook scripts in /usr/libexec/vdsm/hooks/ nn_event-name / on the host in alphanumeric order. By convention each hook script is assigned a two digit number, included at the front of the file name, to ensure that the order in which the scripts will be run is clear. You can create hook scripts in any programming language; however, Python is used for the examples contained in this chapter. Note that all scripts defined on the host for the event are executed. If you require that a given hook is only executed for a subset of the virtual machines which run on the host, then you must ensure that the hook script itself handles this requirement by evaluating the Custom Properties associated with the virtual machine. Warning VDSM hooks can interfere with the operation of Red Hat Virtualization. A bug in a VDSM hook has the potential to cause virtual machine crashes and loss of data. VDSM hooks should be implemented with caution and tested rigorously. The Hooks API is new and subject to significant change in the future. You can extend VDSM with event-driven hooks. Extending VDSM with hooks is an experimental technology, and this chapter is intended for experienced developers. By setting custom properties on virtual machines it is possible to pass additional parameters, specific to a given virtual machine, to the hook scripts. A.1. Installing a VDSM hook By default, VDSM hooks are not installed. If you need a specific hook, you must install it manually. Prerequisites The host repository must be enabled. You are logged into the host with root permissions. Procedure Get a list of available hooks: Put the host in maintenance mode. Install the desired VDSM hook package on the host: For example, to install the vdsm-hook-vhostmd package on the host, enter the following: Restart the host. Additional resources Enabling the Red Hat Virtualization Host Repository Enabling the Red Hat Enterprise Linux host Repositories | [
"dnf list vdsm\\*hook\\*",
"dnf install <vdsm-hook-name>",
"dnf install vdsm-hook-vhostmd"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/administration_guide/appe-VDSM_and_Hooks |
8.168. pciutils | 8.168. pciutils 8.168.1. RHBA-2014:1006 - pciutils bug fix update Updated pciutils packages that fix two bugs are now available for Red Hat Enterprise Linux 6. The pciutils packages provide various utilities for inspecting and manipulating devices connected to the PCI bus. Bug Fixes BZ# 1032827 Prior to this update, the lspci command did not correctly handle empty PCI slots. As a consequence, lspci printed a warning message when it was used on a system with one or more unused PCI slots. Following this update, lspci disregards empty PCI slots and the described problem no longer occurs. BZ# 998626 Previously, the source link for PCI IDs in the /usr/sbin/update-pciids file of the pciutils package was deprecated, and consequently invoked an outdated list of PCI devices. With this update, /usr/sbin/update-pciids has been amended and now links to the up-to-date PCI ID list. Users of pciutils are advised to upgrade to these updated packages, which fix these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/pciutils |
Chapter 7. Publishing certificates and CRLs | Chapter 7. Publishing certificates and CRLs Red Hat Certificate System includes a customizable publishing framework for the Certificate Manager, enabling certificate authorities to publish certificates, certificate revocation lists (CRLs), and other certificate-related objects to any of the supported repositories: an LDAP-compliant directory, a flat file, and an online validation authority. This chapter explains how to configure a Certificate Manager to publish certificates and CRLs to a file, to a directory, and to the Online Certificate Status Manager. Note Features in this section on TMS are not tested in the evaluation. This section is for reference only. The general process to configure publishing is as follows: Configure publishing to a file, LDAP directory, or OCSP responder. There can be a single publisher or multiple publishers, depending on how many locations will be used. The locations can be split by certificates and CRLs or narrower definitions, such as certificate type. Rules determine which type to publish and to what location by being associated with the publisher. Set rules to determine what certificates are published to the locations. Any rule which a certificate or CRL matches is activated, so the same certificate can be published to a file and to an LDAP directory by matching a file-based rule and matching a directory-based rule. Rules can be set for each object type: CA certificates, CRLs, user certificates, and cross-pair certificates. Disable all rules that will not be used. Configure CRLs. CRLs must be configured before they can be published. See Chapter 6, Revoking certificates and issuing CRLs . Enable publishing after setting up publishers, mappers, and rules. Once publishing is enabled, the server starts publishing immediately. If the publishers, mappers, and rules are not completely configured, publishing may not work correctly or at all. 7.1. About publishing The Certificate System is capable of publishing certificates to a file or an LDAP directory and of publishing CRLs to a file, an LDAP directory, or to an OCSP responder. For additional flexibility, specific types of certificates or CRLs can be published to a single format or all three. For example, CA certificates can be published only to a directory and not to a file, and user certificates can be published to both a file and a directory. NOTE An OCSP responder only provides information about CRLs; certificates are not published to an OCSP responder. Different publishing locations can be set for certificates files and CRL files, as well as different publishing locations for different types of certificates files or different types of CRL files. Similarly, different types of certificates and different types of CRLs can be published to different places in a directory. For example, certificates for users from the West Coast division of a company can be published in one branch of the directory, while certificates for users in the East Coast division can be published to another branch in the directory. When publishing is enabled, every time a certificate or a CRL is issued, updated, or revoked, the publishing system is invoked. The certificate or CRL is evaluated by the rules to see if it matches the type and predicate set in the rule. The type specifies if the object is a CRL, CA certificate, or any other certificate. The predicate sets more criteria for the type of object being evaluated. 
For example, it can specify user certificates, or it can specify West Coast user certificates. To use predicates, a value needs to be entered in the predicate field of the publishing rule, and a corresponding value (although formatted somewhat differently) needs to be contained in the certificate or certificate request to match. The value in the certificate or certificate request may be derived from information in the certificate, such as the type of certificate, or may be derived from a hidden value that is placed in the request form. If no predicate is set, all certificates of that type are considered to match. For example, all CRLs match the rule if CRL is set as the type. Every rule that is matched publishes the certificate or CRL according to the method and location specified in that rule. A given certificate or CRL can match no rules, one rule, more than one rule, or all rules. The publishing system attempts to match every certificate and CRL issued against all rules. When a rule is matched, the certificate or CRL is published according to the method and location specified in the publisher associated with that rule. For example, if a rule matches all certificates issued to users, and the rule has a publisher that publishes to a file in the location /etc/CS/certificates , the certificate is published as a file to that location. If another rule matches all certificates issued to users, and the rule has a publisher that publishes to the LDAP attribute userCertificate;binary attribute, the certificate is published to the directory specified when LDAP publishing was enabled in this attribute in the user's entry. For rules that specify to publish to a file, a new file is created when either a certificate or a CRL is issued in the stipulated directory. For rules that specify to publish to an LDAP directory, the certificate or CRL is published to the entry specified in the directory, in the attribute specified. The CA overwrites the values for any published certificate or CRL attribute with any subsequent certificate or CRL. Simply put, any existing certificate or CRL that is already published is replaced by the certificate or CRL. For rules that specify to publish to an Online Certificate Status Manager, a CRL is published to this manager. Certificates are not published to an Online Certificate Status Manager. For LDAP publishing, the location of the user's entry needs to be determined. Mappers are used to determine the entry to which to publish. The mappers can contain an exact DN for the entry, some variable that associates information that can be gotten from the certificate to create the DN, or enough information to search the directory for a unique attribute or set of attributes in the entry to ascertain the correct DN for the entry. When a certificate is revoked, the server uses the publishing rules to locate and delete the corresponding certificate from the LDAP directory or from the filesystem. When a certificate expires, the server can remove that certificate from the configured directory. The server does not do this automatically; the server must be configured to run the appropriate job. Setting up publishing involves configuring publishers, mappers, and rules. 7.1.1. Publishers Publishers specify the location to which certificates and CRLs are published. When publishing to a file, publishers specify the filesystem publishing directory. 
When publishing to an LDAP directory, publishers specify the attribute in the directory that stores the certificate or CRL; a mapper is used to determine the DN of the entry. For every DN, a different formula is set for deriving that DN. The location of the LDAP directory is specified when LDAP publishing is enabled. When publishing a CRL to an OCSP responder, publishers specify the hostname and URI of the Online Certificate Status Manager. 7.1.2. Mappers Mappers are only used in LDAP publishing. Mappers construct the DN for an entry based on information from the certificate or the certificate request. The server has information from the subject name of the certificate and the certificate request and needs to know how to use this information to create a DN for that entry. The mapper provides a formula for converting the information available either to a DN or to some unique information that can be searched in the directory to obtain a DN for the entry. 7.1.3. Rules Rules for file, LDAP, and OCSP publishing tell the server whether and how a certificate or CRL is to be published. A rule first defines what is to be published, a certificate or CRL matching certain characteristics, by setting a type and predicate for the rule. A rule then specifies the publishing method and location by being associated with a publisher and, for LDAP publishing, with a mapper. Rules can be as simple or complex as necessary for the PKI deployment and are flexible enough to accommodate different scenarios. 7.1.4. Publishing to Files The server can publish certificates and CRLs to flat files, which can then be imported into any repository, such as a relational database. When the server is configured to publish certificates and CRLs to file, the published files are DER-encoded binary blobs, base-64 encoded text blobs, or both. For each certificate the server issues, it creates a file that contains the certificate in either DER-encoded or base-64 encoded format. Each file is named either cert- serial_number .der or cert- serial_number .b64 . The serial_number is the serial number of the certificate contained in the file. For example, the filename for a DER-encoded certificate with the serial number 1234 is cert-1234.der . Every time the server generates a CRL, it creates a file that contains the new CRL in either DER-encoded or base-64 encoded format. Each file is named either issuing_point_name-this_update .der or issuing_point_name-this_update .b64 , depending on the format. The issuing_point_name identifies the CRL issuing point which published the CRL, and this_update specifies the value derived from the time-dependent update value for the CRL contained in the file. For example, the filename for a DER-encoded CRL with the value This Update: Friday January 28 15:36:00 PST 2023 , is MasterCRL-20230128-153600.der . 7.1.5. OCSP Publishing There are two forms of Certificate System OCSP services, an internal service for the Certificate Manager and the Online Certificate Status Manager. The internal service checks the internal database of the Certificate Manager to report on the status of a certificate. The internal service is not set for publishing; it uses the certificates stored in its internal database to determine the status of a certificate. The Online Certificate Status Manager checks CRLs sent to it by Certificate Manager. A publisher is set for each location a CRL is sent and one rule for each type of CRL sent. 
For detailed information on both OCSP services, see Section 6.6, "Using the Online Certificate Status Protocol (OCSP) responder" . 7.1.6. LDAP Publishing In LDAP publishing , the server publishes the certificates, CRLs, and other certificate-related objects to a directory using LDAP or LDAPS. The branch of the directory to which it publishes is called the publishing directory . For each certificate the server issues, it creates a blob that contains the certificate in its DER-encoded format in the specified attribute of the user's entry. The certificate is published as a DER encoded binary blob. Every time the server generates a CRL, it creates a blob that contains the new CRL in its DER-encoded format in the specified attribute of the entry for the CA. The server can publish certificates and CRLs to an LDAP-compliant directory using the LDAP protocol or LDAP over SSL (LDAPS) protocol, and applications can retrieve the certificates and CRLs over HTTP. Support for retrieving certificates and CRLs over HTTP enables some browsers to import the latest CRL automatically from the directory that receives regular updates from the server. The browser can then use the CRL to check all certificates automatically to ensure that they have not been revoked. For LDAP publishing to work, the user entry must be present in the LDAP directory. If the server and publishing directory become out of sync for some reason, privileged users (administrators and agents) can also manually initiate the publishing process. For instructions, see Section 7.11.2, "Manually updating the crl in the directory" . 7.2. Configuring publishing to a file The general process to configure publishing involves setting up a publisher to publish the certificates or CRLs to the specific location. There can be a single publisher or multiple publishers, depending on how many locations will be used. The locations can be split by certificates and CRLs or finer definitions, such as certificate type. Rules determine which type to publish and to what location by being associated with the publisher. Publishing to file simply publishes the CRLs or certificates to text files on a given host. Publishers must be created and configured for each publishing location; publishers are not automatically created for publishing to a file. To publish all files to a single location, create one publisher. To publish to different locations, create a publisher for each location. A location can either contain an object type, like user certificates, or a subset of an object type, like West Coast user certificates. To create publishers for publishing to files: Get the URL and port that apply to your instance: Log into the Certificate Manager Console. For example: Note For more information on using pkiconsole , run pkiconsole --help . pkiconsole is being deprecated and will be replaced by a new browser-based UI in a future major release. Although pkiconsole will continue to be available until the replacement UI is released, we encourage using the command line equivalent of pkiconsole at this time, as the pki CLI will continue to be supported and improved upon even when the new browser-based UI becomes available in the future. In the Configuration tab, select Certificate Manager from the navigation tree on the left. Select Publishing , and then Publishers . The Publishers Management tab, which lists configured publisher instances, opens on the right. Click Add to open the Select Publisher Plug-in Implementation window, which lists registered publisher modules. 
Select the FileBasedPublisher module, then open the editor window. This is the module that enables the Certificate Manager to publish certificates and CRLs to files. Configure the information for publishing the certificate: The publisher ID, an alphanumeric string with no spaces, like PublishCertsToFile . The path to the directory in which the Certificate Manager should publish the files. The path can be an absolute path or can be relative to the Certificate System instance directory. For example, /export/CS/certificates . The file type to publish, by selecting the checkboxes for DER-encoded files, base-64 encoded files, or both. For CRLs, the format of the timestamp. Published certificates include serial numbers in their file names, while CRLs use timestamps. For CRLs, whether to generate a link in the file to go to the latest CRL. If enabled, the link assumes that the name of the CRL issuing point to use with the extension will be supplied in the crlLinkExt field. For CRLs, whether to compress (zip) CRLs and the compression level to use. After configuring the publisher, configure the rules for the published certificates and CRLs, as described in Section 7.5, "Creating rules" . 7.3. Configuring publishing to an OCSP The general process to configure publishing involves setting up a publisher to publish the certificates or CRLs to the specific location. There can be a single publisher or multiple publishers, depending on how many locations will be used. The locations can be split by certificates and CRLs or finer definitions, such as certificate type. Rules determine which type to publish and to what location by being associated with the publisher. Publishing to an OCSP Manager is a way to publish CRLs to a specific location for client verification. A publisher must be created and configured for each publishing location; publishers are not automatically created for publishing to the OCSP responder. Create a single publisher to publish everything to a single location, or create a publisher for every location to which CRLs will be published. Each location can contain a different kind of CRL. Enabling publishing to an OCSP with client authentication Direct CA->OCSP CRL publishing is set up automatically when an OCSP instance is created. If you have set up an OCSP instance, it is likely you do not need any further action. Log into the Certificate Manager Console. Note pkiconsole is being deprecated and will be replaced by a new browser-based UI in a future major release. Although pkiconsole will continue to be available until the replacement UI is released, we encourage using the command line equivalent of pkiconsole at this time, as the pki CLI will continue to be supported and improved upon even when the new browser-based UI becomes available in the future. In the Configuration tab, select Certificate Manager from the navigation tree on the left. Select Publishing , and then Publishers . Click Add to open the Select Publisher Plug-in Implementation window, which lists registered publisher modules. Select the OCSPPublisher module, then open the editor window. This is the publisher module that enables the Certificate Manager to publish CRLs to the Online Certificate Status Manager. The publisher ID must be an alphanumeric string with no spaces, like PublishCertsToOCSP . The host can be the fully-qualified domain name, such as ocspResponder.example.com , or an IPv4 or IPv6 address. The value of the port is the port number on which the OCSP server is running.
The default path is the directory to send the CRL to, like /ocsp/agent/ocsp/addCRL . If client authentication is used ( enableClientAuth is checked), then the nickname field gives the nickname of the certificate to use for authentication. This certificate must already exist in the OCSP security database; this will usually be the CA subsystem certificate. Create a user entry for the CA on the OCSP Manager. The user is used to authenticate to the OCSP when sending a new CRL. There are two things required: Name the OCSP user entry after the CA server, like CA- hostname-EEport . Use whatever certificate was specified in the publisher configuration as the user certificate in the OCSP user account. This is usually the CA's subsystem certificate. Setting up subsystem users is covered in Section 11.3.2.1, "Creating role users" . After configuring the publisher, configure the rules for the published certificates and CRLs, as described in Section 7.5, "Creating rules" . 7.4. Configuring publishing to an LDAP directory The general process to configure publishing involves setting up a publisher to publish the certificates or CRLs to the specific location. There can be a single publisher or multiple publishers, depending on how many locations will be used. The locations can be split by certificates and CRLs or finer definitions, such as certificate type. Rules determine which type to publish and to what location by being associated with the publisher. Configuring LDAP publishing is similar to other publishing procedures, with additional steps to configure the directory: Configure the Directory Server to which certificates will be published. Certain attributes have to be added to entries, and bind identities and authentication methods have to be configured. Configure a publisher for each type of object published: CA certificates, cross-pair certificates, CRLs, and user certificates. The publisher declares in which attribute to store the object. The attributes set by default are the X.500 standard attributes for storing each object type. This attribute can be changed in the publisher, but generally, it is not necessary to change the LDAP publishers. Set up mappers to enable an entry's DN to be derived from the certificate's subject name. This generally does not need to be set for CA certificates, CRLs, and user certificates. There can be more than one mapper set for a type of certificate. This can be useful, for example, to publish certificates for two sets of users from different divisions of a company who are located in different parts of the directory tree. A mapper is created for each of the groups to specify a different branch of the tree. For details about setting up mappers, see Section 7.4.3, "Creating mappers" . Create rules to connect publishers to mappers, as described in Section 7.5, "Creating rules" . Enable publishing, as described in Section 7.6, "Enabling publishing" . Note CRL publishing by way of CA->LDAP and then OCSP<-LDAP is the Common-Criteria -evaluated method for CRL publishing. Please see 7.4.7 "Configuration for CRL publishing" in the Planning, Installation and Deployment Guide (Common Criteria Edition) for an example setup. 7.4.1. Configuring the LDAP directory Before certificates and CRLs can be published, the Directory Server must be configured to work with the publishing system. This means that user entries must have attributes that allow them to receive certificate information, and entries must be created to represent the CRLs. Set up the entry for the CA.
For the Certificate Manager to publish its CA certificate and CRL, the directory must include an entry for the CA. TIP When LDAP publishing is configured, the Certificate Manager automatically creates or converts an entry for the CA in the directory. This option is set in both the CA and CRL mapper instances and enabled by default. If the directory restricts the Certificate Manager from creating entries in the directory, turn off this option in those mapper instances, and add an entry for the CA manually in the directory. When adding the CA's entry to the directory, select the entry type based on the DN of the CA: If the CA's DN begins with the cn component, create a new person entry for the CA. Selecting a different type of entry may not allow the cn component to be specified. If the CA's DN begins with the ou component, create a new organizationalunit entry for the CA. The entry does not have to be in the pkiCA or certificationAuthority object class. The Certificate Manager will convert this entry to the pkiCA or certificationAuthority object class automatically by publishing its CA's signing certificate. NOTE The pkiCA object class is defined in RFC 4523, while the certificationAuthority object class is defined in the (obsolete) RFC 2256. Either object class is acceptable, depending on the schema definitions used by the Directory Server. In some situations, both object classes can be used for the same CA entry. For more information on creating directory entries, see the Red Hat Directory Server documentation. Add the correct schema elements to the CA and user directory entries. For a Certificate Manager to publish certificates and CRLs to a directory, it must be configured with specific attributes and object classes. Object Type Schema Reason End-entity certificate userCertificate;binary (attribute) This is the attribute to which the Certificate Manager publishes the certificate. This is a multi-valued attribute, and each value is a DER-encoded binary X.509 certificate. The LDAP object class named inetOrgPerson allows this attribute. The strongAuthenticationUser object class allows this attribute and can be combined with any other object class to allow certificates to be published to directory entries with other object classes. The Certificate Manager does not automatically add this object class to the schema table of the corresponding Directory Server. If the directory object that it finds does not allow the userCertificate;binary attribute, adding or removing the certificate fails. CA certificate caCertificate;binary (attribute) This is the attribute to which the Certificate Manager publishes the certificate. The Certificate Manager publishes its own CA certificate to its own LDAP directory entry when the server starts. The entry corresponds to the Certificate Manager's issuer name. This is a required attribute of the pkiCA or certificationAuthority object class. The Certificate Manager adds this object class to the directory entry for the CA if it can find the CA's directory entry. CRL certificateRevocationList;binary (attribute) This is the attribute to which the Certificate Manager publishes the CRL. The Certificate Manager publishes the CRL to its own LDAP directory entry. The entry corresponds to the Certificate Manager's issuer name. This is an attribute of the pkiCA or certificationAuthority object class. The value of the attribute is the DER-encoded binary X.509 CRL. 
The CA's entry must already contain the pkiCA or certificationAuthority object class for the CRL to be published to the entry. Delta CRL deltaRevocationList;binary (attribute) This is the attribute to which the Certificate Manager publishes the delta CRL. The Certificate Manager publishes the delta CRL to its own LDAP directory entry, separate from the full CRL. The delta CRL entry corresponds to the Certificate Manager's issuer name. This attribute belongs to the deltaCRL or certificationAuthority-V2 object class. The value of the attribute is the DER-encoded binary X.509 delta CRL. Set up a bind DN for the Certificate Manager to use to access the Directory Server. The Certificate Manager user must have read-write permissions to the directory to publish certificates and CRLs to the directory so that the Certificate Manager can modify the user entries with certificate-related information and the CA entry with CA's certificate and CRL related information. The bind DN entry can be either of the following: An existing DN that has write access, such as the Directory Manager. A new user which is granted write access. The entry can be identified by the Certificate Manager's DN, such as cn=testCA, ou=Research Dept, o=Example Corporation, st=California, c=US . NOTE Carefully consider what privileges are given to this user. This user can be restricted in what it can write to the directory by creating ACLs for the account. For instructions on giving write access to the Certificate Manager's entry, see the Directory Server documentation. Set the directory authentication method for how the Certificate Manager authenticates to Directory Server. There are three options: basic authentication (simple username and password); SSL without client authentication (simple username and password); and SSL with client authentication (certificate-based). See the Red Hat Directory Server documentation for instructions on setting up these methods of communication with the server. 7.4.2. Configuring LDAP publishers The Certificate Manager creates, configures, and enables a set of publishers that are associated with LDAP publishing. The default publishers (for CA certificates, user certificates, CRLs, and cross-pair certificates) already conform to the X.500 standard attributes for storing certificates and CRLs and do not need to be changed. Table 7.1. LDAP publishers Publisher Description LdapCaCertPublisher Publishes CA certificates to the LDAP directory. LdapCrlPublisher Publishes CRLs to the LDAP directory. LdapDeltaCrlPublisher Publishes delta CRLs to the LDAP directory. LdapUserCertPublisher Publishes all types of end-entity certificates to the LDAP directory. LdapCrossCertPairPublisher Publishes cross-signed certificates to the LDAP directory. 7.4.3. Creating mappers Mappers are only used with LDAP publishing. Mappers define a relationship between a certificate's subject name and the DN of the directory entry to which the certificate is published. The Certificate Manager needs to derive the DN of the entry from the certificate or the certificate request so it can determine which entry to use. The mapper defines the relationship between the DN for the user entry and the subject name of the certificate or other input information so that the exact DN of the entry can be determined and found in the directory. When it is configured, the Certificate Manager automatically creates a set of mappers defining the most common relationships. The default mappers are listed in Table 7.2, "Default mappers" . Table 7.2. 
Default mappers Mapper Description LdapUserCertMap Locates the correct attribute of user entries in the directory in order to publish user certificates. LdapCrlMap Locates the correct attribute of the CA's entry in the directory in order to publish the CRL. LdapCaCertMap Locates the correct attribute of the CA's entry in the directory in order to publish the CA certificate. To use the default mappers, configure each of the macros by specifying the DN pattern and whether to create the CA entry in the directory. To use other mappers, create and configure an instance of the mapper. For more information, see Section C.2, "Mapper plugin modules" . Log into the Certificate Manager Console. Note pkiconsole is being deprecated and will be replaced by a new browser-based UI in a future major release. Although pkiconsole will continue to be available until the replacement UI is released, we encourage using the command line equivalent of pkiconsole at this time, as the pki CLI will continue to be supported and improved upon even when the new browser-based UI becomes available in the future. In the Configuration tab, select Certificate Manager from the navigation tree on the left. Select Publishing , and then Mappers . The Mappers Management tab, which lists configured mappers, opens on the right. To create a new mapper instance, click Add . The Select Mapper Plugin Implementation window opens, which lists registered mapper modules. Select a module, and edit it. For complete information about these modules, see Section C.2, "Mapper plugin modules" . Edit the mapper instance, and click OK . See Section C.2, "Mapper plugin modules" for detailed information about each mapper. 7.4.4. Completing configuration: rules and enabling After configuring the mappers for LDAP publishing, configure the rules for the published certificates and CRLs, as described in Section 7.5, "Creating rules" . Once the configuration is complete, enable publishing, as described in Section 7.6, "Enabling publishing" . 7.5. Creating rules Rules determine what certificate object is published in what location. Rules work independently, not in tandem. A certificate or CRL that is being published is matched against every rule. Any rule which it matches is activated. In this way, the same certificate or CRL can be published to a file, to an Online Certificate Status Manager, and to an LDAP directory by matching a file-based rule, an OCSP rule, and matching a directory-based rule. Rules can be set for each object type: CA certificates, CRLs, user certificates, and cross-pair certificates. The rules can be more detailed for different kinds of certificates or different kinds of CRLs. The rule first determines if the object matches by matching the type and predicate set up in the rule with the object. Where matching objects are published is determined by the publisher and mapper associated with the rule. Rules are created for each type of certificate the Certificate Manager issues. Modify publishing rules by doing the following: Log into the Certificate Manager Console. Note pkiconsole is being deprecated and will be replaced by a new browser-based UI in a future major release. Although pkiconsole will continue to be available until the replacement UI is released, we encourage using the command line equivalent of pkiconsole at this time, as the pki CLI will continue to be supported and improved upon even when the new browser-based UI becomes available in the future. 
In the Configuration tab, select Certificate Manager from the navigation tree on the left. Select Publishing , and then Rules . The Rules Management tab, which lists configured rules, opens on the right. To edit an existing rule, select that rule from the list, and click Edit . This opens the Rule Editor window. To create a rule, click Add . This opens the Select Rule Plug-in Implementation window. Select the Rule module. This is the only default module. If any custom modules have been registered, they are also available. Edit the rule. type . This is the type of certificate for which the rule applies. For a CA signing certificate, the value is cacert . For a cross-signed certificate, the value is xcert . For all other types of certificates, the value is certs . For CRLs, specify crl . predicate . This sets the predicate value for the type of certificate or CRL issuing point to which this rule applies. The predicate values for CRL issuing points, delta CRLs, and certificates are listed in Table 7.3, "Predicate expressions" . enable . This sets whether the rule is enabled. mapper . Mappers are not necessary when publishing to a file; they are only needed for LDAP publishing. If this rule is associated with a publisher that publishes to an LDAP directory, select an appropriate mapper here. Leave blank for all other forms of publishing. publisher . Sets the publisher to associate with the rule. The following table lists the predicates that can be used to identify CRL issuing points and delta CRLs and certificate profiles. Table 7.3. Predicate expressions Predicate Type Predicate CRL Issuing Point issuingPointId== Issuing_Point_Instance_ID && isDeltaCRL==[true|false] To publish only the master CRL, set isDeltaCRL==false . To publish only the delta CRL, set isDeltaCRL==true . To publish both, set a rule for the master CRL and another rule for the delta CRL. Certificate Profile profileId== profile_name To publish certificates based on the profile used to issue them, set profileId== to a profile name, such as caServerCert . 7.6. Enabling publishing Publishing can be enabled for only files, only LDAP, or both. Publishing should be enabled after setting up publishers, rules, and mappers. Once enabled, the server attempts to begin publishing. If publishing was not configured correctly before being enabled, publishing may exhibit undesirable behavior or may fail. NOTE Configure CRLs. CRLs must be configured before they can be published. See Chapter 6, Revoking certificates and issuing CRLs . Log into the Certificate Manager Console. Note pkiconsole is being deprecated and will be replaced by a new browser-based UI in a future major release. Although pkiconsole will continue to be available until the replacement UI is released, we encourage using the command line equivalent of pkiconsole at this time, as the pki CLI will continue to be supported and improved upon even when the new browser-based UI becomes available in the future. In the Configuration tab, select Certificate Manager from the navigation tree on the left. Select Publishing . The right pane shows the details for publishing to an LDAP-compliant directory. To enable publishing to a file only, select Enable Publishing (This corresponds to the ca.publish.enable parameter in CS.cfg ). To enable LDAP publishing, select both Enable Publishing and Enable Default LDAP Connection (This corresponds to ca.publish.ldappublish.enable in CS.cfg ). A command-line spot-check of these CS.cfg parameters is sketched after the command listing below. In the Destination section, set the information for the Directory Server instance. Host name .
If the Directory Server is configured for SSL client authenticated communication, the name must match the cn component in the subject DN of the Directory Server's SSL server certificate. The hostname can be the fully-qualified domain name or an IPv4 or IPv6 address. Port number . Directory Server Publishing Port Directory Manager DN . This is the distinguished name (DN) of the directory entry that has Directory Manager privileges. The Certificate Manager uses this DN to access the directory tree and to publish to the directory. The access control set up for this DN determines whether the Certificate Manager can perform publishing. It is possible to create another DN that has limited read-write permissions for only those attributes that the publishing system actually needs to write. Password . The CA uses this password to bind to the LDAP directory to which the certificate or CRL is published. The Certificate Manager saves this password in its password.conf file. For example: Note The parameter name that identifies the publishing password ( CA LDAP Publishing ) is set in the Certificate Manager's CS.cfg file in the ca.publish.ldappublish.ldap.ldapauth.bindPWPrompt parameter, and it can be edited. Client certificate . This sets the certificate the Certificate Manager uses for SSL client authentication to the publishing directory. By default, the Certificate Manager uses its SSL server certificate. LDAP version . Select LDAP version 3. Authentication . The way the Certificate Manager authenticates to the Directory Server. The choices are Basic authentication and SSL client authentication . If the Directory Server is configured for basic authentication or for SSL communication without client authentication, select Basic authentication and specify values for the Directory manager DN and password. If the Directory Server is configured for SSL communication with client authentication, select SSL client authentication and the Use SSL communication option, and identify the certificate that the Certificate Manager must use for SSL client authentication to the directory. The server attempts to connect to the Directory Server. If the information is incorrect, the server displays an error message. 7.7. Setting up resumable CRL downloads Certificate System provides an option for interrupted CRL downloads to be resumed smoothly. This is done by publishing the CRLs as a plain file over HTTP. This method of downloading CRLs gives flexibility in retrieving CRLs and lowers overall network congestion. Retrieving CRLs using curl Because CRLs can be published as a text file over HTTP, they can be manually retrieved from the CA using a tool such as curl . The curl command can be used to retrieve a published CRL. For example, to retrieve a full CRL that is newer than the full CRL: Download the latest published CRL file, for example: Convert the binary file to base-64. For example: Use the PrettyPrintCrl tool to convert the base-64 to pretty-print format. For example: 7.8. Publishing cross-pair certificates The cross-pair certificates can be published as a crossCertificatePair entry to an LDAP directory or to a file; this is enabled by default. If this has been disabled, it can be re-enabled through the Certificate Manager Console by doing the following: Open the CA console. Note pkiconsole is being deprecated and will be replaced by a new browser-based UI in a future major release. 
Although pkiconsole will continue to be available until the replacement UI is released, we encourage using the command line equivalent of pkiconsole at this time, as the pki CLI will continue to be supported and improved upon even when the new browser-based UI becomes available in the future. In the Configuration tab, select the Certificate Manager link in the left pane, then the Publishing link. Click the Rules link under Publishing . This opens the Rules Management pane on the right. If the rule exists and has been disabled, select the enable checkbox. If the rule has been deleted, then click Add and create a new rule. Select xcerts from the type drop-down menu. Make sure the enable checkbox is selected. Select LdapCaCertMap from the mapper drop-down menu. Select LdapCrossCertPairPublisher from the publisher drop-down menu. The mapper and publisher specified in the publishing rule are both listed under Mapper and Publisher under the Publishing link in the left navigation window of the CA Console. The mapper, LdapCaCertMap , by default designates that the crossCertificatePair be stored to the LdapCaSimpleMap LDAP entry. The publisher, LDAPCrossPairPublisher , by default sets the attribute to store the cross-pair certificate in the CA entry to crossCertificatePair;binary . 7.9. Testing publishing to files To verify that the Certificate Manager is publishing certificates and CRLs correctly to file: Open the CA's end-entities page, and request a certificate. Approve the request through the agent services page, if required. Retrieve the certificate from the end-entities page, and download the certificate into the browser. Check whether the server generated the DER-encoded file containing the certificate. Open the directory to which the binary blob of the certificate is supposed to be published. The certificate file should be named cert- serial_number .der . Convert the DER-encoded certificate to its base 64-encoded format using the Binary to ASCII tool. For more information on this tool, refer to the BtoA(1) man page. input_file sets the path to the file that contains the DER-encoded certificate, and output_file sets the path to the file to write the base-64 encoded certificate. Open the ASCII file; the base-64 encoded certificate is similar to the one shown: Convert the base 64-encoded certificate to a readable form using the Pretty Print Certificate tool. For more information on this tool, refer to the PrettyPrintCert(1) man page. input_file sets the path to the ASCII file that contains the base-64 encoded certificate, and output_file , optionally, sets the path to the file to write the certificate. If an output file is not set, the certificate information is written to the standard output. Compare the output with the certificate issued; check the serial number in the certificate with the one used in the filename. If everything matches, the Certificate Manager is configured correctly to publish certificates to file. Revoke the certificate. Check whether the server generated the DER-encoded file containing the CRL. Open the directory to which the server is to publish the CRL as a binary blob. The CRL file should have a name in the form crl- this_update .der . this_update specifies the value derived from the time-dependent This Update variable of the CRL. Convert the DER-encoded CRL to its base 64-encoded format using the Binary to ASCII tool. Convert the base 64-encoded CRL to readable form using the Pretty Print CRL tool. Compare the output. 7.10. 
Viewing certificates and CRLs published to file Certificates and CRLs can be published to two types of files: base-64 encoded or DER-encoded. The content of these files can be viewed by converting the files to pretty-print format using the dumpasn1 tool or the PrettyPrintCert or PrettyPrintCrl tool. To view the content in a base-64 encoded file: Convert the base-64 file to binary. For example: Use the PrettyPrintCert or PrettyPrintCrl tool to convert the binary file to pretty-print format. For example: To view the content of a DER-encoded file, simply run the dumpasn1 , PrettyPrintCert , or PrettyPrintCrl tool with the DER-encoded file. For example: 7.11. Updating certificates and CRLs in a directory The Certificate Manager and the publishing directory can become out of sync if certificates are issued or revoked while the Directory Server is down. Certificates that were issued or revoked need to be published or unpublished manually when the Directory Server comes back up. To find certificates that are out of sync with the directory - valid certificates that are not in the directory and revoked or expired certificates that are still in the directory - the Certificate Manager keeps a record of whether a certificate in its internal database has been published to the directory. If the Certificate Manager and the publishing directory become out of sync, use the Update Directory option in the Certificate Manager agent services page to synchronize the publishing directory with the internal database. The following choices are available for synchronizing the directory with the internal database: Search the internal database for certificates that are out of sync and publish or unpublish. Publish certificates that were issued while the Directory Server was down. Similarly, unpublish certificates that were revoked or that expired while Directory Server was down. Publish or unpublish a range of certificates based on serial numbers, from serial number xx to serial number yy . A Certificate Manager's publishing directory can be manually updated by a Certificate Manager agent only. 7.11.1. Manually updating certificates in the directory The Update Directory Server form in the Certificate Manager agent services page can be used to update the directory manually with certificate-related information. This form initiates a combination of the following operations: Update the directory with certificates. Remove expired certificates from the directory. Removing expired certificates from the publishing directory can be automated by scheduling an automated job. Remove revoked certificates from the directory. Manually update the directory with changes by doing the following: Open the Certificate Manager agent services page. Select the Update Directory Server link. Select the appropriate options, and click Update Directory . The Certificate Manager starts updating the directory with the certificate information in its internal database. If the changes are substantial, updating the directory can take considerable time. During this period, any changes made through the Certificate Manager, including any certificates issued or any certificates revoked, may not be included in the update. If any certificates are issued or revoked while the directory is updated, update the directory again to reflect those changes. When the directory update is complete, the Certificate Manager displays a status report. If the process is interrupted, the server logs an error message. 
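As a rough spot-check after a manual update, the published attributes can be read back directly with an LDAP client. The sketch below uses the OpenLDAP ldapsearch utility; the host name, bind DN, and entry DNs are placeholders for your own deployment, and the attribute names are the defaults described earlier in this chapter.

# Check that a user's entry now carries the published certificate
ldapsearch -x -H ldap://ldap.example.com:389 -D "cn=Directory Manager" -W \
    -b "uid=jsmith,ou=People,dc=example,dc=com" "(objectClass=*)" "userCertificate;binary"
# Check that the CA's entry carries the latest CRL
ldapsearch -x -H ldap://ldap.example.com:389 -D "cn=Directory Manager" -W \
    -b "cn=testCA,ou=Research Dept,o=Example Corporation,st=California,c=US" \
    "(objectClass=*)" "certificateRevocationList;binary"

If the attributes are missing or stale, run the Update Directory Server task again as described above.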
If the Certificate Manager is installed as a root CA, the CA signing certificate may get published using the publishing rule set up for user certificates when using the agent interface to update the directory with valid certificates. This may return an object class violation error or other errors in the mapper. Selecting the appropriate serial number range to exclude the CA signing certificate can avoid this problem. The CA signing certificate is the first certificate a root CA issues. Modify the default publishing rule for user certificates by changing the value of the predicate parameter to profileId!=caCACert . Use the LdapCaCertPublisher publisher plugin module to add another rule, with the predicate parameter set to profileId=caCACert , for publishing subordinate CA certificates. 7.11.2. Manually updating the crl in the directory The Certificate Revocation List form in the Certificate Manager agent services page manually updates the directory with CRL-related information. Manually update the CRL information by doing the following: Open the Certificate Manager agent services page. Select Update Revocation List . Click Update . The Certificate Manager starts updating the directory with the CRL in its internal database. If the CRL is large, updating the directory takes considerable time. During this period, any changes made to the CRL may not be included in the update. When the directory is updated, the Certificate Manager displays a status report. If the process is interrupted, the server logs an error message. | [
"pki-server status <CA Instance name>",
"pkiconsole -d <location of CA Admin Cert nssdb> https://server.example.com:_<CA Console Port>_/ca",
"pkiconsole -d nssdb -n 'optional client cert nickname' https://server.example.com:8443/ca",
"pkiconsole -d nssdb -n 'optional client cert nickname' https://server.example.com:8443/ca",
"pkiconsole -d nssdb -n 'optional client cert nickname' https://server.example.com:8443/ca",
"pkiconsole -d nssdb -n 'optional client cert nickname' https://server.example.com:8443/ca",
"CA LDAP Publishing:password",
"curl -k -o MasterCRL.bin -s \"https://rhcs10.example.com:21443/ca/ee/ca/getCRL?op=getCRL&crlIssuingPoint=MasterCRL\"",
"BtoA MasterCRL.bin MasterCRL.b64",
"PrettyPrintCrl MasterCRL.b64",
"pkiconsole -d nssdb -n 'optional client cert nickname' https://server.example.com:8443/ca",
"BtoA input_file output_file",
"-----BEGIN CERTIFICATE----- MMIIBtgYJYIZIAYb4QgIFoIIBpzCCAZ8wggGbMIIBRaADAgEAAgEBMA0GCSqGSIb3DQEBBAUAMFcxC AJBgNVBAYTAlVTMSwwKgYDVQQKEyNOZXRzY2FwZSBDb21tdW5pY2F0aWhfyyuougjgjjgmkgjkgmjg fjfgjjjgfyjfyj9ucyBDb3Jwb3JhdGlvbjpMEaMBgGA1UECxMRSXNzdWluZyhgdfhbfdpffjphotoo gdhkBBdXRob3JpdHkwHhcNOTYxMTA4MDkwNzM0WhcNOTgxMTA4MDkwNzMM0WjBXMQswCQYDVQQGEwJ VUzEsMCoGA1UEChMjTmV0c2NhcGUgQ29tbXVuaWNhdGlvbnMgQ29ycG9yY2F0aW9ucyBDb3Jwb3Jhd GlvbjpMEaMBgGA1UECxMRSXNzdWluZyBBdXRob3JpdHkwHh -----END CERTIFICATE-----",
"PrettyPrintCert input_file [output_file]",
"BtoA input_file output_file",
"PrettyPrintCrl input_file [output_file]",
"AtoB /tmp/example.b64 /tmp/example.bin",
"PrettyPrintCert example.bin example.cert",
"PrettyPrintCrl example.der example.crl"
] | https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide_common_criteria_edition/publishing |
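Section 7.6 maps the console checkboxes to the ca.publish.enable and ca.publish.ldappublish.enable parameters in CS.cfg. A quick way to confirm what is currently enabled is to read those parameters straight from the instance configuration, as sketched below; the instance path is an assumption and may differ in your deployment.

# <instance_name> is a placeholder for your CA instance directory
grep -E '^ca\.publish\.(enable|ldappublish\.enable)=' /var/lib/pki/<instance_name>/ca/conf/CS.cfg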
Chapter 8. Deploying on OpenStack with rootVolume and etcd on local disk | Chapter 8. Deploying on OpenStack with rootVolume and etcd on local disk Important Deploying on Red Hat OpenStack Platform (RHOSP) with rootVolume and etcd on local disk is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . As a day 2 operation, you can resolve and prevent performance issues of your Red Hat OpenStack Platform (RHOSP) installation by moving etcd from a root volume (provided by OpenStack Cinder) to a dedicated ephemeral local disk. 8.1. Deploying RHOSP on local disk If you have an existing RHOSP cloud, you can move etcd from that cloud to a dedicated ephemeral local disk. Warning This procedure is for testing etcd on a local disk only and should not be used on production clusters. In certain cases, complete loss of the control plane can occur. For more information, see "Overview of backup and restore operation" under "Backup and restore". Prerequisites You have an OpenStack cloud with a working Cinder. Your OpenStack cloud has at least 75 GB of available storage to accommodate 3 root volumes for the OpenShift control plane. The OpenStack cloud is deployed with Nova ephemeral storage that uses a local storage backend and not rbd . Procedure Create a Nova flavor for the control plane with at least 10 GB of ephemeral disk by running the following command, replacing the values for --ram , --disk , and <flavor_name> based on your environment: USD openstack flavor create --<ram 16384> --<disk 0> --ephemeral 10 --vcpus 4 <flavor_name> Deploy a cluster with root volumes for the control plane; for example: Example YAML file # ... controlPlane: name: master platform: openstack: type: USD{CONTROL_PLANE_FLAVOR} rootVolume: size: 25 types: - USD{CINDER_TYPE} replicas: 3 # ... Deploy the cluster you created by running the following command: USD openshift-install create cluster --dir <installation_directory> 1 1 For <installation_directory> , specify the location of the customized ./install-config.yaml file that you previously created. Verify that the cluster you deployed is healthy before proceeding to the step by running the following command: USD oc wait clusteroperators --all --for=condition=Progressing=false 1 1 Ensures that the cluster operators are finished progressing and that the cluster is not deploying or updating. Edit the ControlPlaneMachineSet (CPMS) to add the additional block ephemeral device that is used by etcd by running the following command: USD oc patch ControlPlaneMachineSet/cluster -n openshift-machine-api --type json -p ' 1 [ { "op": "add", "path": "/spec/template/machines_v1beta1_machine_openshift_io/spec/providerSpec/value/additionalBlockDevices", 2 "value": [ { "name": "etcd", "sizeGiB": 10, "storage": { "type": "Local" 3 } } ] } ] ' 1 Applies the JSON patch to the ControlPlaneMachineSet custom resource (CR). 2 Specifies the path where the additionalBlockDevices are added. 3 Adds the etcd devices with at least local storage of 10 GB to the cluster. 
You can specify values greater than 10 GB as long as the etcd device fits the Nova flavor. For example, if the Nova flavor has 15 GB, you can create the etcd device with 12 GB. Verify that the control plane machines are healthy by using the following steps: Wait for the control plane machine set update to finish by running the following command: USD oc wait --timeout=90m --for=condition=Progressing=false controlplanemachineset.machine.openshift.io -n openshift-machine-api cluster Verify that the control plane machine set reports 3 updated replicas by running the following command: USD oc wait --timeout=90m --for=jsonpath='{.status.updatedReplicas}'=3 controlplanemachineset.machine.openshift.io -n openshift-machine-api cluster Verify that the control plane machine set reports 3 replicas by running the following command: USD oc wait --timeout=90m --for=jsonpath='{.status.replicas}'=3 controlplanemachineset.machine.openshift.io -n openshift-machine-api cluster Verify that the ClusterOperators are not progressing in the cluster by running the following command: USD oc wait clusteroperators --timeout=30m --all --for=condition=Progressing=false Verify that each of the 3 control plane machines has the additional block device you previously created by running the following script: USD cp_machines=USD(oc get machines -n openshift-machine-api --selector='machine.openshift.io/cluster-api-machine-role=master' --no-headers -o custom-columns=NAME:.metadata.name) 1 if [[ USD(echo "USD{cp_machines}" | wc -l) -ne 3 ]]; then exit 1 fi 2 for machine in USD{cp_machines}; do if ! oc get machine -n openshift-machine-api "USD{machine}" -o jsonpath='{.spec.providerSpec.value.additionalBlockDevices}' | grep -q 'etcd'; then exit 1 fi 3 done 1 Retrieves the control plane machines running in the cluster. 2 Exits with an error if the cluster does not have exactly 3 control plane machines. 3 Exits with an error if any control plane machine does not have an additionalBlockDevices entry named etcd . Create a file named 98-var-lib-etcd.yaml by using the following YAML file: Warning This procedure is for testing etcd on a local disk and should not be used on a production cluster. In certain cases, complete loss of the control plane can occur. For more information, see "Overview of backup and restore operation" under "Backup and restore".
apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 98-var-lib-etcd spec: config: ignition: version: 3.4.0 systemd: units: - contents: | [Unit] Description=Mount local-etcd to /var/lib/etcd [Mount] What=/dev/disk/by-label/local-etcd 1 Where=/var/lib/etcd Type=xfs Options=defaults,prjquota [Install] WantedBy=local-fs.target enabled: true name: var-lib-etcd.mount - contents: | [Unit] Description=Create local-etcd filesystem DefaultDependencies=no After=local-fs-pre.target ConditionPathIsSymbolicLink=!/dev/disk/by-label/local-etcd 2 [Service] Type=oneshot RemainAfterExit=yes ExecStart=/bin/bash -c "[ -L /dev/disk/by-label/ephemeral0 ] || ( >&2 echo Ephemeral disk does not exist; /usr/bin/false )" ExecStart=/usr/sbin/mkfs.xfs -f -L local-etcd /dev/disk/by-label/ephemeral0 3 [Install] RequiredBy=dev-disk-by\x2dlabel-local\x2detcd.device enabled: true name: create-local-etcd.service - contents: | [Unit] Description=Migrate existing data to local etcd After=var-lib-etcd.mount Before=crio.service 4 Requisite=var-lib-etcd.mount ConditionPathExists=!/var/lib/etcd/member ConditionPathIsDirectory=/sysroot/ostree/deploy/rhcos/var/lib/etcd/member 5 [Service] Type=oneshot RemainAfterExit=yes ExecStart=/bin/bash -c "if [ -d /var/lib/etcd/member.migrate ]; then rm -rf /var/lib/etcd/member.migrate; fi" 6 ExecStart=/usr/bin/cp -aZ /sysroot/ostree/deploy/rhcos/var/lib/etcd/member/ /var/lib/etcd/member.migrate ExecStart=/usr/bin/mv /var/lib/etcd/member.migrate /var/lib/etcd/member 7 [Install] RequiredBy=var-lib-etcd.mount enabled: true name: migrate-to-local-etcd.service - contents: | [Unit] Description=Relabel /var/lib/etcd After=migrate-to-local-etcd.service Before=crio.service [Service] Type=oneshot RemainAfterExit=yes ExecCondition=/bin/bash -c "[ -n \"USD(restorecon -nv /var/lib/etcd)\" ]" 8 ExecStart=/usr/sbin/restorecon -R /var/lib/etcd [Install] RequiredBy=var-lib-etcd.mount enabled: true name: relabel-var-lib-etcd.service 1 The etcd database must be mounted by the device, not a label, to ensure that systemd generates the device dependency used in this config to trigger filesystem creation. 2 Do not run if the file system dev/disk/by-label/local-etcd already exists. 3 Fails with an alert message if /dev/disk/by-label/ephemeral0 doesn't exist. 4 Migrates existing data to local etcd database. This config does so after /var/lib/etcd is mounted, but before CRI-O starts so etcd is not running yet. 5 Requires that etcd is mounted and does not contain a member directory, but the ostree does. 6 Cleans up any migration state. 7 Copies and moves in separate steps to ensure atomic creation of a complete member directory. 8 Performs a quick check of the mount point directory before performing a full recursive relabel. If restorecon in the file path /var/lib/etcd cannot rename the directory, the recursive rename is not performed. Create the new MachineConfig object by running the following command: USD oc create -f 98-var-lib-etcd.yaml Note Moving the etcd database onto the local disk of each control plane machine takes time. 
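Before moving on to the verification commands below, you can optionally confirm on a single control plane node that /var/lib/etcd is now a separate xfs mount backed by the local ephemeral disk. This is an informal spot-check rather than part of the documented procedure; <control_plane_node> is a placeholder for a node name from oc get nodes.

# Show the filesystem backing /var/lib/etcd on one control plane node
oc debug node/<control_plane_node> -- chroot /host findmnt -o TARGET,SOURCE,FSTYPE /var/lib/etcd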
Verify that the etcd database has been transferred to the local disk of each control plane machine by running the following commands: Verify that the machine config pool has finished updating by running the following command: USD oc wait --timeout=45m --for=condition=Updating=false machineconfigpool/master Verify that the control plane nodes are ready by running the following command: USD oc wait node --selector='node-role.kubernetes.io/master' --for condition=Ready --timeout=30s Verify that the cluster Operators are not progressing in the cluster by running the following command: USD oc wait clusteroperators --timeout=30m --all --for=condition=Progressing=false 8.2. Additional resources Recommended etcd practices Overview of backup and restore options | [
"openstack flavor create --<ram 16384> --<disk 0> --ephemeral 10 --vcpus 4 <flavor_name>",
"controlPlane: name: master platform: openstack: type: USD{CONTROL_PLANE_FLAVOR} rootVolume: size: 25 types: - USD{CINDER_TYPE} replicas: 3",
"openshift-install create cluster --dir <installation_directory> 1",
"oc wait clusteroperators --all --for=condition=Progressing=false 1",
"oc patch ControlPlaneMachineSet/cluster -n openshift-machine-api --type json -p ' 1 [ { \"op\": \"add\", \"path\": \"/spec/template/machines_v1beta1_machine_openshift_io/spec/providerSpec/value/additionalBlockDevices\", 2 \"value\": [ { \"name\": \"etcd\", \"sizeGiB\": 10, \"storage\": { \"type\": \"Local\" 3 } } ] } ] '",
"oc wait --timeout=90m --for=condition=Progressing=false controlplanemachineset.machine.openshift.io -n openshift-machine-api cluster",
"oc wait --timeout=90m --for=jsonpath='{.status.updatedReplicas}'=3 controlplanemachineset.machine.openshift.io -n openshift-machine-api cluster",
"oc wait --timeout=90m --for=jsonpath='{.status.replicas}'=3 controlplanemachineset.machine.openshift.io -n openshift-machine-api cluster",
"oc wait clusteroperators --timeout=30m --all --for=condition=Progressing=false",
"cp_machines=USD(oc get machines -n openshift-machine-api --selector='machine.openshift.io/cluster-api-machine-role=master' --no-headers -o custom-columns=NAME:.metadata.name) 1 if [[ USD(echo \"USD{cp_machines}\" | wc -l) -ne 3 ]]; then exit 1 fi 2 for machine in USD{cp_machines}; do if ! oc get machine -n openshift-machine-api \"USD{machine}\" -o jsonpath='{.spec.providerSpec.value.additionalBlockDevices}' | grep -q 'etcd'; then exit 1 fi 3 done",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 98-var-lib-etcd spec: config: ignition: version: 3.4.0 systemd: units: - contents: | [Unit] Description=Mount local-etcd to /var/lib/etcd [Mount] What=/dev/disk/by-label/local-etcd 1 Where=/var/lib/etcd Type=xfs Options=defaults,prjquota [Install] WantedBy=local-fs.target enabled: true name: var-lib-etcd.mount - contents: | [Unit] Description=Create local-etcd filesystem DefaultDependencies=no After=local-fs-pre.target ConditionPathIsSymbolicLink=!/dev/disk/by-label/local-etcd 2 [Service] Type=oneshot RemainAfterExit=yes ExecStart=/bin/bash -c \"[ -L /dev/disk/by-label/ephemeral0 ] || ( >&2 echo Ephemeral disk does not exist; /usr/bin/false )\" ExecStart=/usr/sbin/mkfs.xfs -f -L local-etcd /dev/disk/by-label/ephemeral0 3 [Install] RequiredBy=dev-disk-by\\x2dlabel-local\\x2detcd.device enabled: true name: create-local-etcd.service - contents: | [Unit] Description=Migrate existing data to local etcd After=var-lib-etcd.mount Before=crio.service 4 Requisite=var-lib-etcd.mount ConditionPathExists=!/var/lib/etcd/member ConditionPathIsDirectory=/sysroot/ostree/deploy/rhcos/var/lib/etcd/member 5 [Service] Type=oneshot RemainAfterExit=yes ExecStart=/bin/bash -c \"if [ -d /var/lib/etcd/member.migrate ]; then rm -rf /var/lib/etcd/member.migrate; fi\" 6 ExecStart=/usr/bin/cp -aZ /sysroot/ostree/deploy/rhcos/var/lib/etcd/member/ /var/lib/etcd/member.migrate ExecStart=/usr/bin/mv /var/lib/etcd/member.migrate /var/lib/etcd/member 7 [Install] RequiredBy=var-lib-etcd.mount enabled: true name: migrate-to-local-etcd.service - contents: | [Unit] Description=Relabel /var/lib/etcd After=migrate-to-local-etcd.service Before=crio.service [Service] Type=oneshot RemainAfterExit=yes ExecCondition=/bin/bash -c \"[ -n \\\"USD(restorecon -nv /var/lib/etcd)\\\" ]\" 8 ExecStart=/usr/sbin/restorecon -R /var/lib/etcd [Install] RequiredBy=var-lib-etcd.mount enabled: true name: relabel-var-lib-etcd.service",
"oc create -f 98-var-lib-etcd.yaml",
"oc wait --timeout=45m --for=condition=Updating=false machineconfigpool/master",
"oc wait node --selector='node-role.kubernetes.io/master' --for condition=Ready --timeout=30s",
"oc wait clusteroperators --timeout=30m --all --for=condition=Progressing=false"
] | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_on_openstack/deploying-openstack-on-local-disk |
2.7. GNOME Power Manager | 2.7. GNOME Power Manager GNOME Power Manager is a daemon that is installed as part of the GNOME desktop environment. Much of the power-management functionality that GNOME Power Manager provided in earlier versions of Red Hat Enterprise Linux has become part of the DeviceKit-power tool in Red Hat Enterprise Linux 6, renamed to UPower in Red Hat Enterprise Linux 7 (see Section 2.6, "UPower" ). However, GNOME Power Manager remains a front end for that functionality. Through an applet in the system tray, GNOME Power Manager notifies you of changes in your system's power status; for example, a change from battery to AC power. It also reports battery status, and warns you when battery power is low. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/power_management_guide/gnome-power-manager |
Chapter 2. Configuring your firewall | Chapter 2. Configuring your firewall If you use a firewall, you must configure it so that OpenShift Container Platform can access the sites that it requires to function. You must always grant access to some sites, and you grant access to more if you use Red Hat Insights, the Telemetry service, a cloud to host your cluster, and certain build strategies. 2.1. Configuring your firewall for OpenShift Container Platform Before you install OpenShift Container Platform, you must configure your firewall to grant access to the sites that OpenShift Container Platform requires. When using a firewall, make additional configurations to the firewall so that OpenShift Container Platform can access the sites that it requires to function. There are no special configuration considerations for services running on only controller nodes compared to worker nodes. Note If your environment has a dedicated load balancer in front of your OpenShift Container Platform cluster, review the allowlists between your firewall and load balancer to prevent unwanted network restrictions to your cluster. Procedure Set the following registry URLs for your firewall's allowlist: URL Port Function registry.redhat.io 443 Provides core container images access.redhat.com 443 Hosts a signature store that a container client requires for verifying images pulled from registry.access.redhat.com . In a firewall environment, ensure that this resource is on the allowlist. registry.access.redhat.com 443 Hosts all the container images that are stored on the Red Hat Ecosystem Catalog, including core container images. quay.io 443 Provides core container images cdn.quay.io 443 Provides core container images cdn01.quay.io 443 Provides core container images cdn02.quay.io 443 Provides core container images cdn03.quay.io 443 Provides core container images cdn04.quay.io 443 Provides core container images cdn05.quay.io 443 Provides core container images cdn06.quay.io 443 Provides core container images sso.redhat.com 443 The https://console.redhat.com site uses authentication from sso.redhat.com You can use the wildcards *.quay.io and *.openshiftapps.com instead of cdn.quay.io and cdn0[1-6].quay.io in your allowlist. You can use the wildcard *.access.redhat.com to simplify the configuration and ensure that all subdomains, including registry.access.redhat.com , are allowed. When you add a site, such as quay.io , to your allowlist, do not add a wildcard entry, such as *.quay.io , to your denylist. In most cases, image registries use a content delivery network (CDN) to serve images. If a firewall blocks access, image downloads are denied when the initial download request redirects to a hostname such as cdn01.quay.io . Set your firewall's allowlist to include any site that provides resources for a language or framework that your builds require. If you do not disable Telemetry, you must grant access to the following URLs to access Red Hat Insights: URL Port Function cert-api.access.redhat.com 443 Required for Telemetry api.access.redhat.com 443 Required for Telemetry infogw.api.openshift.com 443 Required for Telemetry console.redhat.com 443 Required for Telemetry and for insights-operator If you use Alibaba Cloud, Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) to host your cluster, you must grant access to the URLs that offer the cloud provider API and DNS for that cloud: Cloud URL Port Function Alibaba *.aliyuncs.com 443 Required to access Alibaba Cloud services and resources. 
Review the Alibaba endpoints_config.go file to find the exact endpoints to allow for the regions that you use. AWS aws.amazon.com 443 Used to install and manage clusters in an AWS environment. *.amazonaws.com Alternatively, if you choose to not use a wildcard for AWS APIs, you must include the following URLs in your allowlist: 443 Required to access AWS services and resources. Review the AWS Service Endpoints in the AWS documentation to find the exact endpoints to allow for the regions that you use. ec2.amazonaws.com 443 Used to install and manage clusters in an AWS environment. events.amazonaws.com 443 Used to install and manage clusters in an AWS environment. iam.amazonaws.com 443 Used to install and manage clusters in an AWS environment. route53.amazonaws.com 443 Used to install and manage clusters in an AWS environment. *.s3.amazonaws.com 443 Used to install and manage clusters in an AWS environment. *.s3.<aws_region>.amazonaws.com 443 Used to install and manage clusters in an AWS environment. *.s3.dualstack.<aws_region>.amazonaws.com 443 Used to install and manage clusters in an AWS environment. sts.amazonaws.com 443 Used to install and manage clusters in an AWS environment. sts.<aws_region>.amazonaws.com 443 Used to install and manage clusters in an AWS environment. tagging.us-east-1.amazonaws.com 443 Used to install and manage clusters in an AWS environment. This endpoint is always us-east-1 , regardless of the region the cluster is deployed in. ec2.<aws_region>.amazonaws.com 443 Used to install and manage clusters in an AWS environment. elasticloadbalancing.<aws_region>.amazonaws.com 443 Used to install and manage clusters in an AWS environment. servicequotas.<aws_region>.amazonaws.com 443 Required. Used to confirm quotas for deploying the service. tagging.<aws_region>.amazonaws.com 443 Allows the assignment of metadata about AWS resources in the form of tags. *.cloudfront.net 443 Used to provide access to CloudFront. If you use the AWS Security Token Service (STS) and the private S3 bucket, you must provide access to CloudFront. GCP *.googleapis.com 443 Required to access GCP services and resources. Review Cloud Endpoints in the GCP documentation to find the endpoints to allow for your APIs. accounts.google.com 443 Required to access your GCP account. Microsoft Azure management.azure.com 443 Required to access Microsoft Azure services and resources. Review the Microsoft Azure REST API reference in the Microsoft Azure documentation to find the endpoints to allow for your APIs. *.blob.core.windows.net 443 Required to download Ignition files. login.microsoftonline.com 443 Required to access Microsoft Azure services and resources. Review the Azure REST API reference in the Microsoft Azure documentation to find the endpoints to allow for your APIs. Allowlist the following URLs: URL Port Function *.apps.<cluster_name>.<base_domain> 443 Required to access the default cluster routes unless you set an ingress wildcard during installation. api.openshift.com 443 Required both for your cluster token and to check if updates are available for the cluster. console.redhat.com 443 Required for your cluster token. mirror.openshift.com 443 Required to access mirrored installation content and images. This site is also a source of release image signatures, although the Cluster Version Operator needs only a single functioning source. quayio-production-s3.s3.amazonaws.com 443 Required to access Quay image content in AWS. 
rhcos.mirror.openshift.com 443 Required to download Red Hat Enterprise Linux CoreOS (RHCOS) images. sso.redhat.com 443 The https://console.redhat.com site uses authentication from sso.redhat.com storage.googleapis.com/openshift-release 443 A source of release image signatures, although the Cluster Version Operator needs only a single functioning source. Operators require route access to perform health checks. Specifically, the authentication and web console Operators connect to two routes to verify that the routes work. If you are the cluster administrator and do not want to allow *.apps.<cluster_name>.<base_domain> , then allow these routes: oauth-openshift.apps.<cluster_name>.<base_domain> canary-openshift-ingress-canary.apps.<cluster_name>.<base_domain> console-openshift-console.apps.<cluster_name>.<base_domain> , or the hostname that is specified in the spec.route.hostname field of the consoles.operator/cluster object if the field is not empty. Allowlist the following URLs for optional third-party content: URL Port Function registry.connect.redhat.com 443 Required for all third-party images and certified operators. rhc4tp-prod-z8cxf-image-registry-us-east-1-evenkyleffocxqvofrk.s3.dualstack.us-east-1.amazonaws.com 443 Provides access to container images hosted on registry.connect.redhat.com oso-rhc4tp-docker-registry.s3-us-west-2.amazonaws.com 443 Required for Sonatype Nexus, F5 Big IP operators. If you use a default Red Hat Network Time Protocol (NTP) server allow the following URLs: 1.rhel.pool.ntp.org 2.rhel.pool.ntp.org 3.rhel.pool.ntp.org Note If you do not use a default Red Hat NTP server, verify the NTP server for your platform and allow it in your firewall. Additional resources OpenID Connect requirements for AWS STS | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installation_configuration/configuring-firewall |
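As a quick sanity check of the allowlist, you can probe a few of the required hosts on port 443 from a machine inside your network. This is only a sketch and not part of the documented procedure; the hosts shown are examples taken from the tables above. Any HTTP response (even an error code) indicates that the connection itself was allowed, while a timeout or connection error suggests the firewall is still blocking the host:
curl -I --connect-timeout 5 https://registry.redhat.io
curl -I --connect-timeout 5 https://quay.io
curl -I --connect-timeout 5 https://api.openshift.com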
2.13. The Effect of Storage Domain Actions on Storage Capacity | 2.13. The Effect of Storage Domain Actions on Storage Capacity Power on, power off, and reboot a stateless virtual machine These three processes affect the copy-on-write (COW) layer in a stateless virtual machine. For more information, see the Stateless row of the Virtual Machine General Settings table in the Virtual Machine Management Guide . Create a storage domain Creating a block storage domain results in files with the same names as the seven LVs shown below, and initially should take less capacity. ids 64f87b0f-88d6-49e9-b797-60d36c9df497 -wi-ao---- 128.00m inbox 64f87b0f-88d6-49e9-b797-60d36c9df497 -wi-a----- 128.00m leases 64f87b0f-88d6-49e9-b797-60d36c9df497 -wi-a----- 2.00g master 64f87b0f-88d6-49e9-b797-60d36c9df497 -wi-ao---- 1.00g metadata 64f87b0f-88d6-49e9-b797-60d36c9df497 -wi-a----- 512.00m outbox 64f87b0f-88d6-49e9-b797-60d36c9df497 -wi-a----- 128.00m xleases 64f87b0f-88d6-49e9-b797-60d36c9df497 -wi-a----- 1.00g Delete a storage domain Deleting a storage domain frees up capacity on the disk by the same of amount of capacity the process deleted. Migrate a storage domain Migrating a storage domain does not use additional storage capacity. For more information about migrating storage domains, see Migrating Storage Domains Between Data Centers in the Same Environment in the Administration Guide . Move a virtual disk to other storage domain Migrating a virtual disk requires enough free space to be available on the target storage domain. You can see the target domain's approximate free space in the Administration Portal. The storage types in the move process affect the visible capacity. For example, if you move a preallocated disk from block storage to file storage, the resulting free space may be considerably smaller than the initial free space. Live migrating a virtual disk to another storage domain also creates a snapshot, which is automatically merged after the migration is complete. To learn more about moving virtual disks, see Moving a Virtual Disk in the Administration Guide . Pause a storage domain Pausing a storage domain does not use any additional storage capacity. Create a snapshot of a virtual machine Creating a snapshot of a virtual machine can affect the storage domain capacity. Creating a live snapshot uses memory snapshots by default and generates two additional volumes per virtual machine. The first volume is the sum of the memory, video memory, and 200 MB of buffer. The second volume contains the virtual machine configuration, which is several MB in size. When using block storage, rounding up occurs to the nearest unit Red Hat Virtualization can provide. Creating an offline snapshot initially consumes 1 GB of block storage and is dynamic up to the size of the disk. Cloning a snapshot creates a new disk the same size as the original disk. Committing a snapshot removes all child volumes, depending on where in the chain the commit occurs. Deleting a snapshot eventually removes the child volume for each disk and is only supported with a running virtual machine. Previewing a snapshot creates a temporary volume per disk, so sufficient capacity must be available to allow the creation of the preview. Undoing a snapshot preview removes the temporary volume created by the preview. Attach and remove direct LUNs Attaching and removing direct LUNs does not affect the storage domain since they are not a storage domain component. 
For more information, see Overview of Live Storage Migration in the Administration Guide . | [
"ids 64f87b0f-88d6-49e9-b797-60d36c9df497 -wi-ao---- 128.00m inbox 64f87b0f-88d6-49e9-b797-60d36c9df497 -wi-a----- 128.00m leases 64f87b0f-88d6-49e9-b797-60d36c9df497 -wi-a----- 2.00g master 64f87b0f-88d6-49e9-b797-60d36c9df497 -wi-ao---- 1.00g metadata 64f87b0f-88d6-49e9-b797-60d36c9df497 -wi-a----- 512.00m outbox 64f87b0f-88d6-49e9-b797-60d36c9df497 -wi-a----- 128.00m xleases 64f87b0f-88d6-49e9-b797-60d36c9df497 -wi-a----- 1.00g"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/technical_reference/the_effect_of_storage_domain_actions_on_storage_capacity |
Chapter 1. Firewall Rules for Red Hat OpenStack Platform | Chapter 1. Firewall Rules for Red Hat OpenStack Platform This document includes a link to the Red Hat OpenStack Platform (RHOSP) network flow matrix. Use this information to help you define firewall rules. The matrix lists RHOSP core services and their dependencies and describes the ports and protocols they use and the associated traffic flows. It includes the following columns: Service The OpenStack service. Protocol Transmission protocol. Dest. Port Destination port. Source Object Source of data. Dest. Object Destination of data. Source/Dest Pairs Valid source and destination pairs. Dest. Network Destination network. ServiceNetMap Parent Determines the network type used for each service. Traffic Description Notes about the traffic flow. 1.1. Using the Red Hat OpenStack Network Flow Matrix The network flow matrix is a comma separated values (CSV) file that describes flows to and from Red Hat OpenStack Platform (RHOSP) services. Note The network flow matrix describes common traffic flows. It does not describe every possible service and flow. Some flows that are not described in this matrix might be critical to operation. For example, if you block all traffic and then selectively open only the flows described here, you might unintentionally block a necessary flow. That could cause issues that are difficult to troubleshoot. Procedure Use the following link to download the matrix: Red Hat OpenStack Network Flows . For example, right click the link and choose Save link as . Ensure that the downloaded file has the .csv filename extension. For example, if it is .txt , change it to .csv . Use the information in the file to help you formulate firewall rules. You can open it in a spreadsheet application that accepts .csv files, or access it with your own program. | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/firewall_rules_for_red_hat_openstack_platform/firewall-rules |
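For example, after downloading the matrix you can filter it from the command line instead of opening it in a spreadsheet. The commands below are a sketch only: they assume the file was saved as network-flows.csv in the current directory, and the service name pattern (nova) is just an illustration:
head -1 network-flows.csv
grep -i nova network-flows.csv
The first command prints the column headers; the second prints every row that mentions the service you are interested in.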
Chapter 326. SQL Component | Chapter 326. SQL Component Available as of Camel version 1.4 The sql: component allows you to work with databases using JDBC queries. The difference between this component and the JDBC component is that in case of SQL the query is a property of the endpoint, and it uses the message payload as parameters passed to the query. This component uses spring-jdbc behind the scenes for the actual SQL handling. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-sql</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> The SQL component also supports: JDBC-based repository for the Idempotent Consumer EIP pattern. See Section 326.13, "Using the JDBC-based idempotent repository" . JDBC-based repository for the Aggregator EIP pattern. See Section 326.14, "Using the JDBC-based aggregation repository" . 326.1. URI format WARNING: From Camel 2.11 onwards this component can create both consumer (e.g. from() ) and producer endpoints (e.g. to() ). In previous versions, it could only act as a producer. INFO: This component can be used as a Transactional Client . The SQL component uses the following endpoint URI notation: sql:select * from table where id=# order by name[?options] From Camel 2.11 onwards you can use named parameters by using the :`#name_of_the_parameter` style as shown: sql:select * from table where id=:#myId order by name[?options] When using named parameters, Camel will look up the names in the given order of precedence: 1. From the message body if it is a java.util.Map 2. From message headers If a named parameter cannot be resolved, then an exception is thrown. From Camel 2.14 onward you can use Simple expressions as parameters as shown: sql:select * from table where id=:#USD{property.myId} order by name[?options] Notice that the standard ? symbol that denotes the parameters to an SQL query is substituted with the # symbol, because the ? symbol is used to specify options for the endpoint. The ? symbol replacement can be configured on a per-endpoint basis. From Camel 2.17 onwards you can externalize your SQL queries to files in the classpath or file system as shown: sql:classpath:sql/myquery.sql[?options] And the myquery.sql file is in the classpath and is just plain text -- this is a comment select * from table where id = :#USD{property.myId} order by name In the file you can use multiple lines and format the SQL as you wish. And also use comments such as the - dash line. You can append query options to the URI in the following format, ?option=value&option=value&... 326.2. Options The SQL component supports 3 options, which are listed below. Name Description Default Type dataSource (common) Sets the DataSource to use to communicate with the database. DataSource usePlaceholder (advanced) Sets whether to use placeholder and replace all placeholder characters with sign in the SQL queries. This option is default true true boolean resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The SQL endpoint is configured using URI syntax: with the following path and query parameters: 326.2.1. Path Parameters (1 parameter): Name Description Default Type query Required Sets the SQL query to perform. You can externalize the query by using file: or classpath: as prefix and specify the location of the file.
String 326.2.2. Query Parameters (45 parameters): Name Description Default Type allowNamedParameters (common) Whether to allow using named parameters in the queries. true boolean dataSource (common) Sets the DataSource to use to communicate with the database. DataSource dataSourceRef (common) Deprecated Sets the reference to a DataSource to lookup from the registry, to use for communicating with the database. String outputClass (common) Specify the full package and class name to use as conversion when outputType=SelectOne or SelectList. String outputHeader (common) Store the query result in a header instead of the message body. By default, outputHeader == null and the query result is stored in the message body, any existing content in the message body is discarded. If outputHeader is set, the value is used as the name of the header to store the query result and the original message body is preserved. String outputType (common) Make the output of consumer or producer to SelectList as List of Map, or SelectOne as single Java object in the following way: a) If the query has only single column, then that JDBC Column object is returned. (such as SELECT COUNT( ) FROM PROJECT will return a Long object. b) If the query has more than one column, then it will return a Map of that result. c) If the outputClass is set, then it will convert the query result into an Java bean object by calling all the setters that match the column names. It will assume your class has a default constructor to create instance with. d) If the query resulted in more than one rows, it throws an non-unique result exception. SelectList SqlOutputType separator (common) The separator to use when parameter values is taken from message body (if the body is a String type), to be inserted at # placeholders. Notice if you use named parameters, then a Map type is used instead. The default value is comma. , char breakBatchOnConsumeFail (consumer) Sets whether to break batch if onConsume failed. false boolean bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean expectedUpdateCount (consumer) Sets an expected update count to validate when using onConsume. -1 int maxMessagesPerPoll (consumer) Sets the maximum number of messages to poll int onConsume (consumer) After processing each row then this query can be executed, if the Exchange was processed successfully, for example to mark the row as processed. The query can have parameter. String onConsumeBatchComplete (consumer) After processing the entire batch, this query can be executed to bulk update rows etc. The query cannot have parameters. String onConsumeFailed (consumer) After processing each row then this query can be executed, if the Exchange failed, for example to mark the row as failed. The query can have parameter. String routeEmptyResultSet (consumer) Sets whether empty resultset should be allowed to be sent to the hop. Defaults to false. So the empty resultset will be filtered out. false boolean sendEmptyMessageWhenIdle (consumer) If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead. 
false boolean transacted (consumer) Enables or disables transaction. If enabled then if processing an exchange failed then the consumer break out processing any further exchanges to cause a rollback eager false boolean useIterator (consumer) Sets how resultset should be delivered to route. Indicates delivery as either a list or individual object. defaults to true. true boolean exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern pollStrategy (consumer) A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel. PollingConsumerPoll Strategy processingStrategy (consumer) Allows to plugin to use a custom org.apache.camel.component.sql.SqlProcessingStrategy to execute queries when the consumer has processed the rows/batch. SqlProcessingStrategy batch (producer) Enables or disables batch mode false boolean noop (producer) If set, will ignore the results of the SQL query and use the existing IN message as the OUT message for the continuation of processing false boolean useMessageBodyForSql (producer) Whether to use the message body as the SQL and then headers for parameters. If this option is enabled then the SQL in the uri is not used. false boolean alwaysPopulateStatement (advanced) If enabled then the populateStatement method from org.apache.camel.component.sql.SqlPrepareStatementStrategy is always invoked, also if there is no expected parameters to be prepared. When this is false then the populateStatement is only invoked if there is 1 or more expected parameters to be set; for example this avoids reading the message body/headers for SQL queries with no parameters. false boolean parametersCount (advanced) If set greater than zero, then Camel will use this count value of parameters to replace instead of querying via JDBC metadata API. This is useful if the JDBC vendor could not return correct parameters count, then user may override instead. int placeholder (advanced) Specifies a character that will be replaced to in SQL query. Notice, that it is simple String.replaceAll() operation and no SQL parsing is involved (quoted strings will also change). # String prepareStatementStrategy (advanced) Allows to plugin to use a custom org.apache.camel.component.sql.SqlPrepareStatementStrategy to control preparation of the query and prepared statement. SqlPrepareStatement Strategy synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean templateOptions (advanced) Configures the Spring JdbcTemplate with the key/values from the Map Map usePlaceholder (advanced) Sets whether to use placeholder and replace all placeholder characters with sign in the SQL queries. This option is default true true boolean backoffErrorThreshold (scheduler) The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in. 
int backoffIdleThreshold (scheduler) The number of subsequent idle polls that should happen before the backoffMultipler should kick-in. int backoffMultiplier (scheduler) To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured. int delay (scheduler) Milliseconds before the poll. You can also specify time values using units, such as 60s (60 seconds), 5m30s (5 minutes and 30 seconds), and 1h (1 hour). 500 long greedy (scheduler) If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the run polled 1 or more messages. false boolean initialDelay (scheduler) Milliseconds before the first poll starts. You can also specify time values using units, such as 60s (60 seconds), 5m30s (5 minutes and 30 seconds), and 1h (1 hour). 1000 long runLoggingLevel (scheduler) The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that. TRACE LoggingLevel scheduledExecutorService (scheduler) Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool. ScheduledExecutor Service scheduler (scheduler) To use a cron scheduler from either camel-spring or camel-quartz2 component none ScheduledPollConsumer Scheduler schedulerProperties (scheduler) To configure additional properties when using a custom scheduler or any of the Quartz2, Spring based scheduler. Map startScheduler (scheduler) Whether the scheduler should be auto started. true boolean timeUnit (scheduler) Time unit for initialDelay and delay options. MILLISECONDS TimeUnit useFixedDelay (scheduler) Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details. true boolean 326.3. Spring Boot Auto-Configuration The component supports 4 options, which are listed below. Name Description Default Type camel.component.sql.data-source Sets the DataSource to use to communicate with the database. The option is a javax.sql.DataSource type. String camel.component.sql.enabled Enable sql component true Boolean camel.component.sql.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean camel.component.sql.use-placeholder Sets whether to use placeholder and replace all placeholder characters with sign in the SQL queries. This option is default true true Boolean 326.4. Treatment of the message body The SQL component tries to convert the message body to an object of java.util.Iterator type and then uses this iterator to fill the query parameters (where each query parameter is represented by a # symbol (or configured placeholder) in the endpoint URI). If the message body is not an array or collection, the conversion results in an iterator that iterates over only one object, which is the body itself. For example, if the message body is an instance of java.util.List , the first item in the list is substituted into the first occurrence of # in the SQL query, the second item in the list is substituted into the second occurrence of # , and so on. 
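As an illustration of this positional substitution, the short sketch below sends a two-element list to a hypothetical route; the endpoint name, dataSource name, and values are made up for the example and are not part of the Camel distribution:
from("direct:findProjects")
    .to("sql:select * from projects where license = # and id > #?dataSource=myDS");
// the two-element list fills the two # placeholders in order: license first, then id
template.requestBody("direct:findProjects", java.util.Arrays.asList("ASF", 123));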
If batch is set to true , then the interpretation of the inbound message body changes slightly - instead of an iterator of parameters, the component expects an iterator that contains the parameter iterators; the size of the outer iterator determines the batch size. From Camel 2.16 onwards you can use the option useMessageBodyForSql that allows you to use the message body as the SQL statement, and then the SQL parameters must be provided in a header with the key SqlConstants.SQL_PARAMETERS. This allows the SQL component to work more dynamically, because the SQL query comes from the message body. 326.5. Result of the query For select operations, the result is an instance of List<Map<String, Object>> type, as returned by the JdbcTemplate.queryForList() method. For update operations, a NULL body is returned because the update count is only set as a header and never as a body. Note See Header values for more information on the update operation. By default, the result is placed in the message body. If the outputHeader parameter is set, the result is placed in the header. This is an alternative to using a full message enrichment pattern to add headers; it provides a concise syntax for querying a sequence or some other small value into a header. It is convenient to use outputHeader and outputType together: from("jms:order.inbox") .to("sql:select order_seq.nextval from dual?outputHeader=OrderId&outputType=SelectOne") .to("jms:order.booking"); 326.6. Using StreamList From Camel 2.18 onwards the producer supports outputType=StreamList that uses an iterator to stream the output of the query. This allows you to process the data in a streaming fashion, which for example can be used by the Splitter EIP to process each row one at a time, and load data from the database as needed. from("direct:withSplitModel") .to("sql:select * from projects order by id?outputType=StreamList&outputClass=org.apache.camel.component.sql.ProjectModel") .to("log:stream") .split(body()).streaming() .to("log:row") .to("mock:result") .end(); 326.7. Header values When performing update operations, the SQL Component stores the update count in the following message headers: Header Description CamelSqlUpdateCount The number of rows updated for update operations, returned as an Integer object. This header is not provided when using outputType=StreamList. CamelSqlRowCount The number of rows returned for select operations, returned as an Integer object. This header is not provided when using outputType=StreamList. CamelSqlQuery Camel 2.8: Query to execute. This query takes precedence over the query specified in the endpoint URI. Note that query parameters in the header are represented by a ? instead of a # symbol When performing insert operations, the SQL Component stores the rows with the generated keys and the number of these rows in the following message headers ( Available as of Camel 2.12.4, 2.13.1 ): Header Description CamelSqlGeneratedKeysRowCount The number of rows in the header that contains generated keys. CamelSqlGeneratedKeyRows Rows that contain the generated keys (a list of maps of keys). 326.8. Generated keys Available as of Camel 2.12.4, 2.13.1 and 2.14 If you insert data using SQL INSERT, then the RDBMS may support auto generated keys. You can instruct the SQL producer to return the generated keys in headers. To do that, set the header CamelSqlRetrieveGeneratedKeys=true . Then the generated keys will be provided as headers with the keys listed in the table above. You can see more details in this unit test . 326.9.
DataSource You can now set a reference to a DataSource in the URI directly: sql:select * from table where id=# order by name?dataSource=myDS 326.10. Sample In the sample below we execute a query and retrieve the result as a List of rows, where each row is a Map<String, Object> and the key is the column name. First, we set up a table to use for our sample. As this is based on a unit test, we do it in Java: The SQL script createAndPopulateDatabase.sql we execute looks as described below: Then we configure our route and our sql component. Notice that we use a direct endpoint in front of the sql endpoint. This allows us to send an exchange to the direct endpoint with the URI, direct:simple , which is much easier for the client to use than the long sql: URI. Note that the DataSource is looked up in the registry, so we can use standard Spring XML to configure our DataSource . And then we fire the message into the direct endpoint that will route it to our sql component that queries the database. We could configure the DataSource in Spring XML as follows: <jee:jndi-lookup id="myDS" jndi-name="jdbc/myDataSource"/> 326.10.1. Using named parameters Available as of Camel 2.11 In the given route below, we want to get all the projects from the projects table. Notice the SQL query has 2 named parameters, :#lic and :#min. Camel will then look up these parameters in the message body or message headers. Notice in the example below we set two headers with constant values for the named parameters: from("direct:projects") .setHeader("lic", constant("ASF")) .setHeader("min", constant(123)) .to("sql:select * from projects where license = :#lic and id > :#min order by id") Though if the message body is a java.util.Map then the named parameters will be taken from the body. from("direct:projects") .to("sql:select * from projects where license = :#lic and id > :#min order by id") 326.11. Using expression parameters in producers Available as of Camel 2.14 In the given route below, we want to get all the projects from the database. It uses the body of the exchange for defining the license and uses the value of a property as the second parameter. from("direct:projects") .setBody(constant("ASF")) .setProperty("min", constant(123)) .to("sql:select * from projects where license = :#USD{body} and id > :#USD{property.min} order by id") 326.11.1. Using expression parameters in consumers Available as of Camel 2.23 When using the SQL component as consumer, you can now also use expression parameters (simple language) to build dynamic query parameters, such as calling a method on a bean to retrieve an id, date or something. For example in the sample below we call the nextId method on the bean myIdGenerator: from("sql:select * from projects where id = :#USD{bean:myIdGenerator.nextId}") .to("mock:result"); And the bean has the following method: public static class MyIdGenerator { private int id = 1; public int nextId() { return id++; } } Notice that there is no existing Exchange with message body and headers, so the simple expressions you can use in the consumer are mostly useful for calling bean methods as in this example. 326.12. Using IN queries with dynamic values Available as of Camel 2.17 From Camel 2.17 onwards the SQL producer allows you to use SQL queries with IN statements where the IN values are dynamically computed, for example from the message body or a header. To use IN you need to: prefix the parameter name with in: and add ( ) around the parameter. An example explains this better.
The following query is used: -- this is a comment select * from projects where project in (:#in:names) order by id In the following route: from("direct:query") .to("sql:classpath:sql/selectProjectsIn.sql") .to("log:query") .to("mock:query"); Then the IN query can use a header with the key names with the dynamic values such as: // use an array template.requestBodyAndHeader("direct:query", "Hi there!", "names", new String[]{"Camel", "AMQ"}); // use a list List<String> names = new ArrayList<String>(); names.add("Camel"); names.add("AMQ"); template.requestBodyAndHeader("direct:query", "Hi there!", "names", names); // use a string separated values with comma template.requestBodyAndHeader("direct:query", "Hi there!", "names", "Camel,AMQ"); The query can also be specified in the endpoint instead of being externalized (notice that externalizing makes maintaining the SQL queries easier) from("direct:query") .to("sql:select * from projects where project in (:#in:names) order by id") .to("log:query") .to("mock:query"); 326.13. Using the JDBC-based idempotent repository Available as of Camel 2.7 : In this section we will use the JDBC based idempotent repository. TIP:*Abstract class* From Camel 2.9 onwards there is an abstract class org.apache.camel.processor.idempotent.jdbc.AbstractJdbcMessageIdRepository you can extend to build custom JDBC idempotent repository. First we have to create the database table which will be used by the idempotent repository. For Camel 2.7 , we use the following schema: CREATE TABLE CAMEL_MESSAGEPROCESSED ( processorName VARCHAR(255), messageId VARCHAR(100) ) In Camel 2.8 , we added the createdAt column: CREATE TABLE CAMEL_MESSAGEPROCESSED ( processorName VARCHAR(255), messageId VARCHAR(100), createdAt TIMESTAMP ) WARNING:The SQL Server TIMESTAMP type is a fixed-length binary-string type. It does not map to any of the JDBC time types: DATE , TIME , or TIMESTAMP . Customize the JdbcMessageIdRepository Starting with Camel 2.9.1 you have a few options to tune the org.apache.camel.processor.idempotent.jdbc.JdbcMessageIdRepository for your needs: Parameter Default Value Description createTableIfNotExists true Defines whether or not Camel should try to create the table if it doesn't exist. tableExistsString SELECT 1 FROM CAMEL_MESSAGEPROCESSED WHERE 1 = 0 This query is used to figure out whether the table already exists or not. It must throw an exception to indicate the table doesn't exist. createString CREATE TABLE CAMEL_MESSAGEPROCESSED (processorName VARCHAR(255), messageId VARCHAR(100), createdAt TIMESTAMP) The statement which is used to create the table. queryString SELECT COUNT(*) FROM CAMEL_MESSAGEPROCESSED WHERE processorName = ? AND messageId = ? The query which is used to figure out whether the message already exists in the repository (the result is not equals to '0'). It takes two parameters. This first one is the processor name ( String ) and the second one is the message id ( String ). insertString INSERT INTO CAMEL_MESSAGEPROCESSED (processorName, messageId, createdAt) VALUES (?, ?, ?) The statement which is used to add the entry into the table. It takes three parameter. The first one is the processor name ( String ), the second one is the message id ( String ) and the third one is the timestamp ( java.sql.Timestamp ) when this entry was added to the repository. deleteString DELETE FROM CAMEL_MESSAGEPROCESSED WHERE processorName = ? AND messageId = ? The statement which is used to delete the entry from the database. It takes two parameter. 
The first one is the processor name ( String ) and the second one is the message id ( String ). 326.14. Using the JDBC-based aggregation repository Available as of Camel 2.6 INFO: Using JdbcAggregationRepository in Camel 2.6 In Camel 2.6, the JdbcAggregationRepository is provided in the camel-jdbc-aggregator component. From Camel 2.7 onwards, the JdbcAggregationRepository is provided in the camel-sql component. JdbcAggregationRepository is an AggregationRepository which persists the aggregated messages on the fly. This ensures that you will not lose messages, as the default aggregator will use an in-memory-only AggregationRepository . The JdbcAggregationRepository , together with Camel, provides persistent support for the Aggregator. Only when an Exchange has been successfully processed will it be marked as complete, which happens when the confirm method is invoked on the AggregationRepository . This means that if the same Exchange fails again, it will be retried until it succeeds. You can use the maximumRedeliveries option to limit the maximum number of redelivery attempts for a given recovered Exchange. You must also set the deadLetterUri option so Camel knows where to send the Exchange when the maximumRedeliveries limit is hit. You can see some examples in the unit tests of camel-sql, for example this test . 326.14.1. Database To be operational, each aggregator uses two tables: the aggregation table and the completed table. By convention the completed table has the same name as the aggregation table suffixed with "_COMPLETED" . The name must be configured in the Spring bean with the RepositoryName property. In the following example aggregation will be used. The table structure definition of both tables is identical: in both cases a String value is used as the key ( id ) whereas a Blob contains the exchange serialized as a byte array. However one difference should be remembered: the id field does not hold the same content in the two tables. In the aggregation table id holds the correlation Id used by the component to aggregate the messages. In the completed table, id holds the id of the exchange stored in the corresponding blob field. Here is the SQL used to create the tables; just replace "aggregation" with your aggregator repository name. CREATE TABLE aggregation ( id varchar(255) NOT NULL, exchange blob NOT NULL, constraint aggregation_pk PRIMARY KEY (id) ); CREATE TABLE aggregation_completed ( id varchar(255) NOT NULL, exchange blob NOT NULL, constraint aggregation_completed_pk PRIMARY KEY (id) ); 326.15. Storing body and headers as text Available as of Camel 2.11 You can configure the JdbcAggregationRepository to store the message body and selected headers as String in separate columns.
For example to store the body, and the following two headers companyName and accountName use the following SQL: CREATE TABLE aggregationRepo3 ( id varchar(255) NOT NULL, exchange blob NOT NULL, body varchar(1000), companyName varchar(1000), accountName varchar(1000), constraint aggregationRepo3_pk PRIMARY KEY (id) ); CREATE TABLE aggregationRepo3_completed ( id varchar(255) NOT NULL, exchange blob NOT NULL, body varchar(1000), companyName varchar(1000), accountName varchar(1000), constraint aggregationRepo3_completed_pk PRIMARY KEY (id) ); And then configure the repository to enable this behavior as shown below: <bean id="repo3" class="org.apache.camel.processor.aggregate.jdbc.JdbcAggregationRepository"> <property name="repositoryName" value="aggregationRepo3"/> <property name="transactionManager" ref="txManager3"/> <property name="dataSource" ref="dataSource3"/> <!-- configure to store the message body and following headers as text in the repo --> <property name="storeBodyAsText" value="true"/> <property name="headersToStoreAsText"> <list> <value>companyName</value> <value>accountName</value> </list> </property> </bean> 326.15.1. Codec (Serialization) Because they can contain any type of payload, Exchanges are not serializable by design. It is converted into a byte array to be stored in a database BLOB field. All those conversions are handled by the JdbcCodec class. One detail of the code requires your attention: the ClassLoadingAwareObjectInputStream . The ClassLoadingAwareObjectInputStream has been reused from the Apache ActiveMQ project. It wraps an ObjectInputStream and use it with the ContextClassLoader rather than the currentThread one. The benefit is to be able to load classes exposed by other bundles. This allows the exchange body and headers to have custom types object references. 326.15.2. Transaction A Spring PlatformTransactionManager is required to orchestrate transactions. 326.15.2.1. Service (Start/Stop) The start method verifies the connection of the database and the presence of the required tables. If anything is wrong it will fail during starting. 326.15.3. Aggregator configuration Depending on the targeted environment, the aggregator might need some configuration. As you already know, each aggregator should have its own repository (with the corresponding pair of table created in the database) and a data source. If the default lobHandler is not adapted to your database system, it can be injected with the lobHandler property. Here is the declaration for Oracle: <bean id="lobHandler" class="org.springframework.jdbc.support.lob.OracleLobHandler"> <property name="nativeJdbcExtractor" ref="nativeJdbcExtractor"/> </bean> <bean id="nativeJdbcExtractor" class="org.springframework.jdbc.support.nativejdbc.CommonsDbcpNativeJdbcExtractor"/> <bean id="repo" class="org.apache.camel.processor.aggregate.jdbc.JdbcAggregationRepository"> <property name="transactionManager" ref="transactionManager"/> <property name="repositoryName" value="aggregation"/> <property name="dataSource" ref="dataSource"/> <!-- Only with Oracle, else use default --> <property name="lobHandler" ref="lobHandler"/> </bean> 326.15.4. Optimistic locking From Camel 2.12 onwards you can turn on optimisticLocking and use this JDBC based aggregation repository in a clustered environment where multiple Camel applications shared the same database for the aggregation repository. If there is a race condition there JDBC driver will throw a vendor specific exception which the JdbcAggregationRepository can react upon. 
To determine which exceptions thrown by the JDBC driver are regarded as optimistic locking errors, a mapper is needed. Therefore there is an org.apache.camel.processor.aggregate.jdbc.JdbcOptimisticLockingExceptionMapper that allows you to implement your custom logic if needed. There is a default implementation, org.apache.camel.processor.aggregate.jdbc.DefaultJdbcOptimisticLockingExceptionMapper , which works as follows: The following check is done: If the caused exception is an SQLException then the SQLState is checked to see whether it starts with 23. If the caused exception is a DataIntegrityViolationException If the caused exception class name has "ConstraintViolation" in its name. Optional checking for FQN class name matches if any class names have been configured You can in addition add FQN class names, and if any of the caused exceptions (or any nested exception) equals any of the FQN class names, then it is an optimistic locking error. Here is an example, where we define 2 extra FQN class names from the JDBC vendor: <bean id="repo" class="org.apache.camel.processor.aggregate.jdbc.JdbcAggregationRepository"> <property name="transactionManager" ref="transactionManager"/> <property name="repositoryName" value="aggregation"/> <property name="dataSource" ref="dataSource"/> <property name="jdbcOptimisticLockingExceptionMapper" ref="myExceptionMapper"/> </bean> <!-- use the default mapper with extraFQN class names from our JDBC driver --> <bean id="myExceptionMapper" class="org.apache.camel.processor.aggregate.jdbc.DefaultJdbcOptimisticLockingExceptionMapper"> <property name="classNames"> <util:set> <value>com.foo.sql.MyViolationExceptoion</value> <value>com.foo.sql.MyOtherViolationExceptoion</value> </util:set> </property> </bean> 326.16. Propagation behavior JdbcAggregationRepository uses two distinct transaction templates from Spring-TX. One is read-only and one is used for read-write operations. However, when using JdbcAggregationRepository within a route that itself uses <transacted /> and there is a common PlatformTransactionManager in use, there may be a need to configure the propagation behavior used by the transaction templates inside JdbcAggregationRepository . Here is a way to do it: <bean id="repo" class="org.apache.camel.processor.aggregate.jdbc.JdbcAggregationRepository"> <property name="propagationBehaviorName" value="PROPAGATION_NESTED" /> </bean> Propagation is specified by constants of the org.springframework.transaction.TransactionDefinition interface, so propagationBehaviorName is a convenient setter that allows you to use the names of the constants. 326.17. PostgreSQL case There is one special database that may cause problems with the optimistic locking used by JdbcAggregationRepository . PostgreSQL marks the connection as invalid in case of a data integrity violation exception (the one with SQLState 23505). This makes the connection effectively unusable within a nested transaction. Details can be found in this document . org.apache.camel.processor.aggregate.jdbc.PostgresAggregationRepository extends JdbcAggregationRepository and uses a special INSERT .. ON CONFLICT .. statement to provide optimistic locking behavior. This statement is (with the default aggregation table definition): INSERT INTO aggregation (id, exchange) values (?, ?) ON CONFLICT DO NOTHING Details can be found in the PostgreSQL documentation . When this clause is used, the java.sql.PreparedStatement.executeUpdate() call returns 0 instead of throwing an SQLException with SQLState=23505.
Further handling is exactly the same as with generic JdbcAggregationRepository , but without marking PostgreSQL connection as invalid. 326.18. Camel SQL Starter A starter module is available to spring-boot users. When using the starter, the DataSource can be directly configured using spring-boot properties. # Example for a mysql datasource spring.datasource.url=jdbc:mysql://localhost/test spring.datasource.username=dbuser spring.datasource.password=dbpass spring.datasource.driver-class-name=com.mysql.jdbc.Driver To use this feature, add the following dependencies to your spring boot pom.xml file: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-sql-starter</artifactId> <version>USD{camel.version}</version> <!-- use the same version as your Camel core version --> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-jdbc</artifactId> <version>USD{spring-boot-version}</version> </dependency> You should also include the specific database driver, if needed. | [
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-sql</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>",
"sql:select * from table where id=# order by name[?options]",
"sql:select * from table where id=:#myId order by name[?options]",
"sql:select * from table where id=:#USD{property.myId} order by name[?options]",
"sql:classpath:sql/myquery.sql[?options]",
"-- this is a comment select * from table where id = :#USD{property.myId} order by name",
"sql:query",
"from(\"jms:order.inbox\") .to(\"sql:select order_seq.nextval from dual?outputHeader=OrderId&outputType=SelectOne\") .to(\"jms:order.booking\");",
"from(\"direct:withSplitModel\") .to(\"sql:select * from projects order by id?outputType=StreamList&outputClass=org.apache.camel.component.sql.ProjectModel\") .to(\"log:stream\") .split(body()).streaming() .to(\"log:row\") .to(\"mock:result\") .end();",
"sql:select * from table where id=# order by name?dataSource=myDS",
"<jee:jndi-lookup id=\"myDS\" jndi-name=\"jdbc/myDataSource\"/>",
"from(\"direct:projects\") .setHeader(\"lic\", constant(\"ASF\")) .setHeader(\"min\", constant(123)) .to(\"sql:select * from projects where license = :#lic and id > :#min order by id\")",
"from(\"direct:projects\") .to(\"sql:select * from projects where license = :#lic and id > :#min order by id\")",
"from(\"direct:projects\") .setBody(constant(\"ASF\")) .setProperty(\"min\", constant(123)) .to(\"sql:select * from projects where license = :#USD{body} and id > :#USD{property.min} order by id\")",
"from(\"sql:select * from projects where id = :#USD{bean:myIdGenerator.nextId}\") .to(\"mock:result\");",
"public static class MyIdGenerator { private int id = 1; public int nextId() { return id++; }",
"-- this is a comment select * from projects where project in (:#in:names) order by id",
"from(\"direct:query\") .to(\"sql:classpath:sql/selectProjectsIn.sql\") .to(\"log:query\") .to(\"mock:query\");",
"// use an array template.requestBodyAndHeader(\"direct:query\", \"Hi there!\", \"names\", new String[]{\"Camel\", \"AMQ\"}); // use a list List<String> names = new ArrayList<String>(); names.add(\"Camel\"); names.add(\"AMQ\"); template.requestBodyAndHeader(\"direct:query\", \"Hi there!\", \"names\", names); // use a string separated values with comma template.requestBodyAndHeader(\"direct:query\", \"Hi there!\", \"names\", \"Camel,AMQ\");",
"from(\"direct:query\") .to(\"sql:select * from projects where project in (:#in:names) order by id\") .to(\"log:query\") .to(\"mock:query\");",
"CREATE TABLE CAMEL_MESSAGEPROCESSED ( processorName VARCHAR(255), messageId VARCHAR(100) )",
"CREATE TABLE CAMEL_MESSAGEPROCESSED ( processorName VARCHAR(255), messageId VARCHAR(100), createdAt TIMESTAMP )",
"CREATE TABLE aggregation ( id varchar(255) NOT NULL, exchange blob NOT NULL, constraint aggregation_pk PRIMARY KEY (id) ); CREATE TABLE aggregation_completed ( id varchar(255) NOT NULL, exchange blob NOT NULL, constraint aggregation_completed_pk PRIMARY KEY (id) );",
"CREATE TABLE aggregationRepo3 ( id varchar(255) NOT NULL, exchange blob NOT NULL, body varchar(1000), companyName varchar(1000), accountName varchar(1000), constraint aggregationRepo3_pk PRIMARY KEY (id) ); CREATE TABLE aggregationRepo3_completed ( id varchar(255) NOT NULL, exchange blob NOT NULL, body varchar(1000), companyName varchar(1000), accountName varchar(1000), constraint aggregationRepo3_completed_pk PRIMARY KEY (id) );",
"<bean id=\"repo3\" class=\"org.apache.camel.processor.aggregate.jdbc.JdbcAggregationRepository\"> <property name=\"repositoryName\" value=\"aggregationRepo3\"/> <property name=\"transactionManager\" ref=\"txManager3\"/> <property name=\"dataSource\" ref=\"dataSource3\"/> <!-- configure to store the message body and following headers as text in the repo --> <property name=\"storeBodyAsText\" value=\"true\"/> <property name=\"headersToStoreAsText\"> <list> <value>companyName</value> <value>accountName</value> </list> </property> </bean>",
"<bean id=\"lobHandler\" class=\"org.springframework.jdbc.support.lob.OracleLobHandler\"> <property name=\"nativeJdbcExtractor\" ref=\"nativeJdbcExtractor\"/> </bean> <bean id=\"nativeJdbcExtractor\" class=\"org.springframework.jdbc.support.nativejdbc.CommonsDbcpNativeJdbcExtractor\"/> <bean id=\"repo\" class=\"org.apache.camel.processor.aggregate.jdbc.JdbcAggregationRepository\"> <property name=\"transactionManager\" ref=\"transactionManager\"/> <property name=\"repositoryName\" value=\"aggregation\"/> <property name=\"dataSource\" ref=\"dataSource\"/> <!-- Only with Oracle, else use default --> <property name=\"lobHandler\" ref=\"lobHandler\"/> </bean>",
"<bean id=\"repo\" class=\"org.apache.camel.processor.aggregate.jdbc.JdbcAggregationRepository\"> <property name=\"transactionManager\" ref=\"transactionManager\"/> <property name=\"repositoryName\" value=\"aggregation\"/> <property name=\"dataSource\" ref=\"dataSource\"/> <property name=\"jdbcOptimisticLockingExceptionMapper\" ref=\"myExceptionMapper\"/> </bean> <!-- use the default mapper with extraFQN class names from our JDBC driver --> <bean id=\"myExceptionMapper\" class=\"org.apache.camel.processor.aggregate.jdbc.DefaultJdbcOptimisticLockingExceptionMapper\"> <property name=\"classNames\"> <util:set> <value>com.foo.sql.MyViolationExceptoion</value> <value>com.foo.sql.MyOtherViolationExceptoion</value> </util:set> </property> </bean>",
"<bean id=\"repo\" class=\"org.apache.camel.processor.aggregate.jdbc.JdbcAggregationRepository\"> <property name=\"propagationBehaviorName\" value=\"PROPAGATION_NESTED\" /> </bean>",
"INSERT INTO aggregation (id, exchange) values (?, ?) ON CONFLICT DO NOTHING",
"Example for a mysql datasource spring.datasource.url=jdbc:mysql://localhost/test spring.datasource.username=dbuser spring.datasource.password=dbpass spring.datasource.driver-class-name=com.mysql.jdbc.Driver",
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-sql-starter</artifactId> <version>USD{camel.version}</version> <!-- use the same version as your Camel core version --> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-jdbc</artifactId> <version>USD{spring-boot-version}</version> </dependency>"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/sql-component |
16.5. Troubleshooting virt-who | 16.5. Troubleshooting virt-who This section provides information on troubleshooting virt-who. 16.5.1. Why is the hypervisor status red? Scenario: On the server side, you deploy a guest on a hypervisor that does not have a subscription. After 24 hours, the hypervisor displays its status as red. To remedy this situation, you must either obtain a subscription for that hypervisor or permanently migrate the guest to a hypervisor that has a subscription. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_host_configuration_and_guest_installation_guide/troubleshooting-virt-who-hypervisor-isssues
Chapter 15. Infrastructure [config.openshift.io/v1] | Chapter 15. Infrastructure [config.openshift.io/v1] Description Infrastructure holds cluster-wide information about Infrastructure. The canonical name is cluster Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 15.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration status object status holds observed values from the cluster. They may not be overridden. 15.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description cloudConfig object cloudConfig is a reference to a ConfigMap containing the cloud provider configuration file. This configuration file is used to configure the Kubernetes cloud provider integration when using the built-in cloud provider integration or the external cloud controller manager. The namespace for this config map is openshift-config. cloudConfig should only be consumed by the kube_cloud_config controller. The controller is responsible for using the user configuration in the spec for various platforms and combining that with the user provided ConfigMap in this field to create a stitched kube cloud config. The controller generates a ConfigMap kube-cloud-config in openshift-config-managed namespace with the kube cloud config is stored in cloud.conf key. All the clients are expected to use the generated ConfigMap only. platformSpec object platformSpec holds desired information specific to the underlying infrastructure provider. 15.1.2. .spec.cloudConfig Description cloudConfig is a reference to a ConfigMap containing the cloud provider configuration file. This configuration file is used to configure the Kubernetes cloud provider integration when using the built-in cloud provider integration or the external cloud controller manager. The namespace for this config map is openshift-config. cloudConfig should only be consumed by the kube_cloud_config controller. The controller is responsible for using the user configuration in the spec for various platforms and combining that with the user provided ConfigMap in this field to create a stitched kube cloud config. The controller generates a ConfigMap kube-cloud-config in openshift-config-managed namespace with the kube cloud config is stored in cloud.conf key. All the clients are expected to use the generated ConfigMap only. Type object Property Type Description key string Key allows pointing to a specific key/value inside of the configmap. This is useful for logical file references. name string 15.1.3. .spec.platformSpec Description platformSpec holds desired information specific to the underlying infrastructure provider. 
Type object Property Type Description alibabaCloud object AlibabaCloud contains settings specific to the Alibaba Cloud infrastructure provider. aws object AWS contains settings specific to the Amazon Web Services infrastructure provider. azure object Azure contains settings specific to the Azure infrastructure provider. baremetal object BareMetal contains settings specific to the BareMetal platform. equinixMetal object EquinixMetal contains settings specific to the Equinix Metal infrastructure provider. external object ExternalPlatformType represents generic infrastructure provider. Platform-specific components should be supplemented separately. gcp object GCP contains settings specific to the Google Cloud Platform infrastructure provider. ibmcloud object IBMCloud contains settings specific to the IBMCloud infrastructure provider. kubevirt object Kubevirt contains settings specific to the kubevirt infrastructure provider. nutanix object Nutanix contains settings specific to the Nutanix infrastructure provider. openstack object OpenStack contains settings specific to the OpenStack infrastructure provider. ovirt object Ovirt contains settings specific to the oVirt infrastructure provider. powervs object PowerVS contains settings specific to the IBM Power Systems Virtual Servers infrastructure provider. type string type is the underlying infrastructure provider for the cluster. This value controls whether infrastructure automation such as service load balancers, dynamic volume provisioning, machine creation and deletion, and other integrations are enabled. If None, no infrastructure automation is enabled. Allowed values are "AWS", "Azure", "BareMetal", "GCP", "Libvirt", "OpenStack", "VSphere", "oVirt", "KubeVirt", "EquinixMetal", "PowerVS", "AlibabaCloud", "Nutanix" and "None". Individual components may not support all platforms, and must handle unrecognized platforms as None if they do not support that platform. vsphere object VSphere contains settings specific to the VSphere infrastructure provider. 15.1.4. .spec.platformSpec.alibabaCloud Description AlibabaCloud contains settings specific to the Alibaba Cloud infrastructure provider. Type object 15.1.5. .spec.platformSpec.aws Description AWS contains settings specific to the Amazon Web Services infrastructure provider. Type object Property Type Description serviceEndpoints array serviceEndpoints list contains custom endpoints which will override default service endpoint of AWS Services. There must be only one ServiceEndpoint for a service. serviceEndpoints[] object AWSServiceEndpoint store the configuration of a custom url to override existing defaults of AWS Services. 15.1.6. .spec.platformSpec.aws.serviceEndpoints Description serviceEndpoints list contains custom endpoints which will override default service endpoint of AWS Services. There must be only one ServiceEndpoint for a service. Type array 15.1.7. .spec.platformSpec.aws.serviceEndpoints[] Description AWSServiceEndpoint store the configuration of a custom url to override existing defaults of AWS Services. Type object Property Type Description name string name is the name of the AWS service. The list of all the service names can be found at https://docs.aws.amazon.com/general/latest/gr/aws-service-information.html This must be provided and cannot be empty. url string url is fully qualified URI with scheme https, that overrides the default generated endpoint for a client. This must be provided and cannot be empty. 15.1.8. 
.spec.platformSpec.azure Description Azure contains settings specific to the Azure infrastructure provider. Type object 15.1.9. .spec.platformSpec.baremetal Description BareMetal contains settings specific to the BareMetal platform. Type object Property Type Description apiServerInternalIPs array (string) apiServerInternalIPs are the IP addresses to contact the Kubernetes API server that can be used by components inside the cluster, like kubelets using the infrastructure rather than Kubernetes networking. These are the IPs for a self-hosted load balancer in front of the API servers. In dual stack clusters this list contains two IP addresses, one from IPv4 family and one from IPv6. In single stack clusters a single IP address is expected. When omitted, values from the status.apiServerInternalIPs will be used. Once set, the list cannot be completely removed (but its second entry can). ingressIPs array (string) ingressIPs are the external IPs which route to the default ingress controller. The IPs are suitable targets of a wildcard DNS record used to resolve default route host names. In dual stack clusters this list contains two IP addresses, one from IPv4 family and one from IPv6. In single stack clusters a single IP address is expected. When omitted, values from the status.ingressIPs will be used. Once set, the list cannot be completely removed (but its second entry can). machineNetworks array (string) machineNetworks are IP networks used to connect all the OpenShift cluster nodes. Each network is provided in the CIDR format and should be IPv4 or IPv6, for example "10.0.0.0/8" or "fd00::/8". 15.1.10. .spec.platformSpec.equinixMetal Description EquinixMetal contains settings specific to the Equinix Metal infrastructure provider. Type object 15.1.11. .spec.platformSpec.external Description ExternalPlatformType represents generic infrastructure provider. Platform-specific components should be supplemented separately. Type object Property Type Description platformName string PlatformName holds the arbitrary string representing the infrastructure provider name, expected to be set at the installation time. This field is solely for informational and reporting purposes and is not expected to be used for decision-making. 15.1.12. .spec.platformSpec.gcp Description GCP contains settings specific to the Google Cloud Platform infrastructure provider. Type object 15.1.13. .spec.platformSpec.ibmcloud Description IBMCloud contains settings specific to the IBMCloud infrastructure provider. Type object 15.1.14. .spec.platformSpec.kubevirt Description Kubevirt contains settings specific to the kubevirt infrastructure provider. Type object 15.1.15. .spec.platformSpec.nutanix Description Nutanix contains settings specific to the Nutanix infrastructure provider. Type object Required prismCentral prismElements Property Type Description failureDomains array failureDomains configures failure domains information for the Nutanix platform. When set, the failure domains defined here may be used to spread Machines across prism element clusters to improve fault tolerance of the cluster. failureDomains[] object NutanixFailureDomain configures failure domain information for the Nutanix platform. prismCentral object prismCentral holds the endpoint address and port to access the Nutanix Prism Central. When a cluster-wide proxy is installed, by default, this endpoint will be accessed via the proxy. Should you wish for communication with this endpoint not to be proxied, please add the endpoint to the proxy spec.noProxy list. 
prismElements array prismElements holds one or more endpoint address and port data to access the Nutanix Prism Elements (clusters) of the Nutanix Prism Central. Currently we only support one Prism Element (cluster) for an OpenShift cluster, where all the Nutanix resources (VMs, subnets, volumes, etc.) used in the OpenShift cluster are located. In the future, we may support Nutanix resources (VMs, etc.) spread over multiple Prism Elements (clusters) of the Prism Central. prismElements[] object NutanixPrismElementEndpoint holds the name and endpoint data for a Prism Element (cluster) 15.1.16. .spec.platformSpec.nutanix.failureDomains Description failureDomains configures failure domains information for the Nutanix platform. When set, the failure domains defined here may be used to spread Machines across prism element clusters to improve fault tolerance of the cluster. Type array 15.1.17. .spec.platformSpec.nutanix.failureDomains[] Description NutanixFailureDomain configures failure domain information for the Nutanix platform. Type object Required cluster name subnets Property Type Description cluster object cluster is to identify the cluster (the Prism Element under management of the Prism Central), in which the Machine's VM will be created. The cluster identifier (uuid or name) can be obtained from the Prism Central console or using the prism_central API. name string name defines the unique name of a failure domain. Name is required and must be at most 64 characters in length. It must consist of only lower case alphanumeric characters and hyphens (-). It must start and end with an alphanumeric character. This value is arbitrary and is used to identify the failure domain within the platform. subnets array subnets holds a list of identifiers (one or more) of the cluster's network subnets for the Machine's VM to connect to. The subnet identifiers (uuid or name) can be obtained from the Prism Central console or using the prism_central API. subnets[] object NutanixResourceIdentifier holds the identity of a Nutanix PC resource (cluster, image, subnet, etc.) 15.1.18. .spec.platformSpec.nutanix.failureDomains[].cluster Description cluster is to identify the cluster (the Prism Element under management of the Prism Central), in which the Machine's VM will be created. The cluster identifier (uuid or name) can be obtained from the Prism Central console or using the prism_central API. Type object Required type Property Type Description name string name is the resource name in the PC. It cannot be empty if the type is Name. type string type is the identifier type to use for this resource. uuid string uuid is the UUID of the resource in the PC. It cannot be empty if the type is UUID. 15.1.19. .spec.platformSpec.nutanix.failureDomains[].subnets Description subnets holds a list of identifiers (one or more) of the cluster's network subnets for the Machine's VM to connect to. The subnet identifiers (uuid or name) can be obtained from the Prism Central console or using the prism_central API. Type array 15.1.20. .spec.platformSpec.nutanix.failureDomains[].subnets[] Description NutanixResourceIdentifier holds the identity of a Nutanix PC resource (cluster, image, subnet, etc.) Type object Required type Property Type Description name string name is the resource name in the PC. It cannot be empty if the type is Name. type string type is the identifier type to use for this resource. uuid string uuid is the UUID of the resource in the PC. It cannot be empty if the type is UUID. 15.1.21. 
.spec.platformSpec.nutanix.prismCentral Description prismCentral holds the endpoint address and port to access the Nutanix Prism Central. When a cluster-wide proxy is installed, by default, this endpoint will be accessed via the proxy. Should you wish for communication with this endpoint not to be proxied, please add the endpoint to the proxy spec.noProxy list. Type object Required address port Property Type Description address string address is the endpoint address (DNS name or IP address) of the Nutanix Prism Central or Element (cluster) port integer port is the port number to access the Nutanix Prism Central or Element (cluster) 15.1.22. .spec.platformSpec.nutanix.prismElements Description prismElements holds one or more endpoint address and port data to access the Nutanix Prism Elements (clusters) of the Nutanix Prism Central. Currently we only support one Prism Element (cluster) for an OpenShift cluster, where all the Nutanix resources (VMs, subnets, volumes, etc.) used in the OpenShift cluster are located. In the future, we may support Nutanix resources (VMs, etc.) spread over multiple Prism Elements (clusters) of the Prism Central. Type array 15.1.23. .spec.platformSpec.nutanix.prismElements[] Description NutanixPrismElementEndpoint holds the name and endpoint data for a Prism Element (cluster) Type object Required endpoint name Property Type Description endpoint object endpoint holds the endpoint address and port data of the Prism Element (cluster). When a cluster-wide proxy is installed, by default, this endpoint will be accessed via the proxy. Should you wish for communication with this endpoint not to be proxied, please add the endpoint to the proxy spec.noProxy list. name string name is the name of the Prism Element (cluster). This value will correspond with the cluster field configured on other resources (eg Machines, PVCs, etc). 15.1.24. .spec.platformSpec.nutanix.prismElements[].endpoint Description endpoint holds the endpoint address and port data of the Prism Element (cluster). When a cluster-wide proxy is installed, by default, this endpoint will be accessed via the proxy. Should you wish for communication with this endpoint not to be proxied, please add the endpoint to the proxy spec.noProxy list. Type object Required address port Property Type Description address string address is the endpoint address (DNS name or IP address) of the Nutanix Prism Central or Element (cluster) port integer port is the port number to access the Nutanix Prism Central or Element (cluster) 15.1.25. .spec.platformSpec.openstack Description OpenStack contains settings specific to the OpenStack infrastructure provider. Type object Property Type Description apiServerInternalIPs array (string) apiServerInternalIPs are the IP addresses to contact the Kubernetes API server that can be used by components inside the cluster, like kubelets using the infrastructure rather than Kubernetes networking. These are the IPs for a self-hosted load balancer in front of the API servers. In dual stack clusters this list contains two IP addresses, one from IPv4 family and one from IPv6. In single stack clusters a single IP address is expected. When omitted, values from the status.apiServerInternalIPs will be used. Once set, the list cannot be completely removed (but its second entry can). ingressIPs array (string) ingressIPs are the external IPs which route to the default ingress controller. The IPs are suitable targets of a wildcard DNS record used to resolve default route host names. 
In dual stack clusters this list contains two IP addresses, one from IPv4 family and one from IPv6. In single stack clusters a single IP address is expected. When omitted, values from the status.ingressIPs will be used. Once set, the list cannot be completely removed (but its second entry can). machineNetworks array (string) machineNetworks are IP networks used to connect all the OpenShift cluster nodes. Each network is provided in the CIDR format and should be IPv4 or IPv6, for example "10.0.0.0/8" or "fd00::/8". 15.1.26. .spec.platformSpec.ovirt Description Ovirt contains settings specific to the oVirt infrastructure provider. Type object 15.1.27. .spec.platformSpec.powervs Description PowerVS contains settings specific to the IBM Power Systems Virtual Servers infrastructure provider. Type object Property Type Description serviceEndpoints array serviceEndpoints is a list of custom endpoints which will override the default service endpoints of a Power VS service. serviceEndpoints[] object PowervsServiceEndpoint stores the configuration of a custom url to override existing defaults of PowerVS Services. 15.1.28. .spec.platformSpec.powervs.serviceEndpoints Description serviceEndpoints is a list of custom endpoints which will override the default service endpoints of a Power VS service. Type array 15.1.29. .spec.platformSpec.powervs.serviceEndpoints[] Description PowervsServiceEndpoint stores the configuration of a custom url to override existing defaults of PowerVS Services. Type object Required name url Property Type Description name string name is the name of the Power VS service. Few of the services are IAM - https://cloud.ibm.com/apidocs/iam-identity-token-api ResourceController - https://cloud.ibm.com/apidocs/resource-controller/resource-controller Power Cloud - https://cloud.ibm.com/apidocs/power-cloud url string url is fully qualified URI with scheme https, that overrides the default generated endpoint for a client. This must be provided and cannot be empty. 15.1.30. .spec.platformSpec.vsphere Description VSphere contains settings specific to the VSphere infrastructure provider. Type object Property Type Description apiServerInternalIPs array (string) apiServerInternalIPs are the IP addresses to contact the Kubernetes API server that can be used by components inside the cluster, like kubelets using the infrastructure rather than Kubernetes networking. These are the IPs for a self-hosted load balancer in front of the API servers. In dual stack clusters this list contains two IP addresses, one from IPv4 family and one from IPv6. In single stack clusters a single IP address is expected. When omitted, values from the status.apiServerInternalIPs will be used. Once set, the list cannot be completely removed (but its second entry can). failureDomains array failureDomains contains the definition of region, zone and the vCenter topology. If this is omitted failure domains (regions and zones) will not be used. failureDomains[] object VSpherePlatformFailureDomainSpec holds the region and zone failure domain and the vCenter topology of that failure domain. ingressIPs array (string) ingressIPs are the external IPs which route to the default ingress controller. The IPs are suitable targets of a wildcard DNS record used to resolve default route host names. In dual stack clusters this list contains two IP addresses, one from IPv4 family and one from IPv6. In single stack clusters a single IP address is expected. When omitted, values from the status.ingressIPs will be used. 
Once set, the list cannot be completely removed (but its second entry can). machineNetworks array (string) machineNetworks are IP networks used to connect all the OpenShift cluster nodes. Each network is provided in the CIDR format and should be IPv4 or IPv6, for example "10.0.0.0/8" or "fd00::/8". nodeNetworking object nodeNetworking contains the definition of internal and external network constraints for assigning the node's networking. If this field is omitted, networking defaults to the legacy address selection behavior which is to only support a single address and return the first one found. vcenters array vcenters holds the connection details for services to communicate with vCenter. Currently, only a single vCenter is supported. --- vcenters[] object VSpherePlatformVCenterSpec stores the vCenter connection fields. This is used by the vSphere CCM. 15.1.31. .spec.platformSpec.vsphere.failureDomains Description failureDomains contains the definition of region, zone and the vCenter topology. If this is omitted failure domains (regions and zones) will not be used. Type array 15.1.32. .spec.platformSpec.vsphere.failureDomains[] Description VSpherePlatformFailureDomainSpec holds the region and zone failure domain and the vCenter topology of that failure domain. Type object Required name region server topology zone Property Type Description name string name defines the arbitrary but unique name of a failure domain. region string region defines the name of a region tag that will be attached to a vCenter datacenter. The tag category in vCenter must be named openshift-region. server string server is the fully-qualified domain name or the IP address of the vCenter server. --- topology object Topology describes a given failure domain using vSphere constructs zone string zone defines the name of a zone tag that will be attached to a vCenter cluster. The tag category in vCenter must be named openshift-zone. 15.1.33. .spec.platformSpec.vsphere.failureDomains[].topology Description Topology describes a given failure domain using vSphere constructs Type object Required computeCluster datacenter datastore networks Property Type Description computeCluster string computeCluster the absolute path of the vCenter cluster in which virtual machine will be located. The absolute path is of the form /<datacenter>/host/<cluster>. The maximum length of the path is 2048 characters. datacenter string datacenter is the name of vCenter datacenter in which virtual machines will be located. The maximum length of the datacenter name is 80 characters. datastore string datastore is the absolute path of the datastore in which the virtual machine is located. The absolute path is of the form /<datacenter>/datastore/<datastore> The maximum length of the path is 2048 characters. folder string folder is the absolute path of the folder where virtual machines are located. The absolute path is of the form /<datacenter>/vm/<folder>. The maximum length of the path is 2048 characters. networks array (string) networks is the list of port group network names within this failure domain. Currently, we only support a single interface per RHCOS virtual machine. The available networks (port groups) can be listed using govc ls 'network/*' The single interface should be the absolute path of the form /<datacenter>/network/<portgroup>. resourcePool string resourcePool is the absolute path of the resource pool where virtual machines will be created. The absolute path is of the form /<datacenter>/host/<cluster>/Resources/<resourcepool>. 
The maximum length of the path is 2048 characters. template string template is the full inventory path of the virtual machine or template that will be cloned when creating new machines in this failure domain. The maximum length of the path is 2048 characters. When omitted, the template will be calculated by the control plane machineset operator based on the region and zone defined in VSpherePlatformFailureDomainSpec. For example, for zone=zonea, region=region1, and infrastructure name=test, the template path would be calculated as /<datacenter>/vm/test-rhcos-region1-zonea. 15.1.34. .spec.platformSpec.vsphere.nodeNetworking Description nodeNetworking contains the definition of internal and external network constraints for assigning the node's networking. If this field is omitted, networking defaults to the legacy address selection behavior which is to only support a single address and return the first one found. Type object Property Type Description external object external represents the network configuration of the node that is externally routable. internal object internal represents the network configuration of the node that is routable only within the cluster. 15.1.35. .spec.platformSpec.vsphere.nodeNetworking.external Description external represents the network configuration of the node that is externally routable. Type object Property Type Description excludeNetworkSubnetCidr array (string) excludeNetworkSubnetCidr IP addresses in subnet ranges will be excluded when selecting the IP address from the VirtualMachine's VM for use in the status.addresses fields. --- network string network VirtualMachine's VM Network names that will be used to when searching for status.addresses fields. Note that if internal.networkSubnetCIDR and external.networkSubnetCIDR are not set, then the vNIC associated to this network must only have a single IP address assigned to it. The available networks (port groups) can be listed using govc ls 'network/*' networkSubnetCidr array (string) networkSubnetCidr IP address on VirtualMachine's network interfaces included in the fields' CIDRs that will be used in respective status.addresses fields. --- 15.1.36. .spec.platformSpec.vsphere.nodeNetworking.internal Description internal represents the network configuration of the node that is routable only within the cluster. Type object Property Type Description excludeNetworkSubnetCidr array (string) excludeNetworkSubnetCidr IP addresses in subnet ranges will be excluded when selecting the IP address from the VirtualMachine's VM for use in the status.addresses fields. --- network string network VirtualMachine's VM Network names that will be used to when searching for status.addresses fields. Note that if internal.networkSubnetCIDR and external.networkSubnetCIDR are not set, then the vNIC associated to this network must only have a single IP address assigned to it. The available networks (port groups) can be listed using govc ls 'network/*' networkSubnetCidr array (string) networkSubnetCidr IP address on VirtualMachine's network interfaces included in the fields' CIDRs that will be used in respective status.addresses fields. --- 15.1.37. .spec.platformSpec.vsphere.vcenters Description vcenters holds the connection details for services to communicate with vCenter. Currently, only a single vCenter is supported. --- Type array 15.1.38. .spec.platformSpec.vsphere.vcenters[] Description VSpherePlatformVCenterSpec stores the vCenter connection fields. This is used by the vSphere CCM. 
Type object Required datacenters server Property Type Description datacenters array (string) The vCenter Datacenters in which the RHCOS vm guests are located. This field will be used by the Cloud Controller Manager. Each datacenter listed here should be used within a topology. port integer port is the TCP port that will be used to communicate to the vCenter endpoint. When omitted, this means the user has no opinion and it is up to the platform to choose a sensible default, which is subject to change over time. server string server is the fully-qualified domain name or the IP address of the vCenter server. --- 15.1.39. .status Description status holds observed values from the cluster. They may not be overridden. Type object Property Type Description apiServerInternalURI string apiServerInternalURL is a valid URI with scheme 'https', address and optionally a port (defaulting to 443). apiServerInternalURL can be used by components like kubelets, to contact the Kubernetes API server using the infrastructure provider rather than Kubernetes networking. apiServerURL string apiServerURL is a valid URI with scheme 'https', address and optionally a port (defaulting to 443). apiServerURL can be used by components like the web console to tell users where to find the Kubernetes API. controlPlaneTopology string controlPlaneTopology expresses the expectations for operands that normally run on control nodes. The default is 'HighlyAvailable', which represents the behavior operators have in a "normal" cluster. The 'SingleReplica' mode will be used in single-node deployments and the operators should not configure the operand for highly-available operation The 'External' mode indicates that the control plane is hosted externally to the cluster and that its components are not visible within the cluster. cpuPartitioning string cpuPartitioning expresses if CPU partitioning is a currently enabled feature in the cluster. CPU Partitioning means that this cluster can support partitioning workloads to specific CPU Sets. Valid values are "None" and "AllNodes". When omitted, the default value is "None". The default value of "None" indicates that no nodes will be setup with CPU partitioning. The "AllNodes" value indicates that all nodes have been setup with CPU partitioning, and can then be further configured via the PerformanceProfile API. etcdDiscoveryDomain string etcdDiscoveryDomain is the domain used to fetch the SRV records for discovering etcd servers and clients. For more info: https://github.com/etcd-io/etcd/blob/329be66e8b3f9e2e6af83c123ff89297e49ebd15/Documentation/op-guide/clustering.md#dns-discovery deprecated: as of 4.7, this field is no longer set or honored. It will be removed in a future release. infrastructureName string infrastructureName uniquely identifies a cluster with a human friendly name. Once set it should not be changed. Must be of max length 27 and must have only alphanumeric or hyphen characters. infrastructureTopology string infrastructureTopology expresses the expectations for infrastructure services that do not run on control plane nodes, usually indicated by a node selector for a role value other than master . The default is 'HighlyAvailable', which represents the behavior operators have in a "normal" cluster. The 'SingleReplica' mode will be used in single-node deployments and the operators should not configure the operand for highly-available operation NOTE: External topology mode is not applicable for this field. 
platform string platform is the underlying infrastructure provider for the cluster. Deprecated: Use platformStatus.type instead. platformStatus object platformStatus holds status information specific to the underlying infrastructure provider. 15.1.40. .status.platformStatus Description platformStatus holds status information specific to the underlying infrastructure provider. Type object Property Type Description alibabaCloud object AlibabaCloud contains settings specific to the Alibaba Cloud infrastructure provider. aws object AWS contains settings specific to the Amazon Web Services infrastructure provider. azure object Azure contains settings specific to the Azure infrastructure provider. baremetal object BareMetal contains settings specific to the BareMetal platform. equinixMetal object EquinixMetal contains settings specific to the Equinix Metal infrastructure provider. external object External contains settings specific to the generic External infrastructure provider. gcp object GCP contains settings specific to the Google Cloud Platform infrastructure provider. ibmcloud object IBMCloud contains settings specific to the IBMCloud infrastructure provider. kubevirt object Kubevirt contains settings specific to the kubevirt infrastructure provider. nutanix object Nutanix contains settings specific to the Nutanix infrastructure provider. openstack object OpenStack contains settings specific to the OpenStack infrastructure provider. ovirt object Ovirt contains settings specific to the oVirt infrastructure provider. powervs object PowerVS contains settings specific to the Power Systems Virtual Servers infrastructure provider. type string type is the underlying infrastructure provider for the cluster. This value controls whether infrastructure automation such as service load balancers, dynamic volume provisioning, machine creation and deletion, and other integrations are enabled. If None, no infrastructure automation is enabled. Allowed values are "AWS", "Azure", "BareMetal", "GCP", "Libvirt", "OpenStack", "VSphere", "oVirt", "EquinixMetal", "PowerVS", "AlibabaCloud", "Nutanix" and "None". Individual components may not support all platforms, and must handle unrecognized platforms as None if they do not support that platform. This value will be synced with to the status.platform and status.platformStatus.type . Currently this value cannot be changed once set. vsphere object VSphere contains settings specific to the VSphere infrastructure provider. 15.1.41. .status.platformStatus.alibabaCloud Description AlibabaCloud contains settings specific to the Alibaba Cloud infrastructure provider. Type object Required region Property Type Description region string region specifies the region for Alibaba Cloud resources created for the cluster. resourceGroupID string resourceGroupID is the ID of the resource group for the cluster. resourceTags array resourceTags is a list of additional tags to apply to Alibaba Cloud resources created for the cluster. resourceTags[] object AlibabaCloudResourceTag is the set of tags to add to apply to resources. 15.1.42. .status.platformStatus.alibabaCloud.resourceTags Description resourceTags is a list of additional tags to apply to Alibaba Cloud resources created for the cluster. Type array 15.1.43. .status.platformStatus.alibabaCloud.resourceTags[] Description AlibabaCloudResourceTag is the set of tags to add to apply to resources. Type object Required key value Property Type Description key string key is the key of the tag. value string value is the value of the tag. 
15.1.44. .status.platformStatus.aws Description AWS contains settings specific to the Amazon Web Services infrastructure provider. Type object Property Type Description region string region holds the default AWS region for new AWS resources created by the cluster. resourceTags array resourceTags is a list of additional tags to apply to AWS resources created for the cluster. See https://docs.aws.amazon.com/general/latest/gr/aws_tagging.html for information on tagging AWS resources. AWS supports a maximum of 50 tags per resource. OpenShift reserves 25 tags for its use, leaving 25 tags available for the user. resourceTags[] object AWSResourceTag is a tag to apply to AWS resources created for the cluster. serviceEndpoints array ServiceEndpoints list contains custom endpoints which will override default service endpoint of AWS Services. There must be only one ServiceEndpoint for a service. serviceEndpoints[] object AWSServiceEndpoint store the configuration of a custom url to override existing defaults of AWS Services. 15.1.45. .status.platformStatus.aws.resourceTags Description resourceTags is a list of additional tags to apply to AWS resources created for the cluster. See https://docs.aws.amazon.com/general/latest/gr/aws_tagging.html for information on tagging AWS resources. AWS supports a maximum of 50 tags per resource. OpenShift reserves 25 tags for its use, leaving 25 tags available for the user. Type array 15.1.46. .status.platformStatus.aws.resourceTags[] Description AWSResourceTag is a tag to apply to AWS resources created for the cluster. Type object Required key value Property Type Description key string key is the key of the tag value string value is the value of the tag. Some AWS services do not support empty values. Since tags are added to resources in many services, the length of the tag value must meet the requirements of all services. 15.1.47. .status.platformStatus.aws.serviceEndpoints Description ServiceEndpoints list contains custom endpoints which will override default service endpoint of AWS Services. There must be only one ServiceEndpoint for a service. Type array 15.1.48. .status.platformStatus.aws.serviceEndpoints[] Description AWSServiceEndpoint store the configuration of a custom url to override existing defaults of AWS Services. Type object Property Type Description name string name is the name of the AWS service. The list of all the service names can be found at https://docs.aws.amazon.com/general/latest/gr/aws-service-information.html This must be provided and cannot be empty. url string url is fully qualified URI with scheme https, that overrides the default generated endpoint for a client. This must be provided and cannot be empty. 15.1.49. .status.platformStatus.azure Description Azure contains settings specific to the Azure infrastructure provider. Type object Property Type Description armEndpoint string armEndpoint specifies a URL to use for resource management in non-sovereign clouds such as Azure Stack. cloudName string cloudName is the name of the Azure cloud environment which can be used to configure the Azure SDK with the appropriate Azure API endpoints. If empty, the value is equal to AzurePublicCloud . networkResourceGroupName string networkResourceGroupName is the Resource Group for network resources like the Virtual Network and Subnets used by the cluster. If empty, the value is same as ResourceGroupName. resourceGroupName string resourceGroupName is the Resource Group for new Azure resources created for the cluster.
resourceTags array resourceTags is a list of additional tags to apply to Azure resources created for the cluster. See https://docs.microsoft.com/en-us/rest/api/resources/tags for information on tagging Azure resources. Due to limitations on Automation, Content Delivery Network, DNS Azure resources, a maximum of 15 tags may be applied. OpenShift reserves 5 tags for internal use, allowing 10 tags for user configuration. resourceTags[] object AzureResourceTag is a tag to apply to Azure resources created for the cluster. 15.1.50. .status.platformStatus.azure.resourceTags Description resourceTags is a list of additional tags to apply to Azure resources created for the cluster. See https://docs.microsoft.com/en-us/rest/api/resources/tags for information on tagging Azure resources. Due to limitations on Automation, Content Delivery Network, DNS Azure resources, a maximum of 15 tags may be applied. OpenShift reserves 5 tags for internal use, allowing 10 tags for user configuration. Type array 15.1.51. .status.platformStatus.azure.resourceTags[] Description AzureResourceTag is a tag to apply to Azure resources created for the cluster. Type object Required key value Property Type Description key string key is the key part of the tag. A tag key can have a maximum of 128 characters and cannot be empty. Key must begin with a letter, end with a letter, number or underscore, and must contain only alphanumeric characters and the following special characters _ . - . value string value is the value part of the tag. A tag value can have a maximum of 256 characters and cannot be empty. Value must contain only alphanumeric characters and the following special characters _ + , - . / : ; < = > ? @ . 15.1.52. .status.platformStatus.baremetal Description BareMetal contains settings specific to the BareMetal platform. Type object Property Type Description apiServerInternalIP string apiServerInternalIP is an IP address to contact the Kubernetes API server that can be used by components inside the cluster, like kubelets using the infrastructure rather than Kubernetes networking. It is the IP that the Infrastructure.status.apiServerInternalURI points to. It is the IP for a self-hosted load balancer in front of the API servers. Deprecated: Use APIServerInternalIPs instead. apiServerInternalIPs array (string) apiServerInternalIPs are the IP addresses to contact the Kubernetes API server that can be used by components inside the cluster, like kubelets using the infrastructure rather than Kubernetes networking. These are the IPs for a self-hosted load balancer in front of the API servers. In dual stack clusters this list contains two IPs otherwise only one. ingressIP string ingressIP is an external IP which routes to the default ingress controller. The IP is a suitable target of a wildcard DNS record used to resolve default route host names. Deprecated: Use IngressIPs instead. ingressIPs array (string) ingressIPs are the external IPs which route to the default ingress controller. The IPs are suitable targets of a wildcard DNS record used to resolve default route host names. In dual stack clusters this list contains two IPs otherwise only one. loadBalancer object loadBalancer defines how the load balancer used by the cluster is configured. machineNetworks array (string) machineNetworks are IP networks used to connect all the OpenShift cluster nodes. nodeDNSIP string nodeDNSIP is the IP address for the internal DNS used by the nodes. 
Unlike the one managed by the DNS operator, NodeDNSIP provides name resolution for the nodes themselves. There is no DNS-as-a-service for BareMetal deployments. In order to minimize necessary changes to the datacenter DNS, a DNS service is hosted as a static pod to serve those hostnames to the nodes in the cluster. 15.1.53. .status.platformStatus.baremetal.loadBalancer Description loadBalancer defines how the load balancer used by the cluster is configured. Type object Property Type Description type string type defines the type of load balancer used by the cluster on BareMetal platform which can be a user-managed or openshift-managed load balancer that is to be used for the OpenShift API and Ingress endpoints. When set to OpenShiftManagedDefault the static pods in charge of API and Ingress traffic load-balancing defined in the machine config operator will be deployed. When set to UserManaged these static pods will not be deployed and it is expected that the load balancer is configured out of band by the deployer. When omitted, this means no opinion and the platform is left to choose a reasonable default. The default value is OpenShiftManagedDefault. 15.1.54. .status.platformStatus.equinixMetal Description EquinixMetal contains settings specific to the Equinix Metal infrastructure provider. Type object Property Type Description apiServerInternalIP string apiServerInternalIP is an IP address to contact the Kubernetes API server that can be used by components inside the cluster, like kubelets using the infrastructure rather than Kubernetes networking. It is the IP that the Infrastructure.status.apiServerInternalURI points to. It is the IP for a self-hosted load balancer in front of the API servers. ingressIP string ingressIP is an external IP which routes to the default ingress controller. The IP is a suitable target of a wildcard DNS record used to resolve default route host names. 15.1.55. .status.platformStatus.external Description External contains settings specific to the generic External infrastructure provider. Type object Property Type Description cloudControllerManager object cloudControllerManager contains settings specific to the external Cloud Controller Manager (a.k.a. CCM or CPI). When omitted, new nodes will not be tainted and no extra initialization from the cloud controller manager is expected. 15.1.56. .status.platformStatus.external.cloudControllerManager Description cloudControllerManager contains settings specific to the external Cloud Controller Manager (a.k.a. CCM or CPI). When omitted, new nodes will not be tainted and no extra initialization from the cloud controller manager is expected. Type object Property Type Description state string state determines whether or not an external Cloud Controller Manager is expected to be installed within the cluster. https://kubernetes.io/docs/tasks/administer-cluster/running-cloud-controller/#running-cloud-controller-manager Valid values are "External", "None" and omitted. When set to "External", new nodes will be tainted as uninitialized when created, preventing them from running workloads until they are initialized by the cloud controller manager. When omitted or set to "None", new nodes will not be tainted and no extra initialization from the cloud controller manager is expected. 15.1.57. .status.platformStatus.gcp Description GCP contains settings specific to the Google Cloud Platform infrastructure provider.
Type object Property Type Description projectID string projectID is the Project ID for new GCP resources created for the cluster. region string region holds the region for new GCP resources created for the cluster. resourceLabels array resourceLabels is a list of additional labels to apply to GCP resources created for the cluster. See https://cloud.google.com/compute/docs/labeling-resources for information on labeling GCP resources. GCP supports a maximum of 64 labels per resource. OpenShift reserves 32 labels for internal use, allowing 32 labels for user configuration. resourceLabels[] object GCPResourceLabel is a label to apply to GCP resources created for the cluster. resourceTags array resourceTags is a list of additional tags to apply to GCP resources created for the cluster. See https://cloud.google.com/resource-manager/docs/tags/tags-overview for information on tagging GCP resources. GCP supports a maximum of 50 tags per resource. resourceTags[] object GCPResourceTag is a tag to apply to GCP resources created for the cluster. 15.1.58. .status.platformStatus.gcp.resourceLabels Description resourceLabels is a list of additional labels to apply to GCP resources created for the cluster. See https://cloud.google.com/compute/docs/labeling-resources for information on labeling GCP resources. GCP supports a maximum of 64 labels per resource. OpenShift reserves 32 labels for internal use, allowing 32 labels for user configuration. Type array 15.1.59. .status.platformStatus.gcp.resourceLabels[] Description GCPResourceLabel is a label to apply to GCP resources created for the cluster. Type object Required key value Property Type Description key string key is the key part of the label. A label key can have a maximum of 63 characters and cannot be empty. Label key must begin with a lowercase letter, and must contain only lowercase letters, numeric characters, and the following special characters _- . Label key must not have the reserved prefixes kubernetes-io and openshift-io . value string value is the value part of the label. A label value can have a maximum of 63 characters and cannot be empty. Value must contain only lowercase letters, numeric characters, and the following special characters _- . 15.1.60. .status.platformStatus.gcp.resourceTags Description resourceTags is a list of additional tags to apply to GCP resources created for the cluster. See https://cloud.google.com/resource-manager/docs/tags/tags-overview for information on tagging GCP resources. GCP supports a maximum of 50 tags per resource. Type array 15.1.61. .status.platformStatus.gcp.resourceTags[] Description GCPResourceTag is a tag to apply to GCP resources created for the cluster. Type object Required key parentID value Property Type Description key string key is the key part of the tag. A tag key can have a maximum of 63 characters and cannot be empty. Tag key must begin and end with an alphanumeric character, and must contain only uppercase, lowercase alphanumeric characters, and the following special characters ._- . parentID string parentID is the ID of the hierarchical resource where the tags are defined, e.g. at the Organization or the Project level. To find the Organization or Project ID refer to the following pages: https://cloud.google.com/resource-manager/docs/creating-managing-organization#retrieving_your_organization_id , https://cloud.google.com/resource-manager/docs/creating-managing-projects#identifying_projects . An OrganizationID must consist of decimal numbers, and cannot have leading zeroes.
A ProjectID must be 6 to 30 characters in length, can only contain lowercase letters, numbers, and hyphens, and must start with a letter, and cannot end with a hyphen. value string value is the value part of the tag. A tag value can have a maximum of 63 characters and cannot be empty. Tag value must begin and end with an alphanumeric character, and must contain only uppercase, lowercase alphanumeric characters, and the following special characters _-.@%=+:,*#&(){}[] and spaces. 15.1.62. .status.platformStatus.ibmcloud Description IBMCloud contains settings specific to the IBMCloud infrastructure provider. Type object Property Type Description cisInstanceCRN string CISInstanceCRN is the CRN of the Cloud Internet Services instance managing the DNS zone for the cluster's base domain dnsInstanceCRN string DNSInstanceCRN is the CRN of the DNS Services instance managing the DNS zone for the cluster's base domain location string Location is where the cluster has been deployed providerType string ProviderType indicates the type of cluster that was created resourceGroupName string ResourceGroupName is the Resource Group for new IBMCloud resources created for the cluster. serviceEndpoints array serviceEndpoints is a list of custom endpoints which will override the default service endpoints of an IBM Cloud service. These endpoints are consumed by components within the cluster to reach the respective IBM Cloud Services. serviceEndpoints[] object IBMCloudServiceEndpoint stores the configuration of a custom url to override existing defaults of IBM Cloud Services. 15.1.63. .status.platformStatus.ibmcloud.serviceEndpoints Description serviceEndpoints is a list of custom endpoints which will override the default service endpoints of an IBM Cloud service. These endpoints are consumed by components within the cluster to reach the respective IBM Cloud Services. Type array 15.1.64. .status.platformStatus.ibmcloud.serviceEndpoints[] Description IBMCloudServiceEndpoint stores the configuration of a custom url to override existing defaults of IBM Cloud Services. Type object Required name url Property Type Description name string name is the name of the IBM Cloud service. Possible values are: CIS, COS, COSConfig, DNSServices, GlobalCatalog, GlobalSearch, GlobalTagging, HyperProtect, IAM, KeyProtect, ResourceController, ResourceManager, or VPC. For example, the IBM Cloud Private IAM service could be configured with the service name of IAM and url of https://private.iam.cloud.ibm.com Whereas the IBM Cloud Private VPC service for US South (Dallas) could be configured with the service name of VPC and url of https://us.south.private.iaas.cloud.ibm.com url string url is fully qualified URI with scheme https, that overrides the default generated endpoint for a client. This must be provided and cannot be empty. 15.1.65. .status.platformStatus.kubevirt Description Kubevirt contains settings specific to the kubevirt infrastructure provider. Type object Property Type Description apiServerInternalIP string apiServerInternalIP is an IP address to contact the Kubernetes API server that can be used by components inside the cluster, like kubelets using the infrastructure rather than Kubernetes networking. It is the IP that the Infrastructure.status.apiServerInternalURI points to. It is the IP for a self-hosted load balancer in front of the API servers. ingressIP string ingressIP is an external IP which routes to the default ingress controller. 
The IP is a suitable target of a wildcard DNS record used to resolve default route host names. 15.1.66. .status.platformStatus.nutanix Description Nutanix contains settings specific to the Nutanix infrastructure provider. Type object Property Type Description apiServerInternalIP string apiServerInternalIP is an IP address to contact the Kubernetes API server that can be used by components inside the cluster, like kubelets using the infrastructure rather than Kubernetes networking. It is the IP that the Infrastructure.status.apiServerInternalURI points to. It is the IP for a self-hosted load balancer in front of the API servers. Deprecated: Use APIServerInternalIPs instead. apiServerInternalIPs array (string) apiServerInternalIPs are the IP addresses to contact the Kubernetes API server that can be used by components inside the cluster, like kubelets using the infrastructure rather than Kubernetes networking. These are the IPs for a self-hosted load balancer in front of the API servers. In dual stack clusters this list contains two IPs otherwise only one. ingressIP string ingressIP is an external IP which routes to the default ingress controller. The IP is a suitable target of a wildcard DNS record used to resolve default route host names. Deprecated: Use IngressIPs instead. ingressIPs array (string) ingressIPs are the external IPs which route to the default ingress controller. The IPs are suitable targets of a wildcard DNS record used to resolve default route host names. In dual stack clusters this list contains two IPs otherwise only one. loadBalancer object loadBalancer defines how the load balancer used by the cluster is configured. 15.1.67. .status.platformStatus.nutanix.loadBalancer Description loadBalancer defines how the load balancer used by the cluster is configured. Type object Property Type Description type string type defines the type of load balancer used by the cluster on Nutanix platform which can be a user-managed or openshift-managed load balancer that is to be used for the OpenShift API and Ingress endpoints. When set to OpenShiftManagedDefault the static pods in charge of API and Ingress traffic load-balancing defined in the machine config operator will be deployed. When set to UserManaged these static pods will not be deployed and it is expected that the load balancer is configured out of band by the deployer. When omitted, this means no opinion and the platform is left to choose a reasonable default. The default value is OpenShiftManagedDefault. 15.1.68. .status.platformStatus.openstack Description OpenStack contains settings specific to the OpenStack infrastructure provider. Type object Property Type Description apiServerInternalIP string apiServerInternalIP is an IP address to contact the Kubernetes API server that can be used by components inside the cluster, like kubelets using the infrastructure rather than Kubernetes networking. It is the IP that the Infrastructure.status.apiServerInternalURI points to. It is the IP for a self-hosted load balancer in front of the API servers. Deprecated: Use APIServerInternalIPs instead. apiServerInternalIPs array (string) apiServerInternalIPs are the IP addresses to contact the Kubernetes API server that can be used by components inside the cluster, like kubelets using the infrastructure rather than Kubernetes networking. These are the IPs for a self-hosted load balancer in front of the API servers. In dual stack clusters this list contains two IPs otherwise only one. 
cloudName string cloudName is the name of the desired OpenStack cloud in the client configuration file ( clouds.yaml ). ingressIP string ingressIP is an external IP which routes to the default ingress controller. The IP is a suitable target of a wildcard DNS record used to resolve default route host names. Deprecated: Use IngressIPs instead. ingressIPs array (string) ingressIPs are the external IPs which route to the default ingress controller. The IPs are suitable targets of a wildcard DNS record used to resolve default route host names. In dual stack clusters this list contains two IPs otherwise only one. loadBalancer object loadBalancer defines how the load balancer used by the cluster is configured. machineNetworks array (string) machineNetworks are IP networks used to connect all the OpenShift cluster nodes. nodeDNSIP string nodeDNSIP is the IP address for the internal DNS used by the nodes. Unlike the one managed by the DNS operator, NodeDNSIP provides name resolution for the nodes themselves. There is no DNS-as-a-service for OpenStack deployments. In order to minimize necessary changes to the datacenter DNS, a DNS service is hosted as a static pod to serve those hostnames to the nodes in the cluster. 15.1.69. .status.platformStatus.openstack.loadBalancer Description loadBalancer defines how the load balancer used by the cluster is configured. Type object Property Type Description type string type defines the type of load balancer used by the cluster on OpenStack platform which can be a user-managed or openshift-managed load balancer that is to be used for the OpenShift API and Ingress endpoints. When set to OpenShiftManagedDefault the static pods in charge of API and Ingress traffic load-balancing defined in the machine config operator will be deployed. When set to UserManaged these static pods will not be deployed and it is expected that the load balancer is configured out of band by the deployer. When omitted, this means no opinion and the platform is left to choose a reasonable default. The default value is OpenShiftManagedDefault. 15.1.70. .status.platformStatus.ovirt Description Ovirt contains settings specific to the oVirt infrastructure provider. Type object Property Type Description apiServerInternalIP string apiServerInternalIP is an IP address to contact the Kubernetes API server that can be used by components inside the cluster, like kubelets using the infrastructure rather than Kubernetes networking. It is the IP that the Infrastructure.status.apiServerInternalURI points to. It is the IP for a self-hosted load balancer in front of the API servers. Deprecated: Use APIServerInternalIPs instead. apiServerInternalIPs array (string) apiServerInternalIPs are the IP addresses to contact the Kubernetes API server that can be used by components inside the cluster, like kubelets using the infrastructure rather than Kubernetes networking. These are the IPs for a self-hosted load balancer in front of the API servers. In dual stack clusters this list contains two IPs otherwise only one. ingressIP string ingressIP is an external IP which routes to the default ingress controller. The IP is a suitable target of a wildcard DNS record used to resolve default route host names. Deprecated: Use IngressIPs instead. ingressIPs array (string) ingressIPs are the external IPs which route to the default ingress controller. The IPs are suitable targets of a wildcard DNS record used to resolve default route host names. In dual stack clusters this list contains two IPs otherwise only one. 
loadBalancer object loadBalancer defines how the load balancer used by the cluster is configured. nodeDNSIP string deprecated: as of 4.6, this field is no longer set or honored. It will be removed in a future release. 15.1.71. .status.platformStatus.ovirt.loadBalancer Description loadBalancer defines how the load balancer used by the cluster is configured. Type object Property Type Description type string type defines the type of load balancer used by the cluster on Ovirt platform which can be a user-managed or openshift-managed load balancer that is to be used for the OpenShift API and Ingress endpoints. When set to OpenShiftManagedDefault the static pods in charge of API and Ingress traffic load-balancing defined in the machine config operator will be deployed. When set to UserManaged these static pods will not be deployed and it is expected that the load balancer is configured out of band by the deployer. When omitted, this means no opinion and the platform is left to choose a reasonable default. The default value is OpenShiftManagedDefault. 15.1.72. .status.platformStatus.powervs Description PowerVS contains settings specific to the Power Systems Virtual Servers infrastructure provider. Type object Property Type Description cisInstanceCRN string CISInstanceCRN is the CRN of the Cloud Internet Services instance managing the DNS zone for the cluster's base domain dnsInstanceCRN string DNSInstanceCRN is the CRN of the DNS Services instance managing the DNS zone for the cluster's base domain region string region holds the default Power VS region for new Power VS resources created by the cluster. resourceGroup string resourceGroup is the resource group name for new IBMCloud resources created for a cluster. The resource group specified here will be used by cluster-image-registry-operator to set up a COS Instance in IBMCloud for the cluster registry. More about resource groups can be found here: https://cloud.ibm.com/docs/account?topic=account-rgs . When omitted, the image registry operator won't be able to configure storage, which results in the image registry cluster operator not being in an available state. serviceEndpoints array serviceEndpoints is a list of custom endpoints which will override the default service endpoints of a Power VS service. serviceEndpoints[] object PowervsServiceEndpoint stores the configuration of a custom url to override existing defaults of PowerVS Services. zone string zone holds the default zone for the new Power VS resources created by the cluster. Note: Currently only single-zone OCP clusters are supported 15.1.73. .status.platformStatus.powervs.serviceEndpoints Description serviceEndpoints is a list of custom endpoints which will override the default service endpoints of a Power VS service. Type array 15.1.74. .status.platformStatus.powervs.serviceEndpoints[] Description PowervsServiceEndpoint stores the configuration of a custom url to override existing defaults of PowerVS Services. Type object Required name url Property Type Description name string name is the name of the Power VS service. Few of the services are IAM - https://cloud.ibm.com/apidocs/iam-identity-token-api ResourceController - https://cloud.ibm.com/apidocs/resource-controller/resource-controller Power Cloud - https://cloud.ibm.com/apidocs/power-cloud url string url is fully qualified URI with scheme https, that overrides the default generated endpoint for a client. This must be provided and cannot be empty. 15.1.75. 
.status.platformStatus.vsphere Description VSphere contains settings specific to the VSphere infrastructure provider. Type object Property Type Description apiServerInternalIP string apiServerInternalIP is an IP address to contact the Kubernetes API server that can be used by components inside the cluster, like kubelets using the infrastructure rather than Kubernetes networking. It is the IP that the Infrastructure.status.apiServerInternalURI points to. It is the IP for a self-hosted load balancer in front of the API servers. Deprecated: Use APIServerInternalIPs instead. apiServerInternalIPs array (string) apiServerInternalIPs are the IP addresses to contact the Kubernetes API server that can be used by components inside the cluster, like kubelets using the infrastructure rather than Kubernetes networking. These are the IPs for a self-hosted load balancer in front of the API servers. In dual stack clusters this list contains two IPs otherwise only one. ingressIP string ingressIP is an external IP which routes to the default ingress controller. The IP is a suitable target of a wildcard DNS record used to resolve default route host names. Deprecated: Use IngressIPs instead. ingressIPs array (string) ingressIPs are the external IPs which route to the default ingress controller. The IPs are suitable targets of a wildcard DNS record used to resolve default route host names. In dual stack clusters this list contains two IPs otherwise only one. loadBalancer object loadBalancer defines how the load balancer used by the cluster is configured. machineNetworks array (string) machineNetworks are IP networks used to connect all the OpenShift cluster nodes. nodeDNSIP string nodeDNSIP is the IP address for the internal DNS used by the nodes. Unlike the one managed by the DNS operator, NodeDNSIP provides name resolution for the nodes themselves. There is no DNS-as-a-service for vSphere deployments. In order to minimize necessary changes to the datacenter DNS, a DNS service is hosted as a static pod to serve those hostnames to the nodes in the cluster. 15.1.76. .status.platformStatus.vsphere.loadBalancer Description loadBalancer defines how the load balancer used by the cluster is configured. Type object Property Type Description type string type defines the type of load balancer used by the cluster on VSphere platform which can be a user-managed or openshift-managed load balancer that is to be used for the OpenShift API and Ingress endpoints. When set to OpenShiftManagedDefault the static pods in charge of API and Ingress traffic load-balancing defined in the machine config operator will be deployed. When set to UserManaged these static pods will not be deployed and it is expected that the load balancer is configured out of band by the deployer. When omitted, this means no opinion and the platform is left to choose a reasonable default. The default value is OpenShiftManagedDefault. 15.2. 
API endpoints The following API endpoints are available: /apis/config.openshift.io/v1/infrastructures DELETE : delete collection of Infrastructure GET : list objects of kind Infrastructure POST : create an Infrastructure /apis/config.openshift.io/v1/infrastructures/{name} DELETE : delete an Infrastructure GET : read the specified Infrastructure PATCH : partially update the specified Infrastructure PUT : replace the specified Infrastructure /apis/config.openshift.io/v1/infrastructures/{name}/status GET : read status of the specified Infrastructure PATCH : partially update status of the specified Infrastructure PUT : replace status of the specified Infrastructure 15.2.1. /apis/config.openshift.io/v1/infrastructures HTTP method DELETE Description delete collection of Infrastructure Table 15.1. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Infrastructure Table 15.2. HTTP responses HTTP code Response body 200 - OK InfrastructureList schema 401 - Unauthorized Empty HTTP method POST Description create an Infrastructure Table 15.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 15.4. Body parameters Parameter Type Description body Infrastructure schema Table 15.5. HTTP responses HTTP code Response body 200 - OK Infrastructure schema 201 - Created Infrastructure schema 202 - Accepted Infrastructure schema 401 - Unauthorized Empty 15.2.2. /apis/config.openshift.io/v1/infrastructures/{name} Table 15.6. Global path parameters Parameter Type Description name string name of the Infrastructure HTTP method DELETE Description delete an Infrastructure Table 15.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 15.8. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Infrastructure Table 15.9.
HTTP responses HTTP code Response body 200 - OK Infrastructure schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Infrastructure Table 15.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 15.11. HTTP responses HTTP code Response body 200 - OK Infrastructure schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Infrastructure Table 15.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 15.13. Body parameters Parameter Type Description body Infrastructure schema Table 15.14. HTTP responses HTTP code Response body 200 - OK Infrastructure schema 201 - Created Infrastructure schema 401 - Unauthorized Empty 15.2.3. /apis/config.openshift.io/v1/infrastructures/{name}/status Table 15.15. Global path parameters Parameter Type Description name string name of the Infrastructure HTTP method GET Description read status of the specified Infrastructure Table 15.16.
HTTP responses HTTP code Response body 200 - OK Infrastructure schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Infrastructure Table 15.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 15.18. HTTP responses HTTP code Response body 200 - OK Infrastructure schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Infrastructure Table 15.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 15.20. Body parameters Parameter Type Description body Infrastructure schema Table 15.21. HTTP responses HTTP code Response body 200 - OK Infrastructure schema 201 - Created Infrastructure schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/config_apis/infrastructure-config-openshift-io-v1 |
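The read-only endpoints listed above are most easily exercised with the oc client rather than raw HTTP. A minimal sketch, assuming a cluster-admin kubeconfig and that the cluster-scoped Infrastructure object uses its usual singleton name cluster (an assumption, not stated in this reference):
# Read the full Infrastructure object, including the .status.platformStatus fields described above
oc get infrastructure.config.openshift.io cluster -o yaml
# Extract single status fields, for example the platform type or the GCP project ID
oc get infrastructure.config.openshift.io cluster -o jsonpath='{.status.platformStatus.type}{"\n"}'
oc get infrastructure.config.openshift.io cluster -o jsonpath='{.status.platformStatus.gcp.projectID}{"\n"}'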
24.4. Server Settings | 24.4. Server Settings The Server tab allows you to configure basic server settings. The default settings for these options are appropriate for most situations. Figure 24.10. Server Configuration The Lock File value corresponds to the LockFile directive. This directive sets the path to the lockfile used when the server is compiled with either USE_FCNTL_SERIALIZED_ACCEPT or USE_FLOCK_SERIALIZED_ACCEPT. It must be stored on the local disk. It should be left to the default value unless the logs directory is located on an NFS share. If this is the case, the default value should be changed to a location on the local disk and to a directory that is readable only by root. The PID File value corresponds to the PidFile directive. This directive sets the file in which the server records its process ID (pid). This file should only be readable by root. In most cases, it should be left to the default value. The Core Dump Directory value corresponds to the CoreDumpDirectory directive. The Apache HTTP Server tries to switch to this directory before executing a core dump. The default value is the ServerRoot . However, if the user that the server runs as can not write to this directory, the core dump can not be written. Change this value to a directory writable by the user the server runs as, if you want to write the core dumps to disk for debugging purposes. The User value corresponds to the User directive. It sets the userid used by the server to answer requests. This user's settings determine the server's access. Any files inaccessible to this user are also inaccessible to your website's visitors. The default for User is apache. The user should only have privileges so that it can access files which are supposed to be visible to the outside world. The user is also the owner of any CGI processes spawned by the server. The user should not be allowed to execute any code which is not intended to be in response to HTTP requests. Warning Unless you know exactly what you are doing, do not set the User directive to root. Using root as the User creates large security holes for your Web server. The parent httpd process first runs as root during normal operations, but is then immediately handed off to the apache user. The server must start as root because it needs to bind to a port below 1024. Ports below 1024 are reserved for system use, so they can not be used by anyone but root. Once the server has attached itself to its port, however, it hands the process off to the apache user before it accepts any connection requests. The Group value corresponds to the Group directive. The Group directive is similar to the User directive. Group sets the group under which the server answers requests. The default group is also apache. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/HTTPD_Configuration-Server_Settings |
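Because the Server tab ultimately writes ordinary Apache directives, the resulting values can be verified from a shell. A minimal sketch, assuming the stock Red Hat configuration path /etc/httpd/conf/httpd.conf and the standard apachectl and service tools:
# Show the current values of the directives discussed above
grep -E '^[[:space:]]*(LockFile|PidFile|CoreDumpDirectory|User|Group)' /etc/httpd/conf/httpd.conf
# Check the configuration syntax, then restart the server to apply any changes
apachectl configtest
service httpd restart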
Chapter 13. Configuring logging | Chapter 13. Configuring logging Red Hat build of Keycloak uses the JBoss Logging framework. The following is a high-level overview for the available log handlers: root console ( default ) file 13.1. Logging configuration Logging is done on a per-category basis in Red Hat build of Keycloak. You can configure logging for the root log level or for more specific categories such as org.hibernate or org.keycloak . This chapter describes how to configure logging. 13.1.1. Log levels The following table defines the available log levels. Level Description FATAL Critical failures with complete inability to serve any kind of request. ERROR A significant error or problem leading to the inability to process requests. WARN A non-critical error or problem that might not require immediate correction. INFO Red Hat build of Keycloak lifecycle events or important information. Low frequency. DEBUG More detailed information for debugging purposes, such as database logs. Higher frequency. TRACE Most detailed debugging information. Very high frequency. ALL Special level for all log messages. OFF Special level to turn logging off entirely (not recommended). 13.1.2. Configuring the root log level When no log level configuration exists for a more specific category logger, the enclosing category is used instead. When there is no enclosing category, the root logger level is used. To set the root log level, enter the following command: bin/kc.[sh|bat] start --log-level=<root-level> Use these guidelines for this command: For <root-level> , supply a level defined in the preceding table. The log level is case-insensitive. For example, you could either use DEBUG or debug . If you were to accidentally set the log level twice, the last occurrence in the list becomes the log level. For example, if you included the syntax --log-level="info,... ,DEBUG,... " , the root logger would be DEBUG . 13.1.3. Configuring category-specific log levels You can set different log levels for specific areas in Red Hat build of Keycloak. Use this command to provide a comma-separated list of categories for which you want a different log level: bin/kc.[sh|bat] start --log-level="<root-level>,<org.category1>:<org.category1-level>" A configuration that applies to a category also applies to its sub-categories unless you include a more specific matching sub-category. Example bin/kc.[sh|bat] start --log-level="INFO,org.hibernate:debug,org.hibernate.hql.internal.ast:info" This example sets the following log levels: Root log level for all loggers is set to INFO. The hibernate log level in general is set to debug. To keep SQL abstract syntax trees from creating verbose log output, the specific subcategory org.hibernate.hql.internal.ast is set to info. As a result, the SQL abstract syntax trees are omitted instead of appearing at the debug level. 13.2. Enabling log handlers To enable log handlers, enter the following command: bin/kc.[sh|bat] start --log="<handler1>,<handler2>" The available handlers are console and file . The more specific handler configuration mentioned below will only take effect when the handler is added to this comma-separated list. 13.3. Console log handler The console log handler is enabled by default, providing unstructured log messages for the console. 13.3.1. Configuring the console log format Red Hat build of Keycloak uses a pattern-based logging formatter that generates human-readable text logs by default. The logging format template for these lines can be applied at the root level. 
The default format template is: %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p [%c] (%t) %s%e%n The format string supports the symbols in the following table: Symbol Summary Description %% % Renders a simple % character. %c Category Renders the log category name. %d{xxx} Date Renders a date with the given date format string.String syntax defined by java.text.SimpleDateFormat %e Exception Renders a thrown exception. %h Hostname Renders the simple host name. %H Qualified host name Renders the fully qualified hostname, which may be the same as the simple host name, depending on the OS configuration. %i Process ID Renders the current process PID. %m Full Message Renders the log message and an exception, if thrown. %n Newline Renders the platform-specific line separator string. %N Process name Renders the name of the current process. %p Level Renders the log level of the message. %r Relative time Render the time in milliseconds since the start of the application log. %s Simple message Renders only the log message without exception trace. %t Thread name Renders the thread name. %t{id} Thread ID Render the thread ID. %z{<zone name>} Timezone Set the time zone of log output to <zone name>. %L Line number Render the line number of the log message. 13.3.2. Setting the logging format To set the logging format for a logged line, perform these steps: Build your desired format template using the preceding table. Enter the following command: bin/kc.[sh|bat] start --log-console-format="'<format>'" Note that you need to escape characters when invoking commands containing special shell characters such as ; using the CLI. Therefore, consider setting it in the configuration file instead. Example: Abbreviate the fully qualified category name bin/kc.[sh|bat] start --log-console-format="'%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p [%c{3.}] (%t) %s%e%n'" This example abbreviates the category name to three characters by setting [%c{3.}] in the template instead of the default [%c] . 13.3.3. Configuring JSON or plain console logging By default, the console log handler logs plain unstructured data to the console. To use structured JSON log output instead, enter the following command: bin/kc.[sh|bat] start --log-console-output=json Example Log Message {"timestamp":"2022-02-25T10:31:32.452+01:00","sequence":8442,"loggerClassName":"org.jboss.logging.Logger","loggerName":"io.quarkus","level":"INFO","message":"Keycloak 18.0.0-SNAPSHOT on JVM (powered by Quarkus 2.7.2.Final) started in 3.253s. Listening on: http://0.0.0.0:8080","threadName":"main","threadId":1,"mdc":{},"ndc":"","hostName":"host-name","processName":"QuarkusEntryPoint","processId":36946} When using JSON output, colors are disabled and the format settings set by --log-console-format will not apply. To use unstructured logging, enter the following command: bin/kc.[sh|bat] start --log-console-output=default Example Log Message: 2022-03-02 10:36:50,603 INFO [io.quarkus] (main) Keycloak 18.0.0-SNAPSHOT on JVM (powered by Quarkus 2.7.2.Final) started in 3.615s. Listening on: http://0.0.0.0:8080 13.3.4. Colors Colored console log output for unstructured logs is disabled by default. Colors may improve readability, but they can cause problems when shipping logs to external log aggregation systems. To enable or disable color-coded console log output, enter following command: bin/kc.[sh|bat] start --log-console-color=<false|true> 13.4. File logging As an alternative to logging to the console, you can use unstructured logging to a file. 13.4.1. 
Enable file logging Logging to a file is disabled by default. To enable it, enter the following command: bin/kc.[sh|bat] start --log="console,file" A log file named keycloak.log is created inside the data/log directory of your Red Hat build of Keycloak installation. 13.4.2. Configuring the location and name of the log file To change where the log file is created and the file name, perform these steps: Create a writable directory to store the log file. If the directory is not writable, Red Hat build of Keycloak will start correctly, but it will issue an error and no log file will be created. Enter this command: bin/kc.[sh|bat] start --log="console,file" --log-file=<path-to>/<your-file.log> 13.4.3. Configuring the file handler format To configure a different logging format for the file log handler, enter the following command: bin/kc.[sh|bat] start --log-file-format="<pattern>" See Section 13.3.1, "Configuring the console log format" for more information and a table of the available pattern configuration. 13.5. Relevant options Value log Enable one or more log handlers in a comma-separated list. CLI: --log Env: KC_LOG console , file log-console-color Enable or disable colors when logging to console. CLI: --log-console-color Env: KC_LOG_CONSOLE_COLOR true , false (default) log-console-format The format of unstructured console log entries. If the format has spaces in it, escape the value using "<format>". CLI: --log-console-format Env: KC_LOG_CONSOLE_FORMAT %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p [%c] (%t) %s%e%n (default) log-console-output Set the log output to JSON or default (plain) unstructured logging. CLI: --log-console-output Env: KC_LOG_CONSOLE_OUTPUT default (default), json log-file Set the log file path and filename. CLI: --log-file Env: KC_LOG_FILE data/log/keycloak.log (default) log-file-format Set a format specific to file log entries. CLI: --log-file-format Env: KC_LOG_FILE_FORMAT %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p [%c] (%t) %s%e%n (default) log-file-output Set the log output to JSON or default (plain) unstructured logging. CLI: --log-file-output Env: KC_LOG_FILE_OUTPUT default (default), json log-level The log level of the root category or a comma-separated list of individual categories and their levels. For the root category, you don't need to specify a category. CLI: --log-level Env: KC_LOG_LEVEL [info] (default) | [
"bin/kc.[sh|bat] start --log-level=<root-level>",
"bin/kc.[sh|bat] start --log-level=\"<root-level>,<org.category1>:<org.category1-level>\"",
"bin/kc.[sh|bat] start --log-level=\"INFO,org.hibernate:debug,org.hibernate.hql.internal.ast:info\"",
"bin/kc.[sh|bat] start --log=\"<handler1>,<handler2>\"",
"bin/kc.[sh|bat] start --log-console-format=\"'<format>'\"",
"bin/kc.[sh|bat] start --log-console-format=\"'%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p [%c{3.}] (%t) %s%e%n'\"",
"bin/kc.[sh|bat] start --log-console-output=json",
"{\"timestamp\":\"2022-02-25T10:31:32.452+01:00\",\"sequence\":8442,\"loggerClassName\":\"org.jboss.logging.Logger\",\"loggerName\":\"io.quarkus\",\"level\":\"INFO\",\"message\":\"Keycloak 18.0.0-SNAPSHOT on JVM (powered by Quarkus 2.7.2.Final) started in 3.253s. Listening on: http://0.0.0.0:8080\",\"threadName\":\"main\",\"threadId\":1,\"mdc\":{},\"ndc\":\"\",\"hostName\":\"host-name\",\"processName\":\"QuarkusEntryPoint\",\"processId\":36946}",
"bin/kc.[sh|bat] start --log-console-output=default",
"2022-03-02 10:36:50,603 INFO [io.quarkus] (main) Keycloak 18.0.0-SNAPSHOT on JVM (powered by Quarkus 2.7.2.Final) started in 3.615s. Listening on: http://0.0.0.0:8080",
"bin/kc.[sh|bat] start --log-console-color=<false|true>",
"bin/kc.[sh|bat] start --log=\"console,file\"",
"bin/kc.[sh|bat] start --log=\"console,file\" --log-file=<path-to>/<your-file.log>",
"bin/kc.[sh|bat] start --log-file-format=\"<pattern>\""
] | https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html/server_guide/logging- |
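The Red Hat build of Keycloak handler, output, file, and level options documented above can be combined in a single start invocation. A minimal sketch; the log file path and the org.keycloak.events category are illustrative values chosen for this example, not defaults required by the product:
bin/kc.sh start \
  --log="console,file" \
  --log-console-output=json \
  --log-file=/opt/keycloak/data/log/keycloak.log \
  --log-level="INFO,org.keycloak.events:debug"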
Chapter 7. Installation configuration parameters for the Agent-based Installer | Chapter 7. Installation configuration parameters for the Agent-based Installer Before you deploy an OpenShift Container Platform cluster using the Agent-based Installer, you provide parameters to customize your cluster and the platform that hosts it. When you create the install-config.yaml and agent-config.yaml files, you must provide values for the required parameters, and you can use the optional parameters to customize your cluster further. 7.1. Available installation configuration parameters The following tables specify the required and optional installation configuration parameters that you can set as part of the Agent-based installation process. These values are specified in the install-config.yaml file. Note These settings are used for installation only, and cannot be modified after installation. 7.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 7.1. Required parameters Parameter Description Values The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . When you do not provide metadata.name through either the install-config.yaml or agent-config.yaml files, for example when you use only ZTP manifests, the cluster name is set to agent-cluster . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . The configuration for the specific platform upon which to perform the installation: baremetal , external , none , or vsphere . Object Get a pull secret from Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 7.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. If you use the Red Hat OpenShift Networking OVN-Kubernetes network plugin, both IPv4 and IPv6 address families are supported. If you configure your cluster to use both IP address families, review the following requirements: Both IP families must use the same network interface for the default gateway. Both IP families must have the default gateway. You must specify IPv4 and IPv6 addresses in the same order for all network configuration parameters. For example, in the following configuration IPv4 addresses are listed before IPv6 addresses. 
networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112 Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 7.2. Network parameters Parameter Description Values The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. The Red Hat OpenShift Networking network plugin to install. OVNKubernetes . OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd01::/48 hostPrefix: 64 Required if you use networking.clusterNetwork . An IP address block. If you use the OVN-Kubernetes network plugin, you can specify IPv4 and IPv6 networks. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . The prefix length for an IPv6 block is between 0 and 128 . For example, 10.128.0.0/14 or fd01::/48 . The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. For an IPv4 network the default value is 23 . For an IPv6 network the default value is 64 . The default value is also the minimum value for IPv6. The IP address block for services. The default value is 172.30.0.0/16 . The OVN-Kubernetes network plugins supports only a single IP address block for the service network. If you use the OVN-Kubernetes network plugin, you can specify an IP address block for both of the IPv4 and IPv6 address families. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 - fd02::/112 The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power(R) Virtual Server. For libvirt, the default value is 192.168.126.0/24 . For IBM Power(R) Virtual Server, the default value is 192.168.0.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 or fd00::/48 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 7.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 7.3. Optional parameters Parameter Description Values A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. 
For more information, see the "Cluster capabilities" page in Installing . String array Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section. None or AllNodes . None is the default value. The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 , arm64 , ppc64le , and s390x . String Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use compute . The name of the machine pool. worker Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. baremetal , vsphere , or {} The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . The configuration for the machines that comprise the control plane. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 , arm64 , ppc64le , and s390x . String Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use controlPlane . The name of the machine pool. master Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. baremetal , vsphere , or {} The number of control plane machines to provision. Supported values are 3 , or 1 when deploying single-node OpenShift. 
The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Mint , Passthrough , Manual or an empty string ( "" ). [1] Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String Specify one or more repositories that may also contain the same images. Array of strings How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content. 7.1.4. Additional bare metal configuration parameters for the Agent-based Installer Additional bare metal installation configuration parameters for the Agent-based Installer are described in the following table: Note These fields are not used during the initial provisioning of the cluster, but they are available to use once the cluster has been installed. Configuring these fields at install time eliminates the need to set them as a Day 2 operation. Table 7.4. Additional bare metal parameters Parameter Description Values The IP address within the cluster where the provisioning services run. Defaults to the third IP address of the provisioning subnet. For example, 172.22.0.3 or 2620:52:0:1307::3 . IPv4 or IPv6 address. The provisioningNetwork configuration setting determines whether the cluster uses the provisioning network. If it does, the configuration setting also determines if the cluster manages the network. Managed : Default. 
Set this parameter to Managed to fully manage the provisioning network, including DHCP, TFTP, and so on. Disabled : Set this parameter to Disabled to disable the requirement for a provisioning network. When set to Disabled , you can use only virtual media based provisioning on Day 2. If Disabled and using power management, BMCs must be accessible from the bare-metal network. If Disabled, you must provide two IP addresses on the bare-metal network that are used for the provisioning services. Managed or Disabled . The MAC address within the cluster where provisioning services run. MAC address. The CIDR for the network to use for provisioning. This option is required when not using the default address range on the provisioning network. Valid CIDR, for example 10.0.0.0/16 . The name of the network interface on nodes connected to the provisioning network. Use the bootMACAddress configuration setting to enable Ironic to identify the IP address of the NIC instead of using the provisioningNetworkInterface configuration setting to identify the name of the NIC. String. Defines the IP range for nodes on the provisioning network, for example 172.22.0.10,172.22.0.254 . IP address range. Configuration for bare metal hosts. Array of host configuration objects. The name of the host. String. The MAC address of the NIC used for provisioning the host. MAC address. Configuration for the host to connect to the baseboard management controller (BMC). Dictionary of BMC configuration objects. The username for the BMC. String. Password for the BMC. String. The URL for communicating with the host's BMC controller. The address configuration setting specifies the protocol. For example, redfish+http://10.10.10.1:8000/redfish/v1/Systems/1234 enables Redfish. For more information, see "BMC addressing" in the "Deploying installer-provisioned clusters on bare metal" section. URL. redfish and redfish-virtualmedia need this parameter to manage BMC addresses. The value should be True when using a self-signed certificate for BMC addresses. Boolean. 7.1.5. Additional VMware vSphere configuration parameters Additional VMware vSphere configuration parameters are described in the following table: Table 7.5. Additional VMware vSphere cluster parameters Parameter Description Values Describes your account on the cloud platform that hosts your cluster. You can use the parameter to customize the platform. If you provide additional configuration settings for compute and control plane machines in the machine pool, the parameter is not required. You can only specify one vCenter server for your OpenShift Container Platform cluster. A dictionary of vSphere configuration objects Establishes the relationships between a region and zone. You define a failure domain by using vCenter objects, such as a datastore object. A failure domain defines the vCenter location for OpenShift Container Platform cluster nodes. An array of failure domain configuration objects. The name of the failure domain. String If you define multiple failure domains for your cluster, you must attach the tag to each vCenter datacenter. To define a region, use a tag from the openshift-region tag category. For a single vSphere datacenter environment, you do not need to attach a tag, but you must enter an alphanumeric value, such as datacenter , for the parameter. String Specifies the fully-qualified hostname or IP address of the VMware vCenter server, so that a client can access failure domain resources. You must apply the server role to the vSphere vCenter server location. 
String If you define multiple failure domains for your cluster, you must attach a tag to each vCenter cluster. To define a zone, use a tag from the openshift-zone tag category. For a single vSphere datacenter environment, you do not need to attach a tag, but you must enter an alphanumeric value, such as cluster , for the parameter. String The path to the vSphere compute cluster. String Lists and defines the datacenters where OpenShift Container Platform virtual machines (VMs) operate. The list of datacenters must match the list of datacenters specified in the vcenters field. String The path to the vSphere datastore that holds virtual machine files, templates, and ISO images. Important You can specify the path of any datastore that exists in a datastore cluster. By default, Storage vMotion is automatically enabled for a datastore cluster. Red Hat does not support Storage vMotion, so you must disable Storage vMotion to avoid data loss issues for your OpenShift Container Platform cluster. If you must specify VMs across multiple datastores, use a datastore object to specify a failure domain in your cluster's install-config.yaml configuration file. For more information, see "VMware vSphere region and zone enablement". String Optional: The absolute path of an existing folder where the user creates the virtual machines, for example, /<datacenter_name>/vm/<folder_name>/<subfolder_name> . String Lists any network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. String Optional: The absolute path of an existing resource pool where the installation program creates the virtual machines, for example, /<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>/<optional_nested_resource_pool_name> . String Specifies the absolute path to a pre-existing Red Hat Enterprise Linux CoreOS (RHCOS) image template or virtual machine. The installation program can use the image template or virtual machine to quickly install RHCOS on vSphere hosts. Consider using this parameter as an alternative to uploading an RHCOS image on vSphere hosts. This parameter is available for use only on installer-provisioned infrastructure. String Configures the connection details so that services can communicate with a vCenter server. Currently, only a single vCenter server is supported. An array of vCenter configuration objects. Lists and defines the datacenters where OpenShift Container Platform virtual machines (VMs) operate. The list of datacenters must match the list of datacenters specified in the failureDomains field. String The password associated with the vSphere user. String The port number used to communicate with the vCenter server. Integer The fully qualified host name (FQHN) or IP address of the vCenter server. String The username associated with the vSphere user. String 7.1.6. Deprecated VMware vSphere configuration parameters In OpenShift Container Platform 4.13, the following vSphere configuration parameters are deprecated. You can continue to use these parameters, but the installation program does not automatically specify these parameters in the install-config.yaml file. The following table lists each deprecated vSphere configuration parameter: Table 7.6. Deprecated VMware vSphere cluster parameters Parameter Description Values The vCenter cluster to install the OpenShift Container Platform cluster in. String Defines the datacenter where OpenShift Container Platform virtual machines (VMs) operate. 
String The name of the default datastore to use for provisioning volumes. String Optional: The absolute path of an existing folder where the installation program creates the virtual machines. If you do not provide this value, the installation program creates a folder that is named with the infrastructure ID in the data center virtual machine folder. String, for example, /<datacenter_name>/vm/<folder_name>/<subfolder_name> . The password for the vCenter user name. String Optional: The absolute path of an existing resource pool where the installation program creates the virtual machines. If you do not specify a value, the installation program installs the resources in the root of the cluster under /<datacenter_name>/host/<cluster_name>/Resources . String, for example, /<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>/<optional_nested_resource_pool_name> . The user name to use to connect to the vCenter instance with. This user must have at least the roles and privileges that are required for static or dynamic persistent volume provisioning in vSphere. String The fully-qualified hostname or IP address of a vCenter server. String Additional resources BMC addressing Configuring regions and zones for a VMware vCenter Required vCenter account privileges 7.2. Available Agent configuration parameters The following tables specify the required and optional Agent configuration parameters that you can set as part of the Agent-based installation process. These values are specified in the agent-config.yaml file. Note These settings are used for installation only, and cannot be modified after installation. 7.2.1. Required configuration parameters Required Agent configuration parameters are described in the following table: Table 7.7. Required parameters Parameter Description Values The API version for the agent-config.yaml content. The current version is v1beta1 . The installation program might also support older API versions. String Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . The value entered in the agent-config.yaml file is ignored, and instead the value specified in the install-config.yaml file is used. When you do not provide metadata.name through either the install-config.yaml or agent-config.yaml files, for example when you use only ZTP manifests, the cluster name is set to agent-cluster . String of lowercase letters and hyphens ( - ), such as dev . 7.2.2. Optional configuration parameters Optional Agent configuration parameters are described in the following table: Table 7.8. Optional parameters Parameter Description Values The IP address of the node that performs the bootstrapping process as well as running the assisted-service component. You must provide the rendezvous IP address when you do not specify at least one host's IP address in the networkConfig parameter. If this address is not provided, one IP address is selected from the provided hosts' networkConfig . IPv4 or IPv6 address. The URL of the server to upload Preboot Execution Environment (PXE) assets to when using the Agent-based Installer to generate an iPXE script. For more information, see "Preparing PXE assets for OpenShift Container Platform". String. A list of Network Time Protocol (NTP) sources to be added to all cluster hosts, which are added to any NTP sources that are configured through other means. List of hostnames or IP addresses. Host configuration. 
An optional list of hosts. The number of hosts defined must not exceed the total number of hosts defined in the install-config.yaml file, which is the sum of the values of the compute.replicas and controlPlane.replicas parameters. An array of host configuration objects. Hostname. Overrides the hostname obtained from either the Dynamic Host Configuration Protocol (DHCP) or a reverse DNS lookup. Each host must have a unique hostname supplied by one of these methods, although configuring a hostname through this parameter is optional. String. Provides a table of the name and MAC address mappings for the interfaces on the host. If a NetworkConfig section is provided in the agent-config.yaml file, this table must be included and the values must match the mappings provided in the NetworkConfig section. An array of host configuration objects. The name of an interface on the host. String. The MAC address of an interface on the host. A MAC address such as the following example: 00-B0-D0-63-C2-26 . Defines whether the host is a master or worker node. If no role is defined in the agent-config.yaml file, roles will be assigned at random during cluster installation. master or worker . Enables provisioning of the Red Hat Enterprise Linux CoreOS (RHCOS) image to a particular device. The installation program examines the devices in the order it discovers them, and compares the discovered values with the hint values. It uses the first discovered device that matches the hint value. This is the device that the operating system is written on during installation. A dictionary of key-value pairs. For more information, see "Root device hints" in the "Setting up the environment for an OpenShift installation" page. The name of the device the RHCOS image is provisioned to. String. The host network definition. The configuration must match the Host Network Management API defined in the nmstate documentation . A dictionary of host network configuration objects. Additional resources Preparing PXE assets for OpenShift Container Platform Root device hints | [
"apiVersion:",
"baseDomain:",
"metadata:",
"metadata: name:",
"platform:",
"pullSecret:",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112",
"networking:",
"networking: networkType:",
"networking: clusterNetwork:",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd01::/48 hostPrefix: 64",
"networking: clusterNetwork: cidr:",
"networking: clusterNetwork: hostPrefix:",
"networking: serviceNetwork:",
"networking: serviceNetwork: - 172.30.0.0/16 - fd02::/112",
"networking: machineNetwork:",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"networking: machineNetwork: cidr:",
"additionalTrustBundle:",
"capabilities:",
"capabilities: baselineCapabilitySet:",
"capabilities: additionalEnabledCapabilities:",
"cpuPartitioningMode:",
"compute:",
"compute: architecture:",
"compute: hyperthreading:",
"compute: name:",
"compute: platform:",
"compute: replicas:",
"featureSet:",
"controlPlane:",
"controlPlane: architecture:",
"controlPlane: hyperthreading:",
"controlPlane: name:",
"controlPlane: platform:",
"controlPlane: replicas:",
"credentialsMode:",
"fips:",
"imageContentSources:",
"imageContentSources: source:",
"imageContentSources: mirrors:",
"publish:",
"sshKey:",
"platform: baremetal: clusterProvisioningIP:",
"platform: baremetal: provisioningNetwork:",
"platform: baremetal: provisioningMACAddress:",
"platform: baremetal: provisioningNetworkCIDR:",
"platform: baremetal: provisioningNetworkInterface:",
"platform: baremetal: provisioningDHCPRange:",
"platform: baremetal: hosts:",
"platform: baremetal: hosts: name:",
"platform: baremetal: hosts: bootMACAddress:",
"platform: baremetal: hosts: bmc:",
"platform: baremetal: hosts: bmc: username:",
"platform: baremetal: hosts: bmc: password:",
"platform: baremetal: hosts: bmc: address:",
"platform: baremetal: hosts: bmc: disableCertificateVerification:",
"platform: vsphere:",
"platform: vsphere: failureDomains:",
"platform: vsphere: failureDomains: name:",
"platform: vsphere: failureDomains: region:",
"platform: vsphere: failureDomains: server:",
"platform: vsphere: failureDomains: zone:",
"platform: vsphere: failureDomains: topology: computeCluster:",
"platform: vsphere: failureDomains: topology: datacenter:",
"platform: vsphere: failureDomains: topology: datastore:",
"platform: vsphere: failureDomains: topology: folder:",
"platform: vsphere: failureDomains: topology: networks:",
"platform: vsphere: failureDomains: topology: resourcePool:",
"platform: vsphere: failureDomains: topology template:",
"platform: vsphere: vcenters:",
"platform: vsphere: vcenters: datacenters:",
"platform: vsphere: vcenters: password:",
"platform: vsphere: vcenters: port:",
"platform: vsphere: vcenters: server:",
"platform: vsphere: vcenters: user:",
"platform: vsphere: cluster:",
"platform: vsphere: datacenter:",
"platform: vsphere: defaultDatastore:",
"platform: vsphere: folder:",
"platform: vsphere: password:",
"platform: vsphere: resourcePool:",
"platform: vsphere: username:",
"platform: vsphere: vCenter:",
"apiVersion:",
"metadata:",
"metadata: name:",
"rendezvousIP:",
"bootArtifactsBaseURL:",
"additionalNTPSources:",
"hosts:",
"hosts: hostname:",
"hosts: interfaces:",
"hosts: interfaces: name:",
"hosts: interfaces: macAddress:",
"hosts: role:",
"hosts: rootDeviceHints:",
"hosts: rootDeviceHints: deviceName:",
"hosts: networkConfig:"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_an_on-premise_cluster_with_the_agent-based_installer/installation-config-parameters-agent |
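To tie the Agent configuration parameters above together, the following is a minimal agent-config.yaml sketch assembled from the required and optional parameters described in the tables. The specific values (cluster name, rendezvous IP address, NTP source, hostname, MAC address, interface name, and root device) are illustrative assumptions, not values taken from this document, and the networkConfig block simply follows the nmstate schema referenced in the optional parameters table.

apiVersion: v1beta1
kind: AgentConfig
metadata:
  name: example-cluster        # ignored in favor of metadata.name in install-config.yaml
rendezvousIP: 192.168.111.80   # assumed address of the node that runs the assisted-service component
additionalNTPSources:
  - 0.rhel.pool.ntp.org        # assumed NTP source, added to all cluster hosts
hosts:
  - hostname: master-0         # optional override of the DHCP or reverse-DNS hostname
    role: master
    interfaces:
      - name: eno1
        macAddress: 00:ef:44:21:e6:a5
    rootDeviceHints:
      deviceName: /dev/sdb     # device that RHCOS is written to during installation
    networkConfig:             # nmstate-style host network definition
      interfaces:
        - name: eno1
          type: ethernet
          state: up
          mac-address: 00:ef:44:21:e6:a5
          ipv4:
            enabled: true
            address:
              - ip: 192.168.111.80
                prefix-length: 23
            dhcp: false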
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/4.14_release_notes/making-open-source-more-inclusive |
Chapter 7. Forwarding logs to external third-party logging systems | Chapter 7. Forwarding logs to external third-party logging systems By default, the logging subsystem sends container and infrastructure logs to the default internal Elasticsearch log store defined in the ClusterLogging custom resource. However, it does not send audit logs to the internal store because it does not provide secure storage. If this default configuration meets your needs, you do not need to configure the Cluster Log Forwarder. To send logs to other log aggregators, you use the OpenShift Container Platform Cluster Log Forwarder. This API enables you to send container, infrastructure, and audit logs to specific endpoints within or outside your cluster. In addition, you can send different types of logs to various systems so that various individuals can access each type. You can also enable Transport Layer Security (TLS) support to send logs securely, as required by your organization. Note To send audit logs to the default internal Elasticsearch log store, use the Cluster Log Forwarder as described in Forward audit logs to the log store . When you forward logs externally, the logging subsystem creates or modifies a Fluentd config map to send logs using your desired protocols. You are responsible for configuring the protocol on the external log aggregator. Important You cannot use the config map methods and the Cluster Log Forwarder in the same cluster. 7.1. About forwarding logs to third-party systems To send logs to specific endpoints inside and outside your OpenShift Container Platform cluster, you specify a combination of outputs and pipelines in a ClusterLogForwarder custom resource (CR). You can also use inputs to forward the application logs associated with a specific project to an endpoint. Authentication is provided by a Kubernetes Secret object. output The destination for log data that you define, or where you want the logs sent. An output can be one of the following types: elasticsearch . An external Elasticsearch instance. The elasticsearch output can use a TLS connection. fluentdForward . An external log aggregation solution that supports Fluentd. This option uses the Fluentd forward protocols. The fluentForward output can use a TCP or TLS connection and supports shared-key authentication by providing a shared_key field in a secret. Shared-key authentication can be used with or without TLS. syslog . An external log aggregation solution that supports the syslog RFC3164 or RFC5424 protocols. The syslog output can use a UDP, TCP, or TLS connection. cloudwatch . Amazon CloudWatch, a monitoring and log storage service hosted by Amazon Web Services (AWS). loki . Loki, a horizontally scalable, highly available, multi-tenant log aggregation system. kafka . A Kafka broker. The kafka output can use a TCP or TLS connection. default . The internal OpenShift Container Platform Elasticsearch instance. You are not required to configure the default output. If you do configure a default output, you receive an error message because the default output is reserved for the Red Hat OpenShift Logging Operator. pipeline Defines simple routing from one log type to one or more outputs, or which logs you want to send. The log types are one of the following: application . Container logs generated by user applications running in the cluster, except infrastructure container applications. infrastructure . Container logs from pods that run in the openshift* , kube* , or default projects and journal logs sourced from node file system. 
audit . Audit logs generated by the node audit system, auditd , Kubernetes API server, OpenShift API server, and OVN network. You can add labels to outbound log messages by using key:value pairs in the pipeline. For example, you might add a label to messages that are forwarded to other data centers or label the logs by type. Labels that are added to objects are also forwarded with the log message. input Forwards the application logs associated with a specific project to a pipeline. In the pipeline, you define which log types to forward using an inputRef parameter and where to forward the logs to using an outputRef parameter. Secret A key:value map that contains confidential data such as user credentials. Note the following: If a ClusterLogForwarder CR object exists, logs are not forwarded to the default Elasticsearch instance, unless there is a pipeline with the default output. By default, the logging subsystem sends container and infrastructure logs to the default internal Elasticsearch log store defined in the ClusterLogging custom resource. However, it does not send audit logs to the internal store because it does not provide secure storage. If this default configuration meets your needs, do not configure the Log Forwarding API. If you do not define a pipeline for a log type, the logs of the undefined types are dropped. For example, if you specify a pipeline for the application and audit types, but do not specify a pipeline for the infrastructure type, infrastructure logs are dropped. You can use multiple types of outputs in the ClusterLogForwarder custom resource (CR) to send logs to servers that support different protocols. The internal OpenShift Container Platform Elasticsearch instance does not provide secure storage for audit logs. We recommend you ensure that the system to which you forward audit logs is compliant with your organizational and governmental regulations and is properly secured. The logging subsystem does not comply with those regulations. The following example forwards the audit logs to a secure external Elasticsearch instance, the infrastructure logs to an insecure external Elasticsearch instance, the application logs to a Kafka broker, and the application logs from the my-apps-logs project to the internal Elasticsearch instance. Sample log forwarding outputs and pipelines apiVersion: "logging.openshift.io/v1" kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: elasticsearch-secure 3 type: "elasticsearch" url: https://elasticsearch.secure.com:9200 secret: name: elasticsearch - name: elasticsearch-insecure 4 type: "elasticsearch" url: http://elasticsearch.insecure.com:9200 - name: kafka-app 5 type: "kafka" url: tls://kafka.secure.com:9093/app-topic inputs: 6 - name: my-app-logs application: namespaces: - my-project pipelines: - name: audit-logs 7 inputRefs: - audit outputRefs: - elasticsearch-secure - default parse: json 8 labels: secure: "true" 9 datacenter: "east" - name: infrastructure-logs 10 inputRefs: - infrastructure outputRefs: - elasticsearch-insecure labels: datacenter: "west" - name: my-app 11 inputRefs: - my-app-logs outputRefs: - default - inputRefs: 12 - application outputRefs: - kafka-app labels: datacenter: "south" 1 The name of the ClusterLogForwarder CR must be instance . 2 The namespace for the ClusterLogForwarder CR must be openshift-logging . 3 Configuration for an secure Elasticsearch output using a secret with a secure URL. A name to describe the output. The type of output: elasticsearch . 
The secure URL and port of the Elasticsearch instance as a valid absolute URL, including the prefix. The secret required by the endpoint for TLS communication. The secret must exist in the openshift-logging project. 4 Configuration for an insecure Elasticsearch output: A name to describe the output. The type of output: elasticsearch . The insecure URL and port of the Elasticsearch instance as a valid absolute URL, including the prefix. 5 Configuration for a Kafka output using a client-authenticated TLS communication over a secure URL A name to describe the output. The type of output: kafka . Specify the URL and port of the Kafka broker as a valid absolute URL, including the prefix. 6 Configuration for an input to filter application logs from the my-project namespace. 7 Configuration for a pipeline to send audit logs to the secure external Elasticsearch instance: A name to describe the pipeline. The inputRefs is the log type, in this example audit . The outputRefs is the name of the output to use, in this example elasticsearch-secure to forward to the secure Elasticsearch instance and default to forward to the internal Elasticsearch instance. Optional: Labels to add to the logs. 8 Optional: Specify whether to forward structured JSON log entries as JSON objects in the structured field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the structured field and instead sends the log entry to the default index, app-00000x . 9 Optional: String. One or more labels to add to the logs. Quote values like "true" so they are recognized as string values, not as a boolean. 10 Configuration for a pipeline to send infrastructure logs to the insecure external Elasticsearch instance. 11 Configuration for a pipeline to send logs from the my-project project to the internal Elasticsearch instance. A name to describe the pipeline. The inputRefs is a specific input: my-app-logs . The outputRefs is default . Optional: String. One or more labels to add to the logs. 12 Configuration for a pipeline to send logs to the Kafka broker, with no pipeline name: The inputRefs is the log type, in this example application . The outputRefs is the name of the output to use. Optional: String. One or more labels to add to the logs. Fluentd log handling when the external log aggregator is unavailable If your external logging aggregator becomes unavailable and cannot receive logs, Fluentd continues to collect logs and stores them in a buffer. When the log aggregator becomes available, log forwarding resumes, including the buffered logs. If the buffer fills completely, Fluentd stops collecting logs. OpenShift Container Platform rotates the logs and deletes them. You cannot adjust the buffer size or add a persistent volume claim (PVC) to the Fluentd daemon set or pods. Supported Authorization Keys Common key types are provided here. Some output types support additional specialized keys, documented with the output-specific configuration field. All secret keys are optional. Enable the security features you want by setting the relevant keys. You are responsible for creating and maintaining any additional configurations that external destinations might require, such as keys and secrets, service accounts, port openings, or global proxy configuration. Open Shift Logging will not attempt to verify a mismatch between authorization combinations. Transport Layer Security (TLS) Using a TLS URL ('http://... ' or 'ssl://... ') without a Secret enables basic TLS server-side authentication. 
Additional TLS features are enabled by including a Secret and setting the following optional fields: tls.crt : (string) File name containing a client certificate. Enables mutual authentication. Requires tls.key . tls.key : (string) File name containing the private key to unlock the client certificate. Requires tls.crt . passphrase : (string) Passphrase to decode an encoded TLS private key. Requires tls.key . ca-bundle.crt : (string) File name of a customer CA for server authentication. Username and Password username : (string) Authentication user name. Requires password . password : (string) Authentication password. Requires username . Simple Authentication Security Layer (SASL) sasl.enable (boolean) Explicitly enable or disable SASL. If missing, SASL is automatically enabled when any of the other sasl. keys are set. sasl.mechanisms : (array) List of allowed SASL mechanism names. If missing or empty, the system defaults are used. sasl.allow-insecure : (boolean) Allow mechanisms that send clear-text passwords. Defaults to false. 7.1.1. Creating a Secret You can create a secret in the directory that contains your certificate and key files by using the following command: Note Generic or opaque secrets are recommended for best results. 7.2. Supported log data output types in OpenShift Logging 5.1 Red Hat OpenShift Logging 5.1 provides the following output types and protocols for sending log data to target log collectors. Red Hat tests each of the combinations shown in the following table. However, you should be able to send log data to a wider range target log collectors that ingest these protocols. Output types Protocols Tested with elasticsearch elasticsearch Elasticsearch 6.8.1 Elasticsearch 6.8.4 Elasticsearch 7.12.2 fluentdForward fluentd forward v1 fluentd 1.7.4 logstash 7.10.1 kafka kafka 0.11 kafka 2.4.1 kafka 2.7.0 syslog RFC-3164, RFC-5424 rsyslog-8.39.0 Note Previously, the syslog output supported only RFC-3164. The current syslog output adds support for RFC-5424. 7.3. Supported log data output types in OpenShift Logging 5.2 Red Hat OpenShift Logging 5.2 provides the following output types and protocols for sending log data to target log collectors. Red Hat tests each of the combinations shown in the following table. However, you should be able to send log data to a wider range target log collectors that ingest these protocols. Output types Protocols Tested with Amazon CloudWatch REST over HTTPS The current version of Amazon CloudWatch elasticsearch elasticsearch Elasticsearch 6.8.1 Elasticsearch 6.8.4 Elasticsearch 7.12.2 fluentdForward fluentd forward v1 fluentd 1.7.4 logstash 7.10.1 Loki REST over HTTP and HTTPS Loki 2.3.0 deployed on OCP and Grafana labs kafka kafka 0.11 kafka 2.4.1 kafka 2.7.0 syslog RFC-3164, RFC-5424 rsyslog-8.39.0 7.4. Supported log data output types in OpenShift Logging 5.3 Red Hat OpenShift Logging 5.3 provides the following output types and protocols for sending log data to target log collectors. Red Hat tests each of the combinations shown in the following table. However, you should be able to send log data to a wider range target log collectors that ingest these protocols. Output types Protocols Tested with Amazon CloudWatch REST over HTTPS The current version of Amazon CloudWatch elasticsearch elasticsearch Elasticsearch 7.10.1 fluentdForward fluentd forward v1 fluentd 1.7.4 logstash 7.10.1 Loki REST over HTTP and HTTPS Loki 2.2.1 deployed on OCP kafka kafka 0.11 kafka 2.7.0 syslog RFC-3164, RFC-5424 rsyslog-8.39.0 7.5. 
Supported log data output types in OpenShift Logging 5.4 Red Hat OpenShift Logging 5.4 provides the following output types and protocols for sending log data to target log collectors. Red Hat tests each of the combinations shown in the following table. However, you should be able to send log data to a wider range target log collectors that ingest these protocols. Output types Protocols Tested with Amazon CloudWatch REST over HTTPS The current version of Amazon CloudWatch elasticsearch elasticsearch Elasticsearch 7.10.1 fluentdForward fluentd forward v1 fluentd 1.14.5 logstash 7.10.1 Loki REST over HTTP and HTTPS Loki 2.2.1 deployed on OCP kafka kafka 0.11 kafka 2.7.0 syslog RFC-3164, RFC-5424 rsyslog-8.39.0 7.6. Supported log data output types in OpenShift Logging 5.5 Red Hat OpenShift Logging 5.5 provides the following output types and protocols for sending log data to target log collectors. Red Hat tests each of the combinations shown in the following table. However, you should be able to send log data to a wider range target log collectors that ingest these protocols. Output types Protocols Tested with Amazon CloudWatch REST over HTTPS The current version of Amazon CloudWatch elasticsearch elasticsearch Elasticsearch 7.10.1 fluentdForward fluentd forward v1 fluentd 1.14.6 logstash 7.10.1 Loki REST over HTTP and HTTPS Loki 2.5.0 deployed on OCP kafka kafka 0.11 kafka 2.7.0 syslog RFC-3164, RFC-5424 rsyslog-8.39.0 7.7. Supported log data output types in OpenShift Logging 5.6 Red Hat OpenShift Logging 5.6 provides the following output types and protocols for sending log data to target log collectors. Red Hat tests each of the combinations shown in the following table. However, you should be able to send log data to a wider range target log collectors that ingest these protocols. Output types Protocols Tested with Amazon CloudWatch REST over HTTPS The current version of Amazon CloudWatch elasticsearch elasticsearch Elasticsearch 6.8.23 Elasticsearch 7.10.1 Elasticsearch 8.6.1 fluentdForward fluentd forward v1 fluentd 1.14.6 logstash 7.10.1 Loki REST over HTTP and HTTPS Loki 2.5.0 deployed on OCP kafka kafka 0.11 kafka 2.7.0 syslog RFC-3164, RFC-5424 rsyslog-8.39.0 Important Fluentd doesn't support Elasticsearch 8 as of 5.6.2. Vector doesn't support fluentd/logstash/rsyslog before 5.7.0. 7.8. Forwarding logs to an external Elasticsearch instance You can optionally forward logs to an external Elasticsearch instance in addition to, or instead of, the internal OpenShift Container Platform Elasticsearch instance. You are responsible for configuring the external log aggregator to receive log data from OpenShift Container Platform. To configure log forwarding to an external Elasticsearch instance, you must create a ClusterLogForwarder custom resource (CR) with an output to that instance, and a pipeline that uses the output. The external Elasticsearch output can use the HTTP (insecure) or HTTPS (secure HTTP) connection. To forward logs to both an external and the internal Elasticsearch instance, create outputs and pipelines to the external instance and a pipeline that uses the default output to forward logs to the internal instance. You do not need to create a default output. If you do configure a default output, you receive an error message because the default output is reserved for the Red Hat OpenShift Logging Operator. Note If you want to forward logs to only the internal OpenShift Container Platform Elasticsearch instance, you do not need to create a ClusterLogForwarder CR. 
Prerequisites You must have a logging server that is configured to receive the logging data using the specified protocol or format. Procedure Create or edit a YAML file that defines the ClusterLogForwarder CR object: apiVersion: "logging.openshift.io/v1" kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: elasticsearch-insecure 3 type: "elasticsearch" 4 url: http://elasticsearch.insecure.com:9200 5 - name: elasticsearch-secure type: "elasticsearch" url: https://elasticsearch.secure.com:9200 6 secret: name: es-secret 7 pipelines: - name: application-logs 8 inputRefs: 9 - application - audit outputRefs: - elasticsearch-secure 10 - default 11 parse: json 12 labels: myLabel: "myValue" 13 - name: infrastructure-audit-logs 14 inputRefs: - infrastructure outputRefs: - elasticsearch-insecure labels: logs: "audit-infra" 1 The name of the ClusterLogForwarder CR must be instance . 2 The namespace for the ClusterLogForwarder CR must be openshift-logging . 3 Specify a name for the output. 4 Specify the elasticsearch type. 5 Specify the URL and port of the external Elasticsearch instance as a valid absolute URL. You can use the http (insecure) or https (secure HTTP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP Address. 6 For a secure connection, you can specify an https or http URL that you authenticate by specifying a secret . 7 For an https prefix, specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the openshift-logging project, and must have keys of: tls.crt , tls.key , and ca-bundle.crt that point to the respective certificates that they represent. Otherwise, for http and https prefixes, you can specify a secret that contains a username and password. For more information, see the following "Example: Setting secret that contains a username and password." 8 Optional: Specify a name for the pipeline. 9 Specify which log types to forward by using the pipeline: application, infrastructure , or audit . 10 Specify the name of the output to use when forwarding logs with this pipeline. 11 Optional: Specify the default output to send the logs to the internal Elasticsearch instance. 12 Optional: Specify whether to forward structured JSON log entries as JSON objects in the structured field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the structured field and instead sends the log entry to the default index, app-00000x . 13 Optional: String. One or more labels to add to the logs. 14 Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type: A name to describe the pipeline. The inputRefs is the log type to forward by using the pipeline: application, infrastructure , or audit . The outputRefs is the name of the output to use. Optional: String. One or more labels to add to the logs. Create the CR object: USD oc create -f <file-name>.yaml Example: Setting a secret that contains a username and password You can use a secret that contains a username and password to authenticate a secure connection to an external Elasticsearch instance. For example, if you cannot use mutual TLS (mTLS) keys because a third party operates the Elasticsearch instance, you can use HTTP or HTTPS and set a secret that contains the username and password. Create a Secret YAML file similar to the following example. 
Use base64-encoded values for the username and password fields. The secret type is opaque by default. apiVersion: v1 kind: Secret metadata: name: openshift-test-secret data: username: dGVzdHVzZXJuYW1lCg== password: dGVzdHBhc3N3b3JkCg== Create the secret: USD oc create secret -n openshift-logging openshift-test-secret.yaml Specify the name of the secret in the ClusterLogForwarder CR: kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - name: elasticsearch type: "elasticsearch" url: https://elasticsearch.secure.com:9200 secret: name: openshift-test-secret Note In the value of the url field, the prefix can be http or https . Create the CR object: USD oc create -f <file-name>.yaml 7.9. Forwarding logs using the Fluentd forward protocol You can use the Fluentd forward protocol to send a copy of your logs to an external log aggregator that is configured to accept the protocol instead of, or in addition to, the default Elasticsearch log store. You are responsible for configuring the external log aggregator to receive the logs from OpenShift Container Platform. To configure log forwarding using the forward protocol, you must create a ClusterLogForwarder custom resource (CR) with one or more outputs to the Fluentd servers, and pipelines that use those outputs. The Fluentd output can use a TCP (insecure) or TLS (secure TCP) connection. Note Alternately, you can use a config map to forward logs using the forward protocols. However, this method is deprecated in OpenShift Container Platform and will be removed in a future release. Prerequisites You must have a logging server that is configured to receive the logging data using the specified protocol or format. Procedure Create or edit a YAML file that defines the ClusterLogForwarder CR object: apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: fluentd-server-secure 3 type: fluentdForward 4 url: 'tls://fluentdserver.security.example.com:24224' 5 secret: 6 name: fluentd-secret - name: fluentd-server-insecure type: fluentdForward url: 'tcp://fluentdserver.home.example.com:24224' pipelines: - name: forward-to-fluentd-secure 7 inputRefs: 8 - application - audit outputRefs: - fluentd-server-secure 9 - default 10 parse: json 11 labels: clusterId: "C1234" 12 - name: forward-to-fluentd-insecure 13 inputRefs: - infrastructure outputRefs: - fluentd-server-insecure labels: clusterId: "C1234" 1 The name of the ClusterLogForwarder CR must be instance . 2 The namespace for the ClusterLogForwarder CR must be openshift-logging . 3 Specify a name for the output. 4 Specify the fluentdForward type. 5 Specify the URL and port of the external Fluentd instance as a valid absolute URL. You can use the tcp (insecure) or tls (secure TCP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address. 6 If using a tls prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the openshift-logging project, and must have keys of: tls.crt , tls.key , and ca-bundle.crt that point to the respective certificates that they represent. Otherwise, for http and https prefixes, you can specify a secret that contains a username and password. For more information, see the following "Example: Setting secret that contains a username and password." 7 Optional: Specify a name for the pipeline. 
8 Specify which log types to forward by using the pipeline: application, infrastructure , or audit . 9 Specify the name of the output to use when forwarding logs with this pipeline. 10 Optional: Specify the default output to forward logs to the internal Elasticsearch instance. 11 Optional: Specify whether to forward structured JSON log entries as JSON objects in the structured field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the structured field and instead sends the log entry to the default index, app-00000x . 12 Optional: String. One or more labels to add to the logs. 13 Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type: A name to describe the pipeline. The inputRefs is the log type to forward by using the pipeline: application, infrastructure , or audit . The outputRefs is the name of the output to use. Optional: String. One or more labels to add to the logs. Create the CR object: USD oc create -f <file-name>.yaml 7.9.1. Enabling nanosecond precision for Logstash to ingest data from fluentd For Logstash to ingest log data from fluentd, you must enable nanosecond precision in the Logstash configuration file. Procedure In the Logstash configuration file, set nanosecond_precision to true . Example Logstash configuration file input { tcp { codec => fluent { nanosecond_precision => true } port => 24114 } } filter { } output { stdout { codec => rubydebug } } 7.10. Forwarding logs using the syslog protocol You can use the syslog RFC3164 or RFC5424 protocol to send a copy of your logs to an external log aggregator that is configured to accept the protocol instead of, or in addition to, the default Elasticsearch log store. You are responsible for configuring the external log aggregator, such as a syslog server, to receive the logs from OpenShift Container Platform. To configure log forwarding using the syslog protocol, you must create a ClusterLogForwarder custom resource (CR) with one or more outputs to the syslog servers, and pipelines that use those outputs. The syslog output can use a UDP, TCP, or TLS connection. Note Alternately, you can use a config map to forward logs using the syslog RFC3164 protocols. However, this method is deprecated in OpenShift Container Platform and will be removed in a future release. Prerequisites You must have a logging server that is configured to receive the logging data using the specified protocol or format. Procedure Create or edit a YAML file that defines the ClusterLogForwarder CR object: apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: rsyslog-east 3 type: syslog 4 syslog: 5 facility: local0 rfc: RFC3164 payloadKey: message severity: informational url: 'tls://rsyslogserver.east.example.com:514' 6 secret: 7 name: syslog-secret - name: rsyslog-west type: syslog syslog: appName: myapp facility: user msgID: mymsg procID: myproc rfc: RFC5424 severity: debug url: 'udp://rsyslogserver.west.example.com:514' pipelines: - name: syslog-east 8 inputRefs: 9 - audit - application outputRefs: 10 - rsyslog-east - default 11 parse: json 12 labels: secure: "true" 13 syslog: "east" - name: syslog-west 14 inputRefs: - infrastructure outputRefs: - rsyslog-west - default labels: syslog: "west" 1 The name of the ClusterLogForwarder CR must be instance . 2 The namespace for the ClusterLogForwarder CR must be openshift-logging . 3 Specify a name for the output. 
4 Specify the syslog type. 5 Optional: Specify the syslog parameters, listed below. 6 Specify the URL and port of the external syslog instance. You can use the udp (insecure), tcp (insecure) or tls (secure TCP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address. 7 If using a tls prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the openshift-logging project, and must have keys of: tls.crt , tls.key , and ca-bundle.crt that point to the respective certificates that they represent. 8 Optional: Specify a name for the pipeline. 9 Specify which log types to forward by using the pipeline: application, infrastructure , or audit . 10 Specify the name of the output to use when forwarding logs with this pipeline. 11 Optional: Specify the default output to forward logs to the internal Elasticsearch instance. 12 Optional: Specify whether to forward structured JSON log entries as JSON objects in the structured field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the structured field and instead sends the log entry to the default index, app-00000x . 13 Optional: String. One or more labels to add to the logs. Quote values like "true" so they are recognized as string values, not as a boolean. 14 Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type: A name to describe the pipeline. The inputRefs is the log type to forward by using the pipeline: application, infrastructure , or audit . The outputRefs is the name of the output to use. Optional: String. One or more labels to add to the logs. Create the CR object: USD oc create -f <file-name>.yaml 7.10.1. Adding log source information to message output You can add namespace_name , pod_name , and container_name elements to the message field of the record by adding the AddLogSource field to your ClusterLogForwarder custom resource (CR). spec: outputs: - name: syslogout syslog: addLogSource: true facility: user payloadKey: message rfc: RFC3164 severity: debug tag: mytag type: syslog url: tls://syslog-receiver.openshift-logging.svc:24224 pipelines: - inputRefs: - application name: test-app outputRefs: - syslogout Note This configuration is compatible with both RFC3164 and RFC5424. Example syslog message output without AddLogSource <15>1 2020-11-15T17:06:14+00:00 fluentd-9hkb4 mytag - - - {"msgcontent"=>"Message Contents", "timestamp"=>"2020-11-15 17:06:09", "tag_key"=>"rec_tag", "index"=>56} Example syslog message output with AddLogSource <15>1 2020-11-16T10:49:37+00:00 crc-j55b9-master-0 mytag - - - namespace_name=clo-test-6327,pod_name=log-generator-ff9746c49-qxm7l,container_name=log-generator,message={"msgcontent":"My life is my message", "timestamp":"2020-11-16 10:49:36", "tag_key":"rec_tag", "index":76} 7.10.2. Syslog parameters You can configure the following for the syslog outputs. For more information, see the syslog RFC3164 or RFC5424 RFC. facility: The syslog facility . The value can be a decimal integer or a case-insensitive keyword: 0 or kern for kernel messages 1 or user for user-level messages, the default. 
2 or mail for the mail system 3 or daemon for system daemons 4 or auth for security/authentication messages 5 or syslog for messages generated internally by syslogd 6 or lpr for the line printer subsystem 7 or news for the network news subsystem 8 or uucp for the UUCP subsystem 9 or cron for the clock daemon 10 or authpriv for security authentication messages 11 or ftp for the FTP daemon 12 or ntp for the NTP subsystem 13 or security for the syslog audit log 14 or console for the syslog alert log 15 or solaris-cron for the scheduling daemon 16 - 23 or local0 - local7 for locally used facilities Optional: payloadKey : The record field to use as payload for the syslog message. Note Configuring the payloadKey parameter prevents other parameters from being forwarded to the syslog. rfc: The RFC to be used for sending logs using syslog. The default is RFC5424. severity: The syslog severity to set on outgoing syslog records. The value can be a decimal integer or a case-insensitive keyword: 0 or Emergency for messages indicating the system is unusable 1 or Alert for messages indicating action must be taken immediately 2 or Critical for messages indicating critical conditions 3 or Error for messages indicating error conditions 4 or Warning for messages indicating warning conditions 5 or Notice for messages indicating normal but significant conditions 6 or Informational for messages indicating informational messages 7 or Debug for messages indicating debug-level messages, the default tag: Tag specifies a record field to use as a tag on the syslog message. trimPrefix: Remove the specified prefix from the tag. 7.10.3. Additional RFC5424 syslog parameters The following parameters apply to RFC5424: appName: The APP-NAME is a free-text string that identifies the application that sent the log. Must be specified for RFC5424 . msgID: The MSGID is a free-text string that identifies the type of message. Must be specified for RFC5424 . procID: The PROCID is a free-text string. A change in the value indicates a discontinuity in syslog reporting. Must be specified for RFC5424 . 7.11. Forwarding logs to Amazon CloudWatch You can forward logs to Amazon CloudWatch, a monitoring and log storage service hosted by Amazon Web Services (AWS). You can forward logs to CloudWatch in addition to, or instead of, the default logging subsystem managed Elasticsearch log store. To configure log forwarding to CloudWatch, you must create a ClusterLogForwarder custom resource (CR) with an output for CloudWatch, and a pipeline that uses the output. Procedure Create a Secret YAML file that uses the aws_access_key_id and aws_secret_access_key fields to specify your base64-encoded AWS credentials. For example: apiVersion: v1 kind: Secret metadata: name: cw-secret namespace: openshift-logging data: aws_access_key_id: QUtJQUlPU0ZPRE5ON0VYQU1QTEUK aws_secret_access_key: d0phbHJYVXRuRkVNSS9LN01ERU5HL2JQeFJmaUNZRVhBTVBMRUtFWQo= Create the secret. For example: USD oc apply -f cw-secret.yaml Create or edit a YAML file that defines the ClusterLogForwarder CR object. In the file, specify the name of the secret. 
For example: apiVersion: "logging.openshift.io/v1" kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: cw 3 type: cloudwatch 4 cloudwatch: groupBy: logType 5 groupPrefix: <group prefix> 6 region: us-east-2 7 secret: name: cw-secret 8 pipelines: - name: infra-logs 9 inputRefs: 10 - infrastructure - audit - application outputRefs: - cw 11 1 The name of the ClusterLogForwarder CR must be instance . 2 The namespace for the ClusterLogForwarder CR must be openshift-logging . 3 Specify a name for the output. 4 Specify the cloudwatch type. 5 Optional: Specify how to group the logs: logType creates log groups for each log type. namespaceName creates a log group for each application namespace. It also creates separate log groups for infrastructure and audit logs. namespaceUUID creates a new log group for each application namespace UUID. It also creates separate log groups for infrastructure and audit logs. 6 Optional: Specify a string to replace the default infrastructureName prefix in the names of the log groups. 7 Specify the AWS region. 8 Specify the name of the secret that contains your AWS credentials. 9 Optional: Specify a name for the pipeline. 10 Specify which log types to forward by using the pipeline: application, infrastructure , or audit . 11 Specify the name of the output to use when forwarding logs with this pipeline. Create the CR object: USD oc create -f <file-name>.yaml Example: Using ClusterLogForwarder with Amazon CloudWatch Here, you see an example ClusterLogForwarder custom resource (CR) and the log data that it outputs to Amazon CloudWatch. Suppose that you are running an OpenShift Container Platform cluster named mycluster . The following command returns the cluster's infrastructureName , which you will use to compose aws commands later on: USD oc get Infrastructure/cluster -ojson | jq .status.infrastructureName "mycluster-7977k" To generate log data for this example, you run a busybox pod in a namespace called app . The busybox pod writes a message to stdout every three seconds: USD oc run busybox --image=busybox -- sh -c 'while true; do echo "My life is my message"; sleep 3; done' USD oc logs -f busybox My life is my message My life is my message My life is my message ... You can look up the UUID of the app namespace where the busybox pod runs: USD oc get ns/app -ojson | jq .metadata.uid "794e1e1a-b9f5-4958-a190-e76a9b53d7bf" In your ClusterLogForwarder custom resource (CR), you configure the infrastructure , audit , and application log types as inputs to the all-logs pipeline.
You also connect this pipeline to cw output, which forwards the logs to a CloudWatch instance in the us-east-2 region: apiVersion: "logging.openshift.io/v1" kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - name: cw type: cloudwatch cloudwatch: groupBy: logType region: us-east-2 secret: name: cw-secret pipelines: - name: all-logs inputRefs: - infrastructure - audit - application outputRefs: - cw Each region in CloudWatch contains three levels of objects: log group log stream log event With groupBy: logType in the ClusterLogForwarding CR, the three log types in the inputRefs produce three log groups in Amazon Cloudwatch: USD aws --output json logs describe-log-groups | jq .logGroups[].logGroupName "mycluster-7977k.application" "mycluster-7977k.audit" "mycluster-7977k.infrastructure" Each of the log groups contains log streams: USD aws --output json logs describe-log-streams --log-group-name mycluster-7977k.application | jq .logStreams[].logStreamName "kubernetes.var.log.containers.busybox_app_busybox-da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76.log" USD aws --output json logs describe-log-streams --log-group-name mycluster-7977k.audit | jq .logStreams[].logStreamName "ip-10-0-131-228.us-east-2.compute.internal.k8s-audit.log" "ip-10-0-131-228.us-east-2.compute.internal.linux-audit.log" "ip-10-0-131-228.us-east-2.compute.internal.openshift-audit.log" ... USD aws --output json logs describe-log-streams --log-group-name mycluster-7977k.infrastructure | jq .logStreams[].logStreamName "ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-69f9fd9b58-zqzw5_openshift-oauth-apiserver_oauth-apiserver-453c5c4ee026fe20a6139ba6b1cdd1bed25989c905bf5ac5ca211b7cbb5c3d7b.log" "ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-797774f7c5-lftrx_openshift-apiserver_openshift-apiserver-ce51532df7d4e4d5f21c4f4be05f6575b93196336be0027067fd7d93d70f66a4.log" "ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-797774f7c5-lftrx_openshift-apiserver_openshift-apiserver-check-endpoints-82a9096b5931b5c3b1d6dc4b66113252da4a6472c9fff48623baee761911a9ef.log" ... Each log stream contains log events. 
To see a log event from the busybox Pod, you specify its log stream from the application log group: USD aws logs get-log-events --log-group-name mycluster-7977k.application --log-stream-name kubernetes.var.log.containers.busybox_app_busybox-da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76.log { "events": [ { "timestamp": 1629422704178, "message": "{\"docker\":{\"container_id\":\"da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76\"},\"kubernetes\":{\"container_name\":\"busybox\",\"namespace_name\":\"app\",\"pod_name\":\"busybox\",\"container_image\":\"docker.io/library/busybox:latest\",\"container_image_id\":\"docker.io/library/busybox@sha256:0f354ec1728d9ff32edcd7d1b8bbdfc798277ad36120dc3dc683be44524c8b60\",\"pod_id\":\"870be234-90a3-4258-b73f-4f4d6e2777c7\",\"host\":\"ip-10-0-216-3.us-east-2.compute.internal\",\"labels\":{\"run\":\"busybox\"},\"master_url\":\"https://kubernetes.default.svc\",\"namespace_id\":\"794e1e1a-b9f5-4958-a190-e76a9b53d7bf\",\"namespace_labels\":{\"kubernetes_io/metadata_name\":\"app\"}},\"message\":\"My life is my message\",\"level\":\"unknown\",\"hostname\":\"ip-10-0-216-3.us-east-2.compute.internal\",\"pipeline_metadata\":{\"collector\":{\"ipaddr4\":\"10.0.216.3\",\"inputname\":\"fluent-plugin-systemd\",\"name\":\"fluentd\",\"received_at\":\"2021-08-20T01:25:08.085760+00:00\",\"version\":\"1.7.4 1.6.0\"}},\"@timestamp\":\"2021-08-20T01:25:04.178986+00:00\",\"viaq_index_name\":\"app-write\",\"viaq_msg_id\":\"NWRjZmUyMWQtZjgzNC00MjI4LTk3MjMtNTk3NmY3ZjU4NDk1\",\"log_type\":\"application\",\"time\":\"2021-08-20T01:25:04+00:00\"}", "ingestionTime": 1629422744016 }, ... Example: Customizing the prefix in log group names In the log group names, you can replace the default infrastructureName prefix, mycluster-7977k , with an arbitrary string like demo-group-prefix . To make this change, you update the groupPrefix field in the ClusterLogForwarding CR: cloudwatch: groupBy: logType groupPrefix: demo-group-prefix region: us-east-2 The value of groupPrefix replaces the default infrastructureName prefix: USD aws --output json logs describe-log-groups | jq .logGroups[].logGroupName "demo-group-prefix.application" "demo-group-prefix.audit" "demo-group-prefix.infrastructure" Example: Naming log groups after application namespace names For each application namespace in your cluster, you can create a log group in CloudWatch whose name is based on the name of the application namespace. If you delete an application namespace object and create a new one that has the same name, CloudWatch continues using the same log group as before. If you consider successive application namespace objects that have the same name as equivalent to each other, use the approach described in this example. Otherwise, if you need to distinguish the resulting log groups from each other, see the following "Naming log groups for application namespace UUIDs" section instead. To create application log groups whose names are based on the names of the application namespaces, you set the value of the groupBy field to namespaceName in the ClusterLogForwarder CR: cloudwatch: groupBy: namespaceName region: us-east-2 Setting groupBy to namespaceName affects the application log group only. It does not affect the audit and infrastructure log groups. In Amazon Cloudwatch, the namespace name appears at the end of each log group name. 
Because there is a single application namespace, "app", the following output shows a new mycluster-7977k.app log group instead of mycluster-7977k.application : USD aws --output json logs describe-log-groups | jq .logGroups[].logGroupName "mycluster-7977k.app" "mycluster-7977k.audit" "mycluster-7977k.infrastructure" If the cluster in this example had contained multiple application namespaces, the output would show multiple log groups, one for each namespace. The groupBy field affects the application log group only. It does not affect the audit and infrastructure log groups. Example: Naming log groups after application namespace UUIDs For each application namespace in your cluster, you can create a log group in CloudWatch whose name is based on the UUID of the application namespace. If you delete an application namespace object and create a new one, CloudWatch creates a new log group. If you consider successive application namespace objects with the same name as different from each other, use the approach described in this example. Otherwise, see the preceding "Example: Naming log groups for application namespace names" section instead. To name log groups after application namespace UUIDs, you set the value of the groupBy field to namespaceUUID in the ClusterLogForwarder CR: cloudwatch: groupBy: namespaceUUID region: us-east-2 In Amazon Cloudwatch, the namespace UUID appears at the end of each log group name. Because there is a single application namespace, "app", the following output shows a new mycluster-7977k.794e1e1a-b9f5-4958-a190-e76a9b53d7bf log group instead of mycluster-7977k.application : USD aws --output json logs describe-log-groups | jq .logGroups[].logGroupName "mycluster-7977k.794e1e1a-b9f5-4958-a190-e76a9b53d7bf" // uid of the "app" namespace "mycluster-7977k.audit" "mycluster-7977k.infrastructure" The groupBy field affects the application log group only. It does not affect the audit and infrastructure log groups. 7.12. Forwarding logs to Loki You can forward logs to an external Loki logging system in addition to, or instead of, the internal default OpenShift Container Platform Elasticsearch instance. To configure log forwarding to Loki, you must create a ClusterLogForwarder custom resource (CR) with an output to Loki, and a pipeline that uses the output. The output to Loki can use the HTTP (insecure) or HTTPS (secure HTTP) connection. Prerequisites You must have a Loki logging system running at the URL you specify with the url field in the CR. Procedure Create or edit a YAML file that defines the ClusterLogForwarder CR object: apiVersion: "logging.openshift.io/v1" kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: loki-insecure 3 type: "loki" 4 url: http://loki.insecure.com:3100 5 - name: loki-secure type: "loki" url: https://loki.secure.com:3100 6 secret: name: loki-secret 7 pipelines: - name: application-logs 8 inputRefs: 9 - application - audit outputRefs: - loki-secure 10 loki: tenantKey: kubernetes.namespace_name 11 labelKeys: kubernetes.labels.foo 12 1 The name of the ClusterLogForwarder CR must be instance . 2 The namespace for the ClusterLogForwarder CR must be openshift-logging . 3 Specify a name for the output. 4 Specify the type as "loki" . 5 Specify the URL and port of the Loki system as a valid absolute URL. You can use the http (insecure) or https (secure HTTP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP Address. 
6 For a secure connection, you can specify an https or http URL that you authenticate by specifying a secret . 7 For an https prefix, specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the openshift-logging project, and must have keys of: tls.crt , tls.key , and ca-bundle.crt that point to the respective certificates that they represent. Otherwise, for http and https prefixes, you can specify a secret that contains a username and password. For more information, see the following "Example: Setting secret that contains a username and password." 8 Optional: Specify a name for the pipeline. 9 Specify which log types to forward by using the pipeline: application, infrastructure , or audit . 10 Specify the name of the output to use when forwarding logs with this pipeline. 11 Optional: Specify a meta-data key field to generate values for the TenantID field in Loki. For example, setting tenantKey: kubernetes.namespace_name uses the names of the Kubernetes namespaces as values for tenant IDs in Loki. To see which other log record fields you can specify, see the "Log Record Fields" link in the following "Additional resources" section. 12 Optional: Specify a list of meta-data field keys to replace the default Loki labels. Loki label names must match the regular expression [a-zA-Z_:][a-zA-Z0-9_:]* . Illegal characters in meta-data keys are replaced with _ to form the label name. For example, the kubernetes.labels.foo meta-data key becomes Loki label kubernetes_labels_foo . If you do not set labelKeys , the default value is: [log_type, kubernetes.namespace_name, kubernetes.pod_name, kubernetes_host] . Keep the set of labels small because Loki limits the size and number of labels allowed. See Configuring Loki, limits_config . You can still query based on any log record field using query filters. Note Because Loki requires log streams to be correctly ordered by timestamp, labelKeys always includes the kubernetes_host label set, even if you do not specify it. This inclusion ensures that each stream originates from a single host, which prevents timestamps from becoming disordered due to clock differences on different hosts. Create the CR object: USD oc create -f <file-name>.yaml 7.12.1. Troubleshooting Loki "entry out of order" errors If Fluentd forwards a large block of messages that exceeds the rate limit to a Loki logging system, Loki generates "entry out of order" errors. To fix this issue, you update some values in the Loki server configuration file, loki.yaml . Note loki.yaml is not available on Grafana-hosted Loki. This topic does not apply to Grafana-hosted Loki servers. Conditions The ClusterLogForwarder custom resource is configured to forward logs to Loki. Your system sends a block of messages that is larger than 2 MB to Loki, such as: When you enter oc logs -c fluentd , the Fluentd logs in your OpenShift Logging cluster show the following messages: 429 Too Many Requests Ingestion rate limit exceeded (limit: 8388608 bytes/sec) while attempting to ingest '2140' lines totaling '3285284' bytes '429 Too Many Requests Ingestion rate limit exceeded' or '500 Internal Server Error rpc error: code = ResourceExhausted desc = grpc: received message larger than max (5277702 vs.
4194304)' When you open the logs on the Loki server, they display entry out of order messages like these: ,\nentry with timestamp 2021-08-18 05:58:55.061936 +0000 UTC ignored, reason: 'entry out of order' for stream: {fluentd_thread=\"flush_thread_0\", log_type=\"audit\"},\nentry with timestamp 2021-08-18 06:01:18.290229 +0000 UTC ignored, reason: 'entry out of order' for stream: {fluentd_thread="flush_thread_0", log_type="audit"} Procedure Update the following fields in the loki.yaml configuration file on the Loki server with the values shown here: grpc_server_max_recv_msg_size: 8388608 chunk_target_size: 8388608 ingestion_rate_mb: 8 ingestion_burst_size_mb: 16 Apply the changes in loki.yaml to the Loki server. Example loki.yaml file auth_enabled: false server: http_listen_port: 3100 grpc_listen_port: 9096 grpc_server_max_recv_msg_size: 8388608 ingester: wal: enabled: true dir: /tmp/wal lifecycler: address: 127.0.0.1 ring: kvstore: store: inmemory replication_factor: 1 final_sleep: 0s chunk_idle_period: 1h # Any chunk not receiving new logs in this time will be flushed chunk_target_size: 8388608 max_chunk_age: 1h # All chunks will be flushed when they hit this age, default is 1h chunk_retain_period: 30s # Must be greater than index read cache TTL if using an index cache (Default index read cache TTL is 5m) max_transfer_retries: 0 # Chunk transfers disabled schema_config: configs: - from: 2020-10-24 store: boltdb-shipper object_store: filesystem schema: v11 index: prefix: index_ period: 24h storage_config: boltdb_shipper: active_index_directory: /tmp/loki/boltdb-shipper-active cache_location: /tmp/loki/boltdb-shipper-cache cache_ttl: 24h # Can be increased for faster performance over longer query periods, uses more disk space shared_store: filesystem filesystem: directory: /tmp/loki/chunks compactor: working_directory: /tmp/loki/boltdb-shipper-compactor shared_store: filesystem limits_config: reject_old_samples: true reject_old_samples_max_age: 12h ingestion_rate_mb: 8 ingestion_burst_size_mb: 16 chunk_store_config: max_look_back_period: 0s table_manager: retention_deletes_enabled: false retention_period: 0s ruler: storage: type: local local: directory: /tmp/loki/rules rule_path: /tmp/loki/rules-temp alertmanager_url: http://localhost:9093 ring: kvstore: store: inmemory enable_api: true Additional resources Configuring Loki Additional resources Log Record Fields . Configuring Loki server 7.13. Forwarding application logs from specific projects You can use the Cluster Log Forwarder to send a copy of the application logs from specific projects to an external log aggregator. You can do this in addition to, or instead of, using the default Elasticsearch log store. You must also configure the external log aggregator to receive log data from OpenShift Container Platform. To configure forwarding application logs from a project, you must create a ClusterLogForwarder custom resource (CR) with at least one input from a project, optional outputs for other log aggregators, and pipelines that use those inputs and outputs. Prerequisites You must have a logging server that is configured to receive the logging data using the specified protocol or format. 
Procedure Create or edit a YAML file that defines the ClusterLogForwarder CR object: apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: fluentd-server-secure 3 type: fluentdForward 4 url: 'tls://fluentdserver.security.example.com:24224' 5 secret: 6 name: fluentd-secret - name: fluentd-server-insecure type: fluentdForward url: 'tcp://fluentdserver.home.example.com:24224' inputs: 7 - name: my-app-logs application: namespaces: - my-project pipelines: - name: forward-to-fluentd-insecure 8 inputRefs: 9 - my-app-logs outputRefs: 10 - fluentd-server-insecure parse: json 11 labels: project: "my-project" 12 - name: forward-to-fluentd-secure 13 inputRefs: - application - audit - infrastructure outputRefs: - fluentd-server-secure - default labels: clusterId: "C1234" 1 The name of the ClusterLogForwarder CR must be instance . 2 The namespace for the ClusterLogForwarder CR must be openshift-logging . 3 Specify a name for the output. 4 Specify the output type: elasticsearch , fluentdForward , syslog , or kafka . 5 Specify the URL and port of the external log aggregator as a valid absolute URL. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address. 6 If using a tls prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the openshift-logging project and have tls.crt , tls.key , and ca-bundle.crt keys that each point to the certificates they represent. 7 Configuration for an input to filter application logs from the specified projects. 8 Configuration for a pipeline to use the input to send project application logs to an external Fluentd instance. 9 The my-app-logs input. 10 The name of the output to use. 11 Optional: Specify whether to forward structured JSON log entries as JSON objects in the structured field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the structured field and instead sends the log entry to the default index, app-00000x . 12 Optional: String. One or more labels to add to the logs. 13 Configuration for a pipeline to send logs to other log aggregators. Optional: Specify a name for the pipeline. Specify which log types to forward by using the pipeline: application, infrastructure , or audit . Specify the name of the output to use when forwarding logs with this pipeline. Optional: Specify the default output to forward logs to the internal Elasticsearch instance. Optional: String. One or more labels to add to the logs. Create the CR object: USD oc create -f <file-name>.yaml 7.14. Forwarding application logs from specific pods As a cluster administrator, you can use Kubernetes pod labels to gather log data from specific pods and forward it to a log collector. Suppose that you have an application composed of pods running alongside other pods in various namespaces. If those pods have labels that identify the application, you can gather and output their log data to a specific log collector. To specify the pod labels, you use one or more matchLabels key-value pairs. If you specify multiple key-value pairs, the pods must match all of them to be selected. Procedure Create or edit a YAML file that defines the ClusterLogForwarder CR object. In the file, specify the pod labels using simple equality-based selectors under inputs[].name.application.selector.matchLabels , as shown in the following example. 
Example ClusterLogForwarder CR YAML file apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: pipelines: - inputRefs: [ myAppLogData ] 3 outputRefs: [ default ] 4 parse: json 5 inputs: 6 - name: myAppLogData application: selector: matchLabels: 7 environment: production app: nginx namespaces: 8 - app1 - app2 outputs: 9 - default ... 1 The name of the ClusterLogForwarder CR must be instance . 2 The namespace for the ClusterLogForwarder CR must be openshift-logging . 3 Specify one or more comma-separated values from inputs[].name . 4 Specify one or more comma-separated values from outputs[] . 5 Optional: Specify whether to forward structured JSON log entries as JSON objects in the structured field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the structured field and instead sends the log entry to the default index, app-00000x . 6 Define a unique inputs[].name for each application that has a unique set of pod labels. 7 Specify the key-value pairs of pod labels whose log data you want to gather. You must specify both a key and value, not just a key. To be selected, the pods must match all the key-value pairs. 8 Optional: Specify one or more namespaces. 9 Specify one or more outputs to forward your log data to. The optional default output shown here sends log data to the internal Elasticsearch instance. Optional: To restrict the gathering of log data to specific namespaces, use inputs[].name.application.namespaces , as shown in the preceding example. Optional: You can send log data from additional applications that have different pod labels to the same pipeline. For each unique combination of pod labels, create an additional inputs[].name section similar to the one shown. Update the selectors to match the pod labels of this application. Add the new inputs[].name value to inputRefs . For example: Create the CR object: USD oc create -f <file-name>.yaml Additional resources For more information on matchLabels in Kubernetes, see Resources that support set-based requirements . Additional resources Network policy audit logging 7.15. Troubleshooting log forwarding When you create a ClusterLogForwarder custom resource (CR), if the Red Hat OpenShift Logging Operator does not redeploy the Fluentd pods automatically, you can delete the Fluentd pods to force them to redeploy. Prerequisites You have created a ClusterLogForwarder custom resource (CR) object. Procedure Delete the Fluentd pods to force them to redeploy. USD oc delete pod --selector logging-infra=collector | [
"apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: elasticsearch-secure 3 type: \"elasticsearch\" url: https://elasticsearch.secure.com:9200 secret: name: elasticsearch - name: elasticsearch-insecure 4 type: \"elasticsearch\" url: http://elasticsearch.insecure.com:9200 - name: kafka-app 5 type: \"kafka\" url: tls://kafka.secure.com:9093/app-topic inputs: 6 - name: my-app-logs application: namespaces: - my-project pipelines: - name: audit-logs 7 inputRefs: - audit outputRefs: - elasticsearch-secure - default parse: json 8 labels: secure: \"true\" 9 datacenter: \"east\" - name: infrastructure-logs 10 inputRefs: - infrastructure outputRefs: - elasticsearch-insecure labels: datacenter: \"west\" - name: my-app 11 inputRefs: - my-app-logs outputRefs: - default - inputRefs: 12 - application outputRefs: - kafka-app labels: datacenter: \"south\"",
"oc create secret generic -n openshift-logging <my-secret> --from-file=tls.key=<your_key_file> --from-file=tls.crt=<your_crt_file> --from-file=ca-bundle.crt=<your_bundle_file> --from-literal=username=<your_username> --from-literal=password=<your_password>",
"apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: elasticsearch-insecure 3 type: \"elasticsearch\" 4 url: http://elasticsearch.insecure.com:9200 5 - name: elasticsearch-secure type: \"elasticsearch\" url: https://elasticsearch.secure.com:9200 6 secret: name: es-secret 7 pipelines: - name: application-logs 8 inputRefs: 9 - application - audit outputRefs: - elasticsearch-secure 10 - default 11 parse: json 12 labels: myLabel: \"myValue\" 13 - name: infrastructure-audit-logs 14 inputRefs: - infrastructure outputRefs: - elasticsearch-insecure labels: logs: \"audit-infra\"",
"oc create -f <file-name>.yaml",
"apiVersion: v1 kind: Secret metadata: name: openshift-test-secret data: username: dGVzdHVzZXJuYW1lCg== password: dGVzdHBhc3N3b3JkCg==",
"oc create secret -n openshift-logging openshift-test-secret.yaml",
"kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - name: elasticsearch type: \"elasticsearch\" url: https://elasticsearch.secure.com:9200 secret: name: openshift-test-secret",
"oc create -f <file-name>.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: fluentd-server-secure 3 type: fluentdForward 4 url: 'tls://fluentdserver.security.example.com:24224' 5 secret: 6 name: fluentd-secret - name: fluentd-server-insecure type: fluentdForward url: 'tcp://fluentdserver.home.example.com:24224' pipelines: - name: forward-to-fluentd-secure 7 inputRefs: 8 - application - audit outputRefs: - fluentd-server-secure 9 - default 10 parse: json 11 labels: clusterId: \"C1234\" 12 - name: forward-to-fluentd-insecure 13 inputRefs: - infrastructure outputRefs: - fluentd-server-insecure labels: clusterId: \"C1234\"",
"oc create -f <file-name>.yaml",
"input { tcp { codec => fluent { nanosecond_precision => true } port => 24114 } } filter { } output { stdout { codec => rubydebug } }",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: rsyslog-east 3 type: syslog 4 syslog: 5 facility: local0 rfc: RFC3164 payloadKey: message severity: informational url: 'tls://rsyslogserver.east.example.com:514' 6 secret: 7 name: syslog-secret - name: rsyslog-west type: syslog syslog: appName: myapp facility: user msgID: mymsg procID: myproc rfc: RFC5424 severity: debug url: 'udp://rsyslogserver.west.example.com:514' pipelines: - name: syslog-east 8 inputRefs: 9 - audit - application outputRefs: 10 - rsyslog-east - default 11 parse: json 12 labels: secure: \"true\" 13 syslog: \"east\" - name: syslog-west 14 inputRefs: - infrastructure outputRefs: - rsyslog-west - default labels: syslog: \"west\"",
"oc create -f <file-name>.yaml",
"spec: outputs: - name: syslogout syslog: addLogSource: true facility: user payloadKey: message rfc: RFC3164 severity: debug tag: mytag type: syslog url: tls://syslog-receiver.openshift-logging.svc:24224 pipelines: - inputRefs: - application name: test-app outputRefs: - syslogout",
"<15>1 2020-11-15T17:06:14+00:00 fluentd-9hkb4 mytag - - - {\"msgcontent\"=>\"Message Contents\", \"timestamp\"=>\"2020-11-15 17:06:09\", \"tag_key\"=>\"rec_tag\", \"index\"=>56}",
"<15>1 2020-11-16T10:49:37+00:00 crc-j55b9-master-0 mytag - - - namespace_name=clo-test-6327,pod_name=log-generator-ff9746c49-qxm7l,container_name=log-generator,message={\"msgcontent\":\"My life is my message\", \"timestamp\":\"2020-11-16 10:49:36\", \"tag_key\":\"rec_tag\", \"index\":76}",
"apiVersion: v1 kind: Secret metadata: name: cw-secret namespace: openshift-logging data: aws_access_key_id: QUtJQUlPU0ZPRE5ON0VYQU1QTEUK aws_secret_access_key: d0phbHJYVXRuRkVNSS9LN01ERU5HL2JQeFJmaUNZRVhBTVBMRUtFWQo=",
"oc apply -f cw-secret.yaml",
"apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: cw 3 type: cloudwatch 4 cloudwatch: groupBy: logType 5 groupPrefix: <group prefix> 6 region: us-east-2 7 secret: name: cw-secret 8 pipelines: - name: infra-logs 9 inputRefs: 10 - infrastructure - audit - application outputRefs: - cw 11",
"oc create -f <file-name>.yaml",
"oc get Infrastructure/cluster -ojson | jq .status.infrastructureName \"mycluster-7977k\"",
"oc run busybox --image=busybox -- sh -c 'while true; do echo \"My life is my message\"; sleep 3; done' oc logs -f busybox My life is my message My life is my message My life is my message",
"oc get ns/app -ojson | jq .metadata.uid \"794e1e1a-b9f5-4958-a190-e76a9b53d7bf\"",
"apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - name: cw type: cloudwatch cloudwatch: groupBy: logType region: us-east-2 secret: name: cw-secret pipelines: - name: all-logs inputRefs: - infrastructure - audit - application outputRefs: - cw",
"aws --output json logs describe-log-groups | jq .logGroups[].logGroupName \"mycluster-7977k.application\" \"mycluster-7977k.audit\" \"mycluster-7977k.infrastructure\"",
"aws --output json logs describe-log-streams --log-group-name mycluster-7977k.application | jq .logStreams[].logStreamName \"kubernetes.var.log.containers.busybox_app_busybox-da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76.log\"",
"aws --output json logs describe-log-streams --log-group-name mycluster-7977k.audit | jq .logStreams[].logStreamName \"ip-10-0-131-228.us-east-2.compute.internal.k8s-audit.log\" \"ip-10-0-131-228.us-east-2.compute.internal.linux-audit.log\" \"ip-10-0-131-228.us-east-2.compute.internal.openshift-audit.log\"",
"aws --output json logs describe-log-streams --log-group-name mycluster-7977k.infrastructure | jq .logStreams[].logStreamName \"ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-69f9fd9b58-zqzw5_openshift-oauth-apiserver_oauth-apiserver-453c5c4ee026fe20a6139ba6b1cdd1bed25989c905bf5ac5ca211b7cbb5c3d7b.log\" \"ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-797774f7c5-lftrx_openshift-apiserver_openshift-apiserver-ce51532df7d4e4d5f21c4f4be05f6575b93196336be0027067fd7d93d70f66a4.log\" \"ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-797774f7c5-lftrx_openshift-apiserver_openshift-apiserver-check-endpoints-82a9096b5931b5c3b1d6dc4b66113252da4a6472c9fff48623baee761911a9ef.log\"",
"aws logs get-log-events --log-group-name mycluster-7977k.application --log-stream-name kubernetes.var.log.containers.busybox_app_busybox-da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76.log { \"events\": [ { \"timestamp\": 1629422704178, \"message\": \"{\\\"docker\\\":{\\\"container_id\\\":\\\"da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76\\\"},\\\"kubernetes\\\":{\\\"container_name\\\":\\\"busybox\\\",\\\"namespace_name\\\":\\\"app\\\",\\\"pod_name\\\":\\\"busybox\\\",\\\"container_image\\\":\\\"docker.io/library/busybox:latest\\\",\\\"container_image_id\\\":\\\"docker.io/library/busybox@sha256:0f354ec1728d9ff32edcd7d1b8bbdfc798277ad36120dc3dc683be44524c8b60\\\",\\\"pod_id\\\":\\\"870be234-90a3-4258-b73f-4f4d6e2777c7\\\",\\\"host\\\":\\\"ip-10-0-216-3.us-east-2.compute.internal\\\",\\\"labels\\\":{\\\"run\\\":\\\"busybox\\\"},\\\"master_url\\\":\\\"https://kubernetes.default.svc\\\",\\\"namespace_id\\\":\\\"794e1e1a-b9f5-4958-a190-e76a9b53d7bf\\\",\\\"namespace_labels\\\":{\\\"kubernetes_io/metadata_name\\\":\\\"app\\\"}},\\\"message\\\":\\\"My life is my message\\\",\\\"level\\\":\\\"unknown\\\",\\\"hostname\\\":\\\"ip-10-0-216-3.us-east-2.compute.internal\\\",\\\"pipeline_metadata\\\":{\\\"collector\\\":{\\\"ipaddr4\\\":\\\"10.0.216.3\\\",\\\"inputname\\\":\\\"fluent-plugin-systemd\\\",\\\"name\\\":\\\"fluentd\\\",\\\"received_at\\\":\\\"2021-08-20T01:25:08.085760+00:00\\\",\\\"version\\\":\\\"1.7.4 1.6.0\\\"}},\\\"@timestamp\\\":\\\"2021-08-20T01:25:04.178986+00:00\\\",\\\"viaq_index_name\\\":\\\"app-write\\\",\\\"viaq_msg_id\\\":\\\"NWRjZmUyMWQtZjgzNC00MjI4LTk3MjMtNTk3NmY3ZjU4NDk1\\\",\\\"log_type\\\":\\\"application\\\",\\\"time\\\":\\\"2021-08-20T01:25:04+00:00\\\"}\", \"ingestionTime\": 1629422744016 },",
"cloudwatch: groupBy: logType groupPrefix: demo-group-prefix region: us-east-2",
"aws --output json logs describe-log-groups | jq .logGroups[].logGroupName \"demo-group-prefix.application\" \"demo-group-prefix.audit\" \"demo-group-prefix.infrastructure\"",
"cloudwatch: groupBy: namespaceName region: us-east-2",
"aws --output json logs describe-log-groups | jq .logGroups[].logGroupName \"mycluster-7977k.app\" \"mycluster-7977k.audit\" \"mycluster-7977k.infrastructure\"",
"cloudwatch: groupBy: namespaceUUID region: us-east-2",
"aws --output json logs describe-log-groups | jq .logGroups[].logGroupName \"mycluster-7977k.794e1e1a-b9f5-4958-a190-e76a9b53d7bf\" // uid of the \"app\" namespace \"mycluster-7977k.audit\" \"mycluster-7977k.infrastructure\"",
"apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: loki-insecure 3 type: \"loki\" 4 url: http://loki.insecure.com:3100 5 - name: loki-secure type: \"loki\" url: https://loki.secure.com:3100 6 secret: name: loki-secret 7 pipelines: - name: application-logs 8 inputRefs: 9 - application - audit outputRefs: - loki-secure 10 loki: tenantKey: kubernetes.namespace_name 11 labelKeys: kubernetes.labels.foo 12",
"oc create -f <file-name>.yaml",
"\"values\":[[\"1630410392689800468\",\"{\\\"kind\\\":\\\"Event\\\",\\\"apiVersion\\\": .... ... ... ... \\\"received_at\\\":\\\"2021-08-31T11:46:32.800278+00:00\\\",\\\"version\\\":\\\"1.7.4 1.6.0\\\"}},\\\"@timestamp\\\":\\\"2021-08-31T11:46:32.799692+00:00\\\",\\\"viaq_index_name\\\":\\\"audit-write\\\",\\\"viaq_msg_id\\\":\\\"MzFjYjJkZjItNjY0MC00YWU4LWIwMTEtNGNmM2E5ZmViMGU4\\\",\\\"log_type\\\":\\\"audit\\\"}\"]]}]}",
"429 Too Many Requests Ingestion rate limit exceeded (limit: 8388608 bytes/sec) while attempting to ingest '2140' lines totaling '3285284' bytes 429 Too Many Requests Ingestion rate limit exceeded' or '500 Internal Server Error rpc error: code = ResourceExhausted desc = grpc: received message larger than max (5277702 vs. 4194304)'",
",\\nentry with timestamp 2021-08-18 05:58:55.061936 +0000 UTC ignored, reason: 'entry out of order' for stream: {fluentd_thread=\\\"flush_thread_0\\\", log_type=\\\"audit\\\"},\\nentry with timestamp 2021-08-18 06:01:18.290229 +0000 UTC ignored, reason: 'entry out of order' for stream: {fluentd_thread=\"flush_thread_0\", log_type=\"audit\"}",
"auth_enabled: false server: http_listen_port: 3100 grpc_listen_port: 9096 grpc_server_max_recv_msg_size: 8388608 ingester: wal: enabled: true dir: /tmp/wal lifecycler: address: 127.0.0.1 ring: kvstore: store: inmemory replication_factor: 1 final_sleep: 0s chunk_idle_period: 1h # Any chunk not receiving new logs in this time will be flushed chunk_target_size: 8388608 max_chunk_age: 1h # All chunks will be flushed when they hit this age, default is 1h chunk_retain_period: 30s # Must be greater than index read cache TTL if using an index cache (Default index read cache TTL is 5m) max_transfer_retries: 0 # Chunk transfers disabled schema_config: configs: - from: 2020-10-24 store: boltdb-shipper object_store: filesystem schema: v11 index: prefix: index_ period: 24h storage_config: boltdb_shipper: active_index_directory: /tmp/loki/boltdb-shipper-active cache_location: /tmp/loki/boltdb-shipper-cache cache_ttl: 24h # Can be increased for faster performance over longer query periods, uses more disk space shared_store: filesystem filesystem: directory: /tmp/loki/chunks compactor: working_directory: /tmp/loki/boltdb-shipper-compactor shared_store: filesystem limits_config: reject_old_samples: true reject_old_samples_max_age: 12h ingestion_rate_mb: 8 ingestion_burst_size_mb: 16 chunk_store_config: max_look_back_period: 0s table_manager: retention_deletes_enabled: false retention_period: 0s ruler: storage: type: local local: directory: /tmp/loki/rules rule_path: /tmp/loki/rules-temp alertmanager_url: http://localhost:9093 ring: kvstore: store: inmemory enable_api: true",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: fluentd-server-secure 3 type: fluentdForward 4 url: 'tls://fluentdserver.security.example.com:24224' 5 secret: 6 name: fluentd-secret - name: fluentd-server-insecure type: fluentdForward url: 'tcp://fluentdserver.home.example.com:24224' inputs: 7 - name: my-app-logs application: namespaces: - my-project pipelines: - name: forward-to-fluentd-insecure 8 inputRefs: 9 - my-app-logs outputRefs: 10 - fluentd-server-insecure parse: json 11 labels: project: \"my-project\" 12 - name: forward-to-fluentd-secure 13 inputRefs: - application - audit - infrastructure outputRefs: - fluentd-server-secure - default labels: clusterId: \"C1234\"",
"oc create -f <file-name>.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: pipelines: - inputRefs: [ myAppLogData ] 3 outputRefs: [ default ] 4 parse: json 5 inputs: 6 - name: myAppLogData application: selector: matchLabels: 7 environment: production app: nginx namespaces: 8 - app1 - app2 outputs: 9 - default",
"- inputRefs: [ myAppLogData, myOtherAppLogData ]",
"oc create -f <file-name>.yaml",
"oc delete pod --selector logging-infra=collector"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/logging/cluster-logging-external |
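As a rough verification sketch for the forwarder examples above (not part of the original procedures), you can confirm that a ClusterLogForwarder CR was accepted and that the collector pods picked up the new configuration. The selector and container name below reuse the logging-infra=collector selector and the oc logs -c fluentd convention shown earlier; the pod name is a placeholder.
oc get clusterlogforwarder instance -n openshift-logging -o yaml   # inspect the CR and its status conditions
oc get pods -n openshift-logging --selector logging-infra=collector   # list the collector (Fluentd) pods
oc logs <collector-pod-name> -c fluentd -n openshift-logging   # check a collector pod for delivery errors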
10.5.17. Group | 10.5.17. Group Specifies the group name of the Apache HTTP Server processes. This directive has been deprecated for the configuration of virtual hosts. By default, Group is set to apache . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-apache-group |
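For illustration only, a minimal httpd.conf sketch of how the Group directive is typically set alongside the related User directive; apache is the default value noted above, and the configuration file path can vary between installations.
# /etc/httpd/conf/httpd.conf (path may vary by installation)
# Run the Apache HTTP Server child processes under the unprivileged apache user and group
User apache
Group apache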
Chapter 14. Updating network configuration | Chapter 14. Updating network configuration You must complete some network configuration to prepare for the overcloud upgrade. 14.1. Updating network interface templates Red Hat OpenStack Platform includes a script to automatically add the missing parameters to your NIC template files. Procedure Log in to the undercloud as the stack user. Source the stackrc file. On the undercloud, create a file called update-nic-templates.sh and include the following content in the file: If you use a custom overcloud name, set the STACK_NAME variable to the name of your overcloud. The default name for an overcloud stack is overcloud . If you use a custom roles_data file, set the ROLES_DATA variable to the location of the custom file. If you use the default roles_data file, leave the variable as /usr/share/openstack-tripleo-heat-templates/roles_data.yaml . If you use a custom network_data file, set the NETWORK_DATA variable to the location of the custom file. If you use the default network_data file, leave the variable as /usr/share/openstack-tripleo-heat-templates/network_data.yaml . Run /usr/share/openstack-tripleo-heat-templates/tools/merge-new-params-nic-config-script.py -h to see a list of options to add to the script. Add executable permissions to the script: Optional: If you use a spine-leaf network topology for your RHOSP environment, check the roles_data.yaml file and ensure that it uses the correct role names for the NIC templates for your deployment. The script uses the value of the deprecated_nic_config_name parameter in the roles_data.yaml file. Run the script: The script saves a copy of each custom NIC template and updates each template with the missing parameters. The script also skips any roles that do not have a custom template: 14.2. Maintaining Open vSwitch compatibility during the upgrade Red Hat OpenStack Platform 13 uses Open vSwitch (OVS) as the default ML2 back end for OpenStack Networking (neutron). Newer versions of Red Hat OpenStack Platform use Open Virtual Network (OVN), which expands upon OVS capabilities. However, to ensure a stable upgrade, you must maintain OVS functionality for the duration of the upgrade and then migrate to OVN after you complete the upgrade. To maintain OVS compatibility during the upgrade, include the following environment file as part of your environment file collection: /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovs.yaml Note When you include the neutron-ovs.yaml environment file, check if the neutron-ovs-dvr.yaml environment file is included in your environment file collection. You must include the neutron-ovs.yaml environment file before the neutron-ovs-dvr.yaml file to avoid failures during the upgrade. Treat this file as part of your deployment until you have completed the migration to OVN. Include the file with all overcloud upgrade and deployment commands: openstack overcloud upgrade prepare openstack overcloud upgrade converge openstack overcloud deploy openstack overcloud update prepare openstack overcloud update converge Any other command that uses environment files. Troubleshooting OVS compatibility If the upgrade process fails because the parameters defined in the neutron-ovs.yaml file are overwriting the parameters defined in the neutron-ovs-dvr.yaml file, change the order in which you include these files and run the openstack overcloud upgrade prepare and openstack overcloud upgrade run commands again on the affected nodes.
If one of the affected nodes is a Compute node, remove the openstack-neutron* packages from that node. 14.3. Maintaining composable network compatibility during the upgrade The network_data file format for Red Hat OpenStack Platform 16.2 includes new sections that you can use to define additional subnets and routes within a network. However, if you use a custom network_data file, you can still use the network_data file format from Red Hat OpenStack Platform 13. When you upgrade from Red Hat OpenStack Platform 13 to 16.2, use the Red Hat OpenStack Platform 13 network_data file format during and after the upgrade. For more information about Red Hat OpenStack Platform 13 composable network syntax, see Custom composable networks . When you create new overclouds on Red Hat OpenStack Platform 16.2, use the Red Hat OpenStack Platform 16.2 network_data file format. For more information about Red Hat OpenStack Platform 16.2 composable network syntax, see Custom composable networks . | [
"source ~/stackrc",
"#!/bin/bash STACK_NAME=\"overcloud\" ROLES_DATA=\"/usr/share/openstack-tripleo-heat-templates/roles_data.yaml\" NETWORK_DATA=\"/usr/share/openstack-tripleo-heat-templates/network_data.yaml\" NIC_CONFIG_LINES=USD(openstack stack environment show USDSTACK_NAME | grep \"::Net::SoftwareConfig\" | sed -E 's/ *OS::TripleO::// ; s/::Net::SoftwareConfig:// ; s/ http.*user-files/ /') echo \"USDNIC_CONFIG_LINES\" | while read LINE; do ROLE=USD(echo \"USDLINE\" | awk '{print USD1;}') NIC_CONFIG=USD(echo \"USDLINE\" | awk '{print USD2;}') if [ -f \"USDNIC_CONFIG\" ]; then echo \"Updating template for USDROLE role.\" python3 /usr/share/openstack-tripleo-heat-templates/tools/merge-new-params-nic-config-script.py --tht-dir /usr/share/openstack-tripleo-heat-templates --roles-data USDROLES_DATA --network-data USDNETWORK_DATA --role-name \"USDROLE\" --discard-comments yes --template \"USDNIC_CONFIG\" else echo \"No NIC template detected for USDROLE role. Skipping USDROLE role.\" fi done",
"chmod +x update-nic-templates.sh",
"./update-nic-templates.sh",
"No NIC template detected for BlockStorage role. Skipping BlockStorage role. Updating template for CephStorage role. The original template was saved as: /home/stack/templates/custom-nics/ceph-storage.yaml.20200903144835 The update template was saved as: /home/stack/templates/custom-nics/ceph-storage.yaml Updating template for Compute role. The original template was saved as: /home/stack/templates/custom-nics/compute.yaml.20200903144838 The update template was saved as: /home/stack/templates/custom-nics/compute.yaml Updating template for Controller role. The original template was saved as: /home/stack/templates/custom-nics/controller.yaml.20200903144841 The update template was saved as: /home/stack/templates/custom-nics/controller.yaml No NIC template detected for ObjectStorage role. Skipping ObjectStorage role."
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/framework_for_upgrades_13_to_16.2/updating-network-configuration |
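A hedged sketch of the environment-file ordering described above, assuming the stock tripleo-heat-templates paths and that openstack overcloud upgrade prepare accepts the same environment file arguments as openstack overcloud deploy; substitute your own list of -e files, keeping neutron-ovs.yaml ahead of neutron-ovs-dvr.yaml in every upgrade and deployment command.
openstack overcloud upgrade prepare --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovs.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovs-dvr.yaml \
  -e /home/stack/templates/custom-environment.yaml   # placeholder for your remaining environment files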
7.150. mod_wsgi | 7.150. mod_wsgi 7.150.1. RHBA-2012:1358 - mod_wsgi bug fix and enhancement update Updated mod_wsgi packages that fix one bug and add one enhancement are now available for Red Hat Enterprise Linux 6. The mod_wsgi packages provide a Apache httpd module, which implements a WSGI compliant interface for hosting Python based web applications. Bug Fix BZ# 670577 Prior to this update, a misleading warning message from the mod_wsgi utilities was logged during startup of the Apache httpd daemon. This update removes this message from the mod_wsgi module. Enhancement BZ# 719409 With this update, access to the SSL connection state is now available in WSGI scripts using the methods "mod_ssl.is_https" and "mod_ssl.var_lookup". All users of mod_wsgi are advised to upgrade to these updated packages, which fix this bug and add this enhancement. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/mod_wsgi |
Appendix D. Managing Replicas at Domain Level 0 | Appendix D. Managing Replicas at Domain Level 0 This appendix describes managing replicas at domain level 0 (see Chapter 7, Displaying and Raising the Domain Level ). For documentation on managing replicas at domain level 1 , see: Section 4.5, "Creating the Replica: Introduction" Chapter 6, Managing Replication Topology D.1. Replica Information File During the replica creation process, the ipa-replica-prepare utility creates a replica information file named after the replica server in the /var/lib/ipa/ directory. The replica information file is a GPG-encrypted file containing realm and configuration information for the master server. The ipa-replica-install replica setup script configures a Directory Server instance based on the information contained in the replica information file and initiates the replica initialization process, during which the script copies over data from the master server to the replica. A replica information file can only be used to install a replica on the specific machine for which it was created. It cannot be used to create multiple replicas on multiple machines. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/app.replica |
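As a hedged illustration of the domain level 0 flow described above, the commands might look like the following; the host name and IP address are examples, and the replica information file is named after the replica server as noted.
ipa-replica-prepare replica.example.com --ip-address 192.0.2.10   # run on the master; writes /var/lib/ipa/replica-info-replica.example.com.gpg
scp /var/lib/ipa/replica-info-replica.example.com.gpg root@replica.example.com:/var/lib/ipa/   # copy the file to the machine it was created for
ipa-replica-install /var/lib/ipa/replica-info-replica.example.com.gpg   # run on the replica; configures Directory Server and starts replica initialization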
2.5. Turning on Packet Forwarding | 2.5. Turning on Packet Forwarding In order for the LVS router to forward network packets properly to the real servers, each LVS router node must have IP forwarding turned on in the kernel. Log in as root and change the line which reads net.ipv4.ip_forward = 0 in /etc/sysctl.conf to the following: The changes take effect when you reboot the system. To check if IP forwarding is turned on, issue the following command as root: /sbin/sysctl net.ipv4.ip_forward If the above command returns a 1 , then IP forwarding is enabled. If it returns a 0 , then you can turn it on manually using the following command: /sbin/sysctl -w net.ipv4.ip_forward=1 | [
"net.ipv4.ip_forward = 1"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/load_balancer_administration/s1-lvs-forwarding-VSA |
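If you prefer not to reboot after editing /etc/sysctl.conf , the change can also be loaded immediately; this is a small supplementary sketch using standard sysctl options.
/sbin/sysctl -p   # reload settings from /etc/sysctl.conf without rebooting
/sbin/sysctl net.ipv4.ip_forward   # should now print net.ipv4.ip_forward = 1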
1.3. Before Setting Up GFS2 | 1.3. Before Setting Up GFS2 Before you install and set up GFS2, note the following key characteristics of your GFS2 file systems: GFS2 nodes Determine which nodes in the cluster will mount the GFS2 file systems. Number of file systems Determine how many GFS2 file systems to create initially. (More file systems can be added later.) File system name Determine a unique name for each file system. The name must be unique for all lock_dlm file systems over the cluster. Each file system name is required in the form of a parameter variable. For example, this book uses file system names mydata1 and mydata2 in some example procedures. Journals Determine the number of journals for your GFS2 file systems. One journal is required for each node that mounts a GFS2 file system. GFS2 allows you to add journals dynamically at a later point as additional servers mount a file system. For information on adding journals to a GFS2 file system, see Section 3.6, "Adding Journals to a GFS2 File System" . Storage devices and partitions Determine the storage devices and partitions to be used for creating logical volumes (using CLVM) in the file systems. Time protocol Make sure that the clocks on the GFS2 nodes are synchronized. It is recommended that you use the Precision Time Protocol (PTP) or, if necessary for your configuration, the Network Time Protocol (NTP) software provided with your Red Hat Enterprise Linux distribution. Note The system clocks in GFS2 nodes must be within a few minutes of each other to prevent unnecessary inode time stamp updating. Unnecessary inode time stamp updating severely impacts cluster performance. Note You may see performance problems with GFS2 when many create and delete operations are issued from more than one node in the same directory at the same time. If this causes performance problems in your system, you should localize file creation and deletions by a node to directories specific to that node as much as possible. For further recommendations on creating, using, and maintaining a GFS2 file system, see Chapter 2, GFS2 Configuration and Operational Considerations . | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/global_file_system_2/s1-ov-preconfig |
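To tie the characteristics above together, a purely illustrative mkfs.gfs2 invocation for a two-node cluster might look like the following; the cluster name, logical volume path, and journal count are assumptions for the example.
# Create a GFS2 file system named mydata1 in a cluster called mycluster, with one journal per mounting node
mkfs.gfs2 -p lock_dlm -t mycluster:mydata1 -j 2 /dev/vg_cluster/lv_mydata1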
Chapter 8. Message Persistence and Paging | Chapter 8. Message Persistence and Paging AMQ Broker 7 provides persistence through either a message journal or a JDBC store. The method by which the broker stores messages and pages them to disk is different than AMQ 6, and the configuration properties you use to configure message persistence are changed. 8.1. Message Persistence Changes AMQ Broker 7 uses a different type of message journal than AMQ 6, and it does not use a journal index. AMQ 6 used KahaDB for a message store, and it maintained a message journal index to track the position of each message inside the journal. This index enabled the broker to pull paged messages from its journal in batches and place them in its cache. By default, AMQ Broker 7 uses an in-memory message journal from which the broker can dispatch messages. Therefore, AMQ Broker 7 does not use a message journal index. If a broker instance runs out of memory, messages are paged as they arrive at the broker, but before they are queued. These message page files are stored on disk sequentially in the same order in which they arrived. Then, when memory is freed on the broker, the messages are moved from the page file to the journal on the broker. Because the journal is read sequentially, there is no need to keep an index of messages in the journal. In addition, AMQ Broker 7 also offers a different JDBC-based message journal option that was not available in AMQ 6. The AMQ Broker 7 message journal supports the following shared file systems: NFSv4 GFS2 Related Information For more information about the default in-memory message journal, see About Journal-Based Persistence in Configuring AMQ Broker . For more information about the new JDBC-based persistence option, see Configuring JDBC Persistence in Configuring AMQ Broker . 8.2. How Message Persistence is Configured You use the BROKER_INSTANCE_DIR /etc/broker.xml configuration file to configure the broker instance's message journal. The broker.xml configuration file contains the following default message journal configuration properties: <core> <name>0.0.0.0</name> <persistence-enabled>true</persistence-enabled> <journal-type>ASYNCIO</journal-type> <paging-directory>./data/paging</paging-directory> <bindings-directory>./data/bindings</bindings-directory> <journal-directory>./data/journal</journal-directory> <large-messages-directory>./data/large-messages</large-messages-directory> <journal-datasync>true</journal-datasync> <journal-min-files>2</journal-min-files> <journal-pool-files>-1</journal-pool-files> <journal-buffer-timeout>744000</journal-buffer-timeout> <disk-scan-period>5000</disk-scan-period> <max-disk-usage>90</max-disk-usage> <global-max-size>104857600</global-max-size> ... </core> To configure the message journal, you can change the default values for any of the journal configuration properties. You can also add additional configuration properties. 8.3. Message Persistence Configuration Property Changes AMQ 6 and AMQ Broker 7 both offer a number of configuration properties to control how the broker persists messages. This section compares the configuration properties in the AMQ 6 KahaDB journal to the equivalent properties in the AMQ Broker 7 in-memory message journal. For complete details on each message persistence configuration property for the in-memory message journal, see the following: The Bindings Journal in Configuring AMQ Broker Messaging Journal Configuration Elements in Configuring AMQ Broker 8.3.1. 
Journal Size and Management The following table compares the journal size and management configuration properties in AMQ 6 to the equivalent properties in AMQ Broker 7: To set... In AMQ 6 In AMQ Broker 7 The time interval between cleaning up data logs that are no longer used cleanupInterval The default is 30000 ms. No equivalent. In AMQ Broker 7, journal files that exceed the pool size are no longer used. The number of message store GC cycles that must be completed without cleaning up other files before compaction is triggered compactAcksAfterNoGC No equivalent. In AMQ Broker 7, compaction is not related to particular record types. Whether compaction should be run when the message store is still growing, or if it should only occur when it has stopped growing compactAcksIgnoresStoreGrowth The default is false . No equivalent. The minimum number of journal files that can be stored on the broker before it will compact them No equivalent. <journal-compact-min-files> The default is 10. If you set this value to 0, compaction will be deactivated. The threshold to reach before compaction starts No equivalent. <journal-compact-percentage> The default is 30%. When less than this percentage is considered to be live data, compaction will start. The path to the top-level folder that holds the message store's data files directory AMQ Broker 7 has a separate directory for each type of journal: <journal-directory> - The default is /data/journal . <bindings-directory> - The default is /data/bindings . <paging-directory> - The default is /data/paging . <large-message-directory> - The default is /data/large-messages . Whether the bindings directory should be created automatically if it does not already exist No equivalent. <create-bindings-dir> The default is true . Whether the journal directory should be created automatically if it does not already exist No equivalent. <create-journal-dir> The default is true . Whether the message store should periodically compact older journal log files that contain only message acknowledgements enableAckCompaction No equivalent. The maximum size of the data log files journalMaxFileLength The default is 32 MB. <journal-file-size> The default is 10485760 bytes (10 MiB). The policy that the broker should use to preallocate the journal files when a new journal file is needed preallocationStrategy The default is sparse_file . No equivalent. By default, preallocated journal files are typically filled with zeroes, but it can vary depending on the file system. The policy the broker should use to preallocate the journal files preallocationScope The default is entire_journal . AMQ Broker 7 automatically preallocates the journal files specified by <journal-min-files> when the broker instance is started. The journal type (either NIO or AIO) No equivalent. <journal-type> You can choose either NIO (Java NIO journal), or ASYNCIO (Linux asynchronous I/O journal). The minimum number of files that the journal should maintain No equivalent. <journal-min-files> The number of journal files the broker should keep when reclaiming files No equivalent. <journal-pool-files> The default is -1, which means the broker instance will never delete files on the journal once created. 8.3.2. Write Boundaries The following table compares the write boundary configuration properties in AMQ 6 to the equivalent properties in AMQ Broker 7: To set... In AMQ 6 In AMQ Broker 7 The time interval between writing the metadata cache to disk checkpointInterval The default is 5000 ms. No equivalent. 
Whether the message store should dispatch queue messages to clients concurrently with message storage concurrentStoreAndDispatchQueues The default is true . No equivalent. Whether the message store should dispatch topic messages to interested clients concurrently with message storage concurrentStoreAndDispatchTopics The default is false . No equivalent. Whether a disk sync should be performed after each non-transactional journal write enableJournalDiskSyncs The default is true . <journal-sync-transactional> Flushes transaction data to disk whenever a transaction boundary is reached (commit, prepare, and rollback). The default is true . <journal-sync-nontransactional> Flushes non-transactional message data to disk (sends and acknowledgements). The default is true . When to flush the entire journal buffer No equivalent. <journal-buffer-timeout> The default for NIO is 3,333,333 nanoseconds, and the default for AIO is 500,000 nanoseconds. The amount of data to buffer between journal disk writes journalMaxWriteBatchSize The default is 4000 bytes. No equivalent. The size of the task queue used to buffer the journal's write requests maxAsyncJobs The default is 10000. <journal-max-io> This property controls the maximum number of write requests that can be in the I/O queue at any given point. The default for NIO is 1, and the default for AIO is 500. Whether to use fdatasync on journal writes No equivalent. <journal-datasync> The default is true . 8.3.3. Index Configuration AMQ 6 has a number of configuration properties for configuring the journal index. Because AMQ Broker 7 does not use journal indexes, you do not need to configure any of these properties for your broker instance. 8.3.4. Journal Archival AMQ 6 has several configuration properties for controlling which files are archived and where the archives are stored. In AMQ Broker 7, however, when old journal files are no longer needed, the broker reuses them instead of archiving them. Therefore, you do not need to configure any journal archival properties for your broker instance. 8.3.5. Journal Recovery AMQ 6 has several configuration properties for controlling how the broker checks for corrupted journal files and what to do when it encounters a missing journal file. In AMQ Broker 7, however, you do not need to configure any journal recovery properties for your broker instance. Journal files have a different format in AMQ Broker 7, which should prevent a corrupted entry in the journal from corrupting the entire journal file. Even if the journal is partially damaged, the broker should still be able to extract data from the undamaged entries. | [
"<core> <name>0.0.0.0</name> <persistence-enabled>true</persistence-enabled> <journal-type>ASYNCIO</journal-type> <paging-directory>./data/paging</paging-directory> <bindings-directory>./data/bindings</bindings-directory> <journal-directory>./data/journal</journal-directory> <large-messages-directory>./data/large-messages</large-messages-directory> <journal-datasync>true</journal-datasync> <journal-min-files>2</journal-min-files> <journal-pool-files>-1</journal-pool-files> <journal-buffer-timeout>744000</journal-buffer-timeout> <disk-scan-period>5000</disk-scan-period> <max-disk-usage>90</max-disk-usage> <global-max-size>104857600</global-max-size> </core>"
] | https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/migrating_to_red_hat_amq_7/message_persistence |
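As an illustrative sketch only, tuning a few of the journal settings discussed above means editing the corresponding elements in the BROKER_INSTANCE_DIR /etc/broker.xml configuration file; the values shown here are examples, not recommendations.
<core>
    <!-- Use the Java NIO journal instead of the Linux asynchronous I/O journal -->
    <journal-type>NIO</journal-type>
    <!-- Keep at most 10 reusable journal files in the pool -->
    <journal-pool-files>10</journal-pool-files>
    <!-- Start compacting once at least 15 journal files exist and live data falls below 25% -->
    <journal-compact-min-files>15</journal-compact-min-files>
    <journal-compact-percentage>25</journal-compact-percentage>
    ...
</core>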
Chapter 55. CruiseControlSpec schema reference | Chapter 55. CruiseControlSpec schema reference Used in: KafkaSpec Full list of CruiseControlSpec schema properties Configures a Cruise Control cluster. Configuration options relate to: Goals configuration Capacity limits for resource distribution goals 55.1. config Use the config properties to configure Cruise Control options as keys. The values can be one of the following JSON types: String Number Boolean Exceptions You can specify and configure the options listed in the Cruise Control documentation . However, Streams for Apache Kafka takes care of configuring and managing options related to the following, which cannot be changed: Security (encryption, authentication, and authorization) Connection to the Kafka cluster Client ID configuration ZooKeeper connectivity Web server configuration Self healing Properties with the following prefixes cannot be set: bootstrap.servers capacity.config.file client.id failed.brokers.zk.path kafka.broker.failure.detection.enable metric.reporter.sampler.bootstrap.servers network. request.reason.required security. self.healing. ssl. topic.config.provider.class two.step. webserver.accesslog. webserver.api.urlprefix webserver.http. webserver.session.path zookeeper. If the config property contains an option that cannot be changed, it is disregarded, and a warning message is logged to the Cluster Operator log file. All other supported options are forwarded to Cruise Control, including the following exceptions to the options configured by Streams for Apache Kafka: Any ssl configuration for supported TLS versions and cipher suites Configuration for webserver properties to enable Cross-Origin Resource Sharing (CORS) Example Cruise Control configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # ... cruiseControl: # ... config: # Note that `default.goals` (superset) must also include all `hard.goals` (subset) default.goals: > com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.ReplicaCapacityGoal hard.goals: > com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareGoal cpu.balance.threshold: 1.1 metadata.max.age.ms: 300000 send.buffer.bytes: 131072 webserver.http.cors.enabled: true webserver.http.cors.origin: "*" webserver.http.cors.exposeheaders: "User-Task-ID,Content-Type" # ... 55.2. Cross-Origin Resource Sharing (CORS) Cross-Origin Resource Sharing (CORS) is a HTTP mechanism for controlling access to REST APIs. Restrictions can be on access methods or originating URLs of client applications. You can enable CORS with Cruise Control using the webserver.http.cors.enabled property in the config . When enabled, CORS permits read access to the Cruise Control REST API from applications that have different originating URLs than Streams for Apache Kafka. This allows applications from specified origins to use GET requests to fetch information about the Kafka cluster through the Cruise Control API. For example, applications can fetch information on the current cluster load or the most recent optimization proposal. POST requests are not permitted. Note For more information on using CORS with Cruise Control, see REST APIs in the Cruise Control Wiki . Enabling CORS for Cruise Control You enable and configure CORS in Kafka.spec.cruiseControl.config . apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # ... cruiseControl: # ... 
config: webserver.http.cors.enabled: true 1 webserver.http.cors.origin: "*" 2 webserver.http.cors.exposeheaders: "User-Task-ID,Content-Type" 3 # ... 1 Enables CORS. 2 Specifies permitted origins for the Access-Control-Allow-Origin HTTP response header. You can use a wildcard or specify a single origin as a URL. If you use a wildcard, a response is returned following requests from any origin. 3 Exposes specified header names for the Access-Control-Expose-Headers HTTP response header. Applications in permitted origins can read responses with the specified headers. 55.3. Cruise Control REST API security The Cruise Control REST API is secured with HTTP Basic authentication and SSL to protect the cluster against potentially destructive Cruise Control operations, such as decommissioning Kafka brokers. We recommend that Cruise Control in Streams for Apache Kafka is only used with these settings enabled . However, it is possible to disable these settings by specifying the following Cruise Control configuration: To disable the built-in HTTP Basic authentication, set webserver.security.enable to false . To disable the built-in SSL, set webserver.ssl.enable to false . Cruise Control configuration to disable API authorization, authentication, and SSL apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # ... cruiseControl: config: webserver.security.enable: false webserver.ssl.enable: false # ... 55.4. brokerCapacity Cruise Control uses capacity limits to determine if optimization goals for resource capacity limits are being broken. There are four goals of this type: DiskCapacityGoal - Disk utilization capacity CpuCapacityGoal - CPU utilization capacity NetworkInboundCapacityGoal - Network inbound utilization capacity NetworkOutboundCapacityGoal - Network outbound utilization capacity You specify capacity limits for Kafka broker resources in the brokerCapacity property in Kafka.spec.cruiseControl . They are enabled by default and you can change their default values. Capacity limits can be set for the following broker resources: cpu - CPU resource in millicores or CPU cores (Default: 1) inboundNetwork - Inbound network throughput in byte units per second (Default: 10000KiB/s) outboundNetwork - Outbound network throughput in byte units per second (Default: 10000KiB/s) For network throughput, use an integer value with standard OpenShift byte units (K, M, G) or their bibyte (power of two) equivalents (Ki, Mi, Gi) per second. Note Disk and CPU capacity limits are automatically generated by Streams for Apache Kafka, so you do not need to set them. In order to guarantee accurate rebalance proposals when using CPU goals, you can set CPU requests equal to CPU limits in Kafka.spec.kafka.resources . That way, all CPU resources are reserved upfront and are always available. This configuration allows Cruise Control to properly evaluate the CPU utilization when preparing the rebalance proposals based on CPU goals. In cases where you cannot set CPU requests equal to CPU limits in Kafka.spec.kafka.resources , you can set the CPU capacity manually for the same accuracy. Example Cruise Control brokerCapacity configuration using bibyte units apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # ... cruiseControl: # ... brokerCapacity: cpu: "2" inboundNetwork: 10000KiB/s outboundNetwork: 10000KiB/s # ... 55.5. Capacity overrides Brokers might be running on nodes with heterogeneous network or CPU resources. 
If that's the case, specify overrides that set the network capacity and CPU limits for each broker. The overrides ensure an accurate rebalance between the brokers. Override capacity limits can be set for the following broker resources: cpu - CPU resource in millicores or CPU cores (Default: 1) inboundNetwork - Inbound network throughput in byte units per second (Default: 10000KiB/s) outboundNetwork - Outbound network throughput in byte units per second (Default: 10000KiB/s) An example of Cruise Control capacity overrides configuration using bibyte units apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # ... cruiseControl: # ... brokerCapacity: cpu: "1" inboundNetwork: 10000KiB/s outboundNetwork: 10000KiB/s overrides: - brokers: [0] cpu: "2.755" inboundNetwork: 20000KiB/s outboundNetwork: 20000KiB/s - brokers: [1, 2] cpu: 3000m inboundNetwork: 30000KiB/s outboundNetwork: 30000KiB/s CPU capacity is determined using configuration values in the following order of precedence, with the highest priority first: Kafka.spec.cruiseControl.brokerCapacity.overrides.cpu that define custom CPU capacity limits for individual brokers Kafka.cruiseControl.brokerCapacity.cpu that defines custom CPU capacity limits for all brokers in the kafka cluster Kafka.spec.kafka.resources.requests.cpu that defines the CPU resources that are reserved for each broker in the Kafka cluster. Kafka.spec.kafka.resources.limits.cpu that defines the maximum CPU resources that can be consumed by each broker in the Kafka cluster. This order of precedence is the sequence in which different configuration values are considered when determining the actual capacity limit for a Kafka broker. For example, broker-specific overrides take precedence over capacity limits for all brokers. If none of the CPU capacity configurations are specified, the default CPU capacity for a Kafka broker is set to 1 CPU core. For more information, refer to the BrokerCapacity schema reference . 55.6. logging Cruise Control has its own configurable logger: rootLogger.level Cruise Control uses the Apache log4j2 logger implementation. Use the logging property to configure loggers and logger levels. You can set the log levels by specifying the logger and level directly (inline) or use a custom (external) ConfigMap. If a ConfigMap is used, you set logging.valueFrom.configMapKeyRef.name property to the name of the ConfigMap containing the external logging configuration. Inside the ConfigMap, the logging configuration is described using log4j.properties . Both logging.valueFrom.configMapKeyRef.name and logging.valueFrom.configMapKeyRef.key properties are mandatory. A ConfigMap using the exact logging configuration specified is created with the custom resource when the Cluster Operator is running, then recreated after each reconciliation. If you do not specify a custom ConfigMap, default logging settings are used. If a specific logger value is not set, upper-level logger settings are inherited for that logger. Here we see examples of inline and external logging. The inline logging specifies the root logger level. You can also set log levels for specific classes or loggers by adding them to the loggers property. Inline logging apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka # ... spec: cruiseControl: # ... 
logging: type: inline loggers: rootLogger.level: INFO logger.exec.name: com.linkedin.kafka.cruisecontrol.executor.Executor 1 logger.exec.level: TRACE 2 logger.go.name: com.linkedin.kafka.cruisecontrol.analyzer.GoalOptimizer 3 logger.go.level: DEBUG 4 # ... 1 Creates a logger for the Cruise Control Executor class. 2 Sets the logging level for the Executor class. 3 Creates a logger for the Cruise Control GoalOptimizer class. 4 Sets the logging level for the GoalOptimizer class. Note When investigating an issue with Cruise Control, it's usually sufficient to change the rootLogger to DEBUG to get more detailed logs. However, keep in mind that setting the log level to DEBUG may result in a large amount of log output and may have performance implications. External logging apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka # ... spec: cruiseControl: # ... logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: cruise-control-log4j.properties # ... Garbage collector (GC) Garbage collector logging can also be enabled (or disabled) using the jvmOptions property . 55.7. CruiseControlSpec schema properties Property Property type Description image string The container image used for Cruise Control pods. If no image name is explicitly specified, the image name corresponds to the name specified in the Cluster Operator configuration. If an image name is not defined in the Cluster Operator configuration, a default value is used. tlsSidecar TlsSidecar The tlsSidecar property has been deprecated. TLS sidecar configuration. resources ResourceRequirements CPU and memory resources to reserve for the Cruise Control container. livenessProbe Probe Pod liveness checking for the Cruise Control container. readinessProbe Probe Pod readiness checking for the Cruise Control container. jvmOptions JvmOptions JVM Options for the Cruise Control container. logging InlineLogging , ExternalLogging Logging configuration (Log4j 2) for Cruise Control. template CruiseControlTemplate Template to specify how Cruise Control resources, Deployments and Pods , are generated. brokerCapacity BrokerCapacity The Cruise Control brokerCapacity configuration. config map The Cruise Control configuration. For a full list of configuration options refer to https://github.com/linkedin/cruise-control/wiki/Configurations . Note that properties with the following prefixes cannot be set: bootstrap.servers, client.id, zookeeper., network., security., failed.brokers.zk.path,webserver.http., webserver.api.urlprefix, webserver.session.path, webserver.accesslog., two.step., request.reason.required,metric.reporter.sampler.bootstrap.servers, capacity.config.file, self.healing., ssl., kafka.broker.failure.detection.enable, topic.config.provider.class (with the exception of: ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols, webserver.http.cors.enabled, webserver.http.cors.origin, webserver.http.cors.exposeheaders, webserver.security.enable, webserver.ssl.enable). metricsConfig JmxPrometheusExporterMetrics Metrics configuration. | [
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # cruiseControl: # config: # Note that `default.goals` (superset) must also include all `hard.goals` (subset) default.goals: > com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.ReplicaCapacityGoal hard.goals: > com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareGoal cpu.balance.threshold: 1.1 metadata.max.age.ms: 300000 send.buffer.bytes: 131072 webserver.http.cors.enabled: true webserver.http.cors.origin: \"*\" webserver.http.cors.exposeheaders: \"User-Task-ID,Content-Type\" #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # cruiseControl: # config: webserver.http.cors.enabled: true 1 webserver.http.cors.origin: \"*\" 2 webserver.http.cors.exposeheaders: \"User-Task-ID,Content-Type\" 3 #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # cruiseControl: config: webserver.security.enable: false webserver.ssl.enable: false",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # cruiseControl: # brokerCapacity: cpu: \"2\" inboundNetwork: 10000KiB/s outboundNetwork: 10000KiB/s #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # cruiseControl: # brokerCapacity: cpu: \"1\" inboundNetwork: 10000KiB/s outboundNetwork: 10000KiB/s overrides: - brokers: [0] cpu: \"2.755\" inboundNetwork: 20000KiB/s outboundNetwork: 20000KiB/s - brokers: [1, 2] cpu: 3000m inboundNetwork: 30000KiB/s outboundNetwork: 30000KiB/s",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: cruiseControl: # logging: type: inline loggers: rootLogger.level: INFO logger.exec.name: com.linkedin.kafka.cruisecontrol.executor.Executor 1 logger.exec.level: TRACE 2 logger.go.name: com.linkedin.kafka.cruisecontrol.analyzer.GoalOptimizer 3 logger.go.level: DEBUG 4 #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: cruiseControl: # logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: cruise-control-log4j.properties #"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-cruisecontrolspec-reference |
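The external logging example above references a ConfigMap named customConfigMap with the key cruise-control-log4j.properties. The following is a minimal sketch of what that ConfigMap might contain; the logger names and levels are taken from the inline logging example, the namespace is an assumption, and a complete log4j2 configuration normally also defines appenders, which are omitted here.
apiVersion: v1
kind: ConfigMap
metadata:
  name: customConfigMap
  namespace: my-kafka-namespace  # assumption: same namespace as the Kafka resource
data:
  cruise-control-log4j.properties: |
    # Root logger level for Cruise Control
    rootLogger.level=INFO
    # Raise verbosity for the Executor class only
    logger.exec.name=com.linkedin.kafka.cruisecontrol.executor.Executor
    logger.exec.level=TRACE
Apply the ConfigMap before referencing it from the Cruise Control logging configuration, because both the configMapKeyRef name and key properties are mandatory.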
GitOps | GitOps Red Hat OpenShift Service on AWS 4 A declarative way to implement continuous deployment for cloud native applications. Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html-single/gitops/index |
Chapter 1. Content and patch management with Red Hat Satellite | Chapter 1. Content and patch management with Red Hat Satellite With Red Hat Satellite, you can provide content and apply patches to hosts systematically in all lifecycle stages. 1.1. Content flow in Red Hat Satellite Content flow in Red Hat Satellite involves management and distribution of content from external sources to hosts. Content in Satellite flows from external content sources to Satellite Server . Capsule Servers mirror the content from Satellite Server to hosts . External content sources You can configure many content sources with Satellite. The supported content sources include the Red Hat Customer Portal, Git repositories, Ansible collections, Docker Hub, SCAP repositories, or internal data stores of your organization. Satellite Server On your Satellite Server, you plan and manage the content lifecycle. Capsule Servers By creating Capsule Servers, you can establish content sources in various locations based on your needs. For example, you can establish a content source for each geographical location or multiple content sources for a data center with separate networks. Hosts By assigning a host system to a Capsule Server or directly to your Satellite Server, you ensure the host receives the content they provide. Hosts can be physical or virtual. Additional resources See Chapter 4, Major Satellite components for details. See Managing Red Hat subscriptions in Managing content for information about Content Delivery Network (CDN). 1.2. Content views in Red Hat Satellite A content view is a deliberately curated subset of content that your hosts can access. By creating a content view, you can define the software versions used by a particular environment or Capsule Server. Each content view creates a set of repositories across each environment. Your Satellite Server stores and manages these repositories. For example, you can create content views in the following ways: A content view with older package versions for a production environment and another content view with newer package versions for a Development environment. A content view with a package repository required by an operating system and another content view with a package repository required by an application. A composite content view for a modular approach to managing content views. For example, you can use one content view for content for managing an operating system and another content view for content for managing an application. By creating a composite content view that combines both content views, you create a new repository that merges the repositories from each of the content views. However, the repositories for the content views still exist and you can keep managing them separately as well. Default Organization View A Default Organization View is an application-controlled content view for all content that is synchronized to Satellite. You can register a host to the Library environment on Satellite to consume the Default Organization View without configuring content views and lifecycle environments. Promoting a content view across environments When you promote a content view from one environment to the environment in the application lifecycle, Satellite updates the repository and publishes the packages. Example 1.1. 
Promoting a package from Development to Testing The repositories for Testing and Production contain the my-software -1.0-0.noarch.rpm package: Development Testing Production Version of the content view Version 2 Version 1 Version 1 Contents of the content view my-software -1.1-0.noarch.rpm my-software -1.0-0.noarch.rpm my-software -1.0-0.noarch.rpm If you promote Version 2 of the content view from Development to Testing , the repository for Testing updates to contain the my-software -1.1-0.noarch.rpm package: Development Testing Production Version of the content view Version 2 Version 2 Version 1 Contents of the content view my-software -1.1-0.noarch.rpm my-software -1.1-0.noarch.rpm my-software -1.0-0.noarch.rpm This ensures hosts are designated to a specific environment but receive updates when that environment uses a new version of the content view. Additional resources For more information, see Managing content views in Managing content . 1.3. Content types in Red Hat Satellite With Red Hat Satellite, you can import and manage many content types. For example, Satellite supports the following content types: RPM packages Import RPM packages from repositories related to your Red Hat subscriptions. Satellite Server downloads the RPM packages from the Red Hat Content Delivery Network and stores them locally. You can use these repositories and their RPM packages in content views. Kickstart trees Import the Kickstart trees to provision a host. New systems access these Kickstart trees over a network to use as base content for their installation. Red Hat Satellite contains predefined Kickstart templates. You can also create your own Kickstart templates. ISO and KVM images Download and manage media for installation and provisioning. For example, Satellite downloads, stores, and manages ISO images and guest images for specific Red Hat Enterprise Linux and non-Red Hat operating systems. Custom file type Manage custom content for any type of file you require, such as SSL certificates, ISO images, and OVAL files. 1.4. Additional resources For information about how to manage content with Satellite, see Managing content . | null | https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/overview_concepts_and_deployment_considerations/Content-and-Patch-Management-with-Satellite_planning |
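To make the promotion flow in Example 1.1 concrete, the following is a minimal sketch using the hammer CLI. The content view name, organization, and lifecycle environment are illustrative assumptions, and exact option names can vary between Satellite releases, so treat this as a sketch rather than a definitive procedure.
# Publish a new version of the content view (for example, Version 2)
hammer content-view publish --name "my-content-view" --organization "My_Organization"
# Promote the new version from Development to Testing
hammer content-view version promote --content-view "my-content-view" --version 2 --to-lifecycle-environment "Testing" --organization "My_Organization"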
8.5. Configuring an IdM Server to Run in a TLS 1.2 Environment | 8.5. Configuring an IdM Server to Run in a TLS 1.2 Environment See Configuring TLS 1.2 for Identity Management in RHEL 6.9 in Red Hat Knowledgebase for details. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/identity_management_guide/server-tls-12 |
Appendix A. DNF commands list | Appendix A. DNF commands list In the following sections, examine DNF commands for listing, installing, and removing content in Red Hat Enterprise Linux 9. A.1. Commands for listing content in RHEL 9 The following are the commonly used DNF commands for finding content and its details in Red Hat Enterprise Linux 9: Command Description dnf search term Search for a package by using term related to the package. dnf repoquery package Search for enabled DNF repositories for a selected package and its version. dnf list List information about all installed and available packages. dnf list --installed dnf repoquery --installed List all packages installed on your system. dnf list --available dnf repoquery List all packages in all enabled repositories that are available to install. dnf repolist List all enabled repositories on your system. dnf repolist --disabled List all disabled repositories on your system. dnf repolist --all List both enabled and disabled repositories. dnf repoinfo List additional information about the repositories. dnf info package_name dnf repoquery --info package_name Display details of an available package. dnf repoquery --info --installed package_name Display details of a package installed on your system. dnf module list List modules and their current status. dnf module info module_name Display details of a module. dnf module list module_name Display the current status of a module. dnf module info --profile module_name Display packages associated with available profiles of a selected module. dnf module info --profile module_name:stream Display packages associated with available profiles of a module by using a specified stream. dnf module provides package Determine which modules, streams, and profiles provide a package. Note that if the package is available outside any modules, the output of this command is empty. dnf group summary View the number of installed and available groups. dnf group list List all installed and available groups. dnf group info group_name List mandatory and optional packages included in a particular group. A.2. Commands for installing content in RHEL 9 The following are the commonly used DNF commands for installing content in Red Hat Enterprise Linux 9: Command Description dnf install package_name Install a package. If the package is provided by a module stream, dnf resolves the required module stream and enables it automatically while installing this package. This also happens recursively for all package dependencies. If more module streams satisfy the requirement, the default ones are used. dnf install package_name_1 package_name_2 Install multiple packages and their dependencies simultaneously. dnf install package_name.arch Specify the architecture of the package by appending it to the package name when installing packages on a multilib system (AMD64, Intel 64 machine). dnf install /usr/sbin/binary_file Install a binary by using the path to the binary as an argument. dnf install /path/ Install a previously downloaded package from a local directory. dnf install package_url Install a remote package by using a package URL. dnf module enable module_name:stream Enable a module by using a specific stream. Note that running this command does not install any RPM packages. dnf module install module_name:stream dnf install @ module_name:stream Install a default profile from a specific module stream. Note that running this command also enables the specified stream. 
dnf module install module_name:stream/profile dnf install @ module_name:stream/profile Install a selected profile by using a specific stream. dnf group install group_name Install a package group by a group name. dnf group install group_ID Install a package group by the groupID. A.3. Commands for removing content in RHEL 9 The following are the commonly used DNF commands for removing content in Red Hat Enterprise Linux 9: Command Description dnf remove package_name Remove a particular package and all dependent packages. dnf remove package_name_1 package_name_2 Remove multiple packages and their unused dependencies simultaneously. dnf group remove group_name Remove a package group by the group name. dnf group remove group_ID Remove a package group by the groupID. dnf module remove --all module_name:stream Remove all packages from the specified stream. Note that running this command can remove critical packages from your system. dnf module remove module_name:stream/profile Remove packages from an installed profile. dnf module remove module_name:stream Remove packages from all installed profiles within the specified stream. dnf module reset module_name Reset a module to the initial state. Note that running this command does not remove packages from the specified module. dnf module disable module_name Disable a module and all its streams. Note that running this command does not remove packages from the specified module. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_software_with_the_dnf_tool/assembly_yum-commands-list_managing-software-with-the-dnf-tool |
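As a worked illustration of how the commands above fit together, the following sketch walks through a typical package lifecycle. The package name httpd and the module stream nodejs:18 are placeholders; substitute content that is available in your enabled repositories.
# Find a package and inspect it before installing
dnf search httpd
dnf repoquery --info httpd
# Install the package, then confirm that it is installed
dnf install httpd
dnf list --installed httpd
# Enable a module stream and install its default profile
dnf module enable nodejs:18
dnf module install nodejs:18
# Remove the package and reset the module when they are no longer needed
dnf remove httpd
dnf module reset nodejs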
Chapter 10. Networking | Chapter 10. Networking 10.1. Networking overview OpenShift Virtualization provides advanced networking functionality by using custom resources and plugins. Virtual machines (VMs) are integrated with OpenShift Container Platform networking and its ecosystem. Note You cannot run OpenShift Virtualization on a single-stack IPv6 cluster. The following figure illustrates the typical network setup of OpenShift Virtualization. Other configurations are also possible. Figure 10.1. OpenShift Virtualization networking overview Pods and VMs run on the same network infrastructure which allows you to easily connect your containerized and virtualized workloads. You can connect VMs to the default pod network and to any number of secondary networks. The default pod network provides connectivity between all its members, service abstraction, IP management, micro segmentation, and other functionality. Multus is a "meta" CNI plugin that enables a pod or virtual machine to connect to additional network interfaces by using other compatible CNI plugins. The default pod network is overlay-based, tunneled through the underlying machine network. The machine network can be defined over a selected set of network interface controllers (NICs). Secondary VM networks are typically bridged directly to a physical network, with or without VLAN encapsulation. It is also possible to create virtual overlay networks for secondary networks. Important Connecting VMs directly to the underlay network is not supported on Red Hat OpenShift Service on AWS. Note Connecting VMs to user-defined networks with the layer2 topology is recommended on public clouds. Secondary VM networks can be defined on dedicated set of NICs, as shown in Figure 1, or they can use the machine network. 10.1.1. OpenShift Virtualization networking glossary The following terms are used throughout OpenShift Virtualization documentation: Container Network Interface (CNI) A Cloud Native Computing Foundation project, focused on container network connectivity. OpenShift Virtualization uses CNI plugins to build upon the basic Kubernetes networking functionality. Multus A "meta" CNI plugin that allows multiple CNIs to exist so that a pod or virtual machine can use the interfaces it needs. Custom resource definition (CRD) A Kubernetes API resource that allows you to define custom resources, or an object defined by using the CRD API resource. Network attachment definition (NAD) A CRD introduced by the Multus project that allows you to attach pods, virtual machines, and virtual machine instances to one or more networks. UserDefinedNetwork (UDN) A namespace-scoped CRD introduced by the user-defined network API that can be used to create a tenant network that isolates the tenant namespace from other namespaces. ClusterUserDefinedNetwork (CUDN) A cluster-scoped CRD introduced by the user-defined network API that cluster administrators can use to create a shared network across multiple namespaces. Node network configuration policy (NNCP) A CRD introduced by the nmstate project, describing the requested network configuration on nodes. You update the node network configuration, including adding and removing interfaces, by applying a NodeNetworkConfigurationPolicy manifest to the cluster. 10.1.2. Using the default pod network Connecting a virtual machine to the default pod network Each VM is connected by default to the default internal pod network. You can add or remove network interfaces by editing the VM specification. 
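A VM that uses only the default pod network declares a single interface with the masquerade binding and a matching pod network entry. The following minimal fragment shows the relevant stanzas of the VirtualMachine spec; the interface name default is a conventional choice, and the complete procedure, including exposing ports, is described later in this chapter.
spec:
  template:
    spec:
      domain:
        devices:
          interfaces:
          - name: default
            masquerade: {}
      networks:
      - name: default
        pod: {}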
Exposing a virtual machine as a service You can expose a VM within the cluster or outside the cluster by creating a Service object. For on-premise clusters, you can configure a load balancing service by using the MetalLB Operator. You can install the MetalLB Operator by using the OpenShift Container Platform web console or the CLI. 10.1.3. Configuring a primary user-defined network Connecting a virtual machine to a primary user-defined network You can connect a virtual machine (VM) to a user-defined network (UDN) on the VM's primary interface. The primary user-defined network replaces the default pod network to connect pods and VMs in selected namespaces. Cluster administrators can configure a primary UserDefinedNetwork CRD to create a tenant network that isolates the tenant namespace from other namespaces without requiring network policies. Additionally, cluster administrators can use the ClusterUserDefinedNetwork CRD to create a shared OVN layer2 network across multiple namespaces. User-defined networks with the layer2 overlay topology are useful for VM workloads, and a good alternative to secondary networks in environments where physical network access is limited, such as the public cloud. The layer2 topology enables seamless migration of VMs without the need for Network Address Translation (NAT), and also provides persistent IP addresses that are preserved between reboots and during live migration. 10.1.4. Configuring VM secondary network interfaces You can connect a virtual machine to a secondary network by using Linux bridge, SR-IOV and OVN-Kubernetes CNI plugins. You can list multiple secondary networks and interfaces in the VM specification. It is not required to specify the primary pod network in the VM specification when connecting to a secondary network interface. Connecting a virtual machine to an OVN-Kubernetes secondary network You can connect a VM to an OVN-Kubernetes secondary network. OpenShift Virtualization supports the layer2 and localnet topologies for OVN-Kubernetes. The localnet topology is the recommended way of exposing VMs to the underlying physical network, with or without VLAN encapsulation. A layer2 topology connects workloads by a cluster-wide logical switch. The OVN-Kubernetes CNI plugin uses the Geneve (Generic Network Virtualization Encapsulation) protocol to create an overlay network between nodes. You can use this overlay network to connect VMs on different nodes, without having to configure any additional physical networking infrastructure. A localnet topology connects the secondary network to the physical underlay. This enables both east-west cluster traffic and access to services running outside the cluster, but it requires additional configuration of the underlying Open vSwitch (OVS) system on cluster nodes. To configure an OVN-Kubernetes secondary network and attach a VM to that network, perform the following steps: Configure an OVN-Kubernetes secondary network by creating a network attachment definition (NAD). Note For localnet topology, you must configure an OVS bridge by creating a NodeNetworkConfigurationPolicy object before creating the NAD. Connect the VM to the OVN-Kubernetes secondary network by adding the network details to the VM specification. 
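As a sketch of the first step, a network attachment definition for an OVN-Kubernetes secondary network with the layer2 topology might look like the following. The network name, namespace, and subnet are illustrative assumptions; a localnet topology additionally requires the OVS bridge mapping on the nodes, as noted above.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: l2-network
  namespace: my-namespace
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "l2-network",
      "type": "ovn-k8s-cni-overlay",
      "topology": "layer2",
      "subnets": "192.0.2.0/24",
      "netAttachDefName": "my-namespace/l2-network"
    }
The VM then references this NAD by name in a multus network entry of its specification, similar to the Linux bridge example shown later in this chapter.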
Connecting a virtual machine to an SR-IOV network You can use Single Root I/O Virtualization (SR-IOV) network devices with additional networks on your OpenShift Container Platform cluster installed on bare metal or Red Hat OpenStack Platform (RHOSP) infrastructure for applications that require high bandwidth or low latency. You must install the SR-IOV Network Operator on your cluster to manage SR-IOV network devices and network attachments. You can connect a VM to an SR-IOV network by performing the following steps: Configure an SR-IOV network device by creating a SriovNetworkNodePolicy CRD. Configure an SR-IOV network by creating an SriovNetwork object. Connect the VM to the SR-IOV network by including the network details in the VM configuration. Connecting a virtual machine to a Linux bridge network Install the Kubernetes NMState Operator to configure Linux bridges, VLANs, and bonding for your secondary networks. The OVN-Kubernetes localnet topology is the recommended way of connecting a VM to the underlying physical network, but OpenShift Virtualization also supports Linux bridge networks. Note You cannot directly attach to the default machine network when using Linux bridge networks. You can create a Linux bridge network and attach a VM to the network by performing the following steps: Configure a Linux bridge network device by creating a NodeNetworkConfigurationPolicy custom resource definition (CRD). Configure a Linux bridge network by creating a NetworkAttachmentDefinition CRD. Connect the VM to the Linux bridge network by including the network details in the VM configuration. Hot plugging secondary network interfaces You can add or remove secondary network interfaces without stopping your VM. OpenShift Virtualization supports hot plugging and hot unplugging for secondary interfaces that use bridge binding and the VirtIO device driver. OpenShift Virtualization also supports hot plugging secondary interfaces that use the SR-IOV binding. Using DPDK with SR-IOV The Data Plane Development Kit (DPDK) provides a set of libraries and drivers for fast packet processing. You can configure clusters and VMs to run DPDK workloads over SR-IOV networks. Configuring a dedicated network for live migration You can configure a dedicated Multus network for live migration. A dedicated network minimizes the effects of network saturation on tenant workloads during live migration. Accessing a virtual machine by using the cluster FQDN You can access a VM that is attached to a secondary network interface from outside the cluster by using its fully qualified domain name (FQDN). Configuring and viewing IP addresses You can configure an IP address of a secondary network interface when you create a VM. The IP address is provisioned with cloud-init. You can view the IP address of a VM by using the OpenShift Container Platform web console or the command line. The network information is collected by the QEMU guest agent. 10.1.4.1. Comparing Linux bridge CNI and OVN-Kubernetes localnet topology The following table provides a comparison of features available when using the Linux bridge CNI compared to the localnet topology for an OVN-Kubernetes plugin: Table 10.1. 
Linux bridge CNI compared to an OVN-Kubernetes localnet topology Feature Available on Linux bridge CNI Available on OVN-Kubernetes localnet Layer 2 access to the underlay native network Only on secondary network interface controllers (NICs) Yes Layer 2 access to underlay VLANs Yes Yes Network policies No Yes Managed IP pools No Yes MAC spoof filtering Yes Yes 10.1.5. Integrating with OpenShift Service Mesh Connecting a virtual machine to a service mesh OpenShift Virtualization is integrated with OpenShift Service Mesh. You can monitor, visualize, and control traffic between pods and virtual machines. 10.1.6. Managing MAC address pools Managing MAC address pools for network interfaces The KubeMacPool component allocates MAC addresses for VM network interfaces from a shared MAC address pool. This ensures that each network interface is assigned a unique MAC address. A virtual machine instance created from that VM retains the assigned MAC address across reboots. 10.1.7. Configuring SSH access Configuring SSH access to virtual machines You can configure SSH access to VMs by using the following methods: virtctl ssh command You create an SSH key pair, add the public key to a VM, and connect to the VM by running the virtctl ssh command with the private key. You can add public SSH keys to Red Hat Enterprise Linux (RHEL) 9 VMs at runtime or at first boot to VMs with guest operating systems that can be configured by using a cloud-init data source. virtctl port-forward command You add the virtctl port-foward command to your .ssh/config file and connect to the VM by using OpenSSH. Service You create a service, associate the service with the VM, and connect to the IP address and port exposed by the service. Secondary network You configure a secondary network, attach a VM to the secondary network interface, and connect to its allocated IP address. 10.2. Connecting a virtual machine to the default pod network You can connect a virtual machine to the default internal pod network by configuring its network interface to use the masquerade binding mode. Note Traffic passing through network interfaces to the default pod network is interrupted during live migration. 10.2.1. Configuring masquerade mode from the command line You can use masquerade mode to hide a virtual machine's outgoing traffic behind the pod IP address. Masquerade mode uses Network Address Translation (NAT) to connect virtual machines to the pod network backend through a Linux bridge. Enable masquerade mode and allow traffic to enter the virtual machine by editing your virtual machine configuration file. Prerequisites The virtual machine must be configured to use DHCP to acquire IPv4 addresses. Procedure Edit the interfaces spec of your virtual machine configuration file: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm spec: template: spec: domain: devices: interfaces: - name: default masquerade: {} 1 ports: 2 - port: 80 # ... networks: - name: default pod: {} 1 Connect using masquerade mode. 2 Optional: List the ports that you want to expose from the virtual machine, each specified by the port field. The port value must be a number between 0 and 65536. When the ports array is not used, all ports in the valid range are open to incoming traffic. In this example, incoming traffic is allowed on port 80 . Note Ports 49152 and 49153 are reserved for use by the libvirt platform and all other incoming traffic to these ports is dropped. Create the virtual machine: USD oc create -f <vm-name>.yaml 10.2.2. 
Configuring masquerade mode with dual-stack (IPv4 and IPv6) You can configure a new virtual machine (VM) to use both IPv6 and IPv4 on the default pod network by using cloud-init. The Network.pod.vmIPv6NetworkCIDR field in the virtual machine instance configuration determines the static IPv6 address of the VM and the gateway IP address. These are used by the virt-launcher pod to route IPv6 traffic to the virtual machine and are not used externally. The Network.pod.vmIPv6NetworkCIDR field specifies an IPv6 address block in Classless Inter-Domain Routing (CIDR) notation. The default value is fd10:0:2::2/120 . You can edit this value based on your network requirements. When the virtual machine is running, incoming and outgoing traffic for the virtual machine is routed to both the IPv4 address and the unique IPv6 address of the virt-launcher pod. The virt-launcher pod then routes the IPv4 traffic to the DHCP address of the virtual machine, and the IPv6 traffic to the statically set IPv6 address of the virtual machine. Prerequisites The OpenShift Container Platform cluster must use the OVN-Kubernetes Container Network Interface (CNI) network plugin configured for dual-stack. Procedure In a new virtual machine configuration, include an interface with masquerade and configure the IPv6 address and default gateway by using cloud-init. apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm-ipv6 spec: template: spec: domain: devices: interfaces: - name: default masquerade: {} 1 ports: - port: 80 2 # ... networks: - name: default pod: {} volumes: - cloudInitNoCloud: networkData: | version: 2 ethernets: eth0: dhcp4: true addresses: [ fd10:0:2::2/120 ] 3 gateway6: fd10:0:2::1 4 1 Connect using masquerade mode. 2 Allows incoming traffic on port 80 to the virtual machine. 3 The static IPv6 address as determined by the Network.pod.vmIPv6NetworkCIDR field in the virtual machine instance configuration. The default value is fd10:0:2::2/120 . 4 The gateway IP address as determined by the Network.pod.vmIPv6NetworkCIDR field in the virtual machine instance configuration. The default value is fd10:0:2::1 . Create the virtual machine in the namespace: USD oc create -f example-vm-ipv6.yaml Verification To verify that IPv6 has been configured, start the virtual machine and view the interface status of the virtual machine instance to ensure it has an IPv6 address: USD oc get vmi <vmi-name> -o jsonpath="{.status.interfaces[*].ipAddresses}" 10.2.3. About jumbo frames support When using the OVN-Kubernetes CNI plugin, you can send unfragmented jumbo frame packets between two virtual machines (VMs) that are connected on the default pod network. Jumbo frames have a maximum transmission unit (MTU) value greater than 1500 bytes. The VM automatically gets the MTU value of the cluster network, set by the cluster administrator, in one of the following ways: libvirt : If the guest OS has the latest version of the VirtIO driver that can interpret incoming data via a Peripheral Component Interconnect (PCI) config register in the emulated device. DHCP: If the guest DHCP client can read the MTU value from the DHCP server response. Note For Windows VMs that do not have a VirtIO driver, you must set the MTU manually by using netsh or a similar tool. This is because the Windows DHCP client does not read the MTU value. 10.2.4. Additional resources Changing the MTU for the cluster network Optimizing the MTU for your network 10.3. 
Connecting a virtual machine to a primary user-defined network You can connect a virtual machine (VM) to a user-defined network (UDN) on the VM's primary interface by using the OpenShift Container Platform web console or the CLI. The primary user-defined network replaces the default pod network in your specified namespace. Unlike the pod network, you can define the primary UDN per project, where each project can use its specific subnet and topology. OpenShift Virtualization supports the namespace-scoped UserDefinedNetwork and the cluster-scoped ClusterUserDefinedNetwork custom resource definitions (CRD). Cluster administrators can configure a primary UserDefinedNetwork CRD to create a tenant network that isolates the tenant namespace from other namespaces without requiring network policies. Additionally, cluster administrators can use the ClusterUserDefinedNetwork CRD to create a shared OVN network across multiple namespaces. Note You must add the k8s.ovn.org/primary-user-defined-network label when you create a namespace that is to be used with user-defined networks. With the layer 2 topology, OVN-Kubernetes creates an overlay network between nodes. You can use this overlay network to connect VMs on different nodes without having to configure any additional physical networking infrastructure. The layer 2 topology enables seamless migration of VMs without the need for Network Address Translation (NAT) because persistent IP addresses are preserved across cluster nodes during live migration. You must consider the following limitations before implementing a primary UDN: You cannot use the virtctl ssh command to configure SSH access to a VM. You cannot use the oc port-forward command to forward ports to a VM. You cannot use headless services to access a VM. You cannot define readiness and liveness probes to configure VM health checks. Note OpenShift Virtualization currently does not support secondary user-defined networks. 10.3.1. Creating a primary user-defined network by using the web console You can use the OpenShift Container Platform web console to create a primary namespace-scoped UserDefinedNetwork or a cluster-scoped ClusterUserDefinedNetwork CRD. The UDN serves as the default primary network for pods and VMs that you create in namespaces associated with the network. 10.3.1.1. Creating a namespace for user-defined networks by using the web console You can create a namespace to be used with primary user-defined networks (UDNs) by using the OpenShift Container Platform web console. Prerequisites Log in to the OpenShift Container Platform web console as a user with cluster-admin permissions. Procedure From the Administrator perspective, click Administration Namespaces . Click Create Namespace . In the Name field, specify a name for the namespace. The name must consist of lower case alphanumeric characters or '-', and must start and end with an alphanumeric character. In the Labels field, add the k8s.ovn.org/primary-user-defined-network label. Optional: If the namespace is to be used with an existing cluster-scoped UDN, add the appropriate labels as defined in the spec.namespaceSelector field in the ClusterUserDefinedNetwork custom resource. Optional: Specify a default network policy. Click Create to create the namespace. 10.3.1.2. Creating a primary namespace-scoped user-defined network by using the web console You can create an isolated primary network in your project namespace by creating a UserDefinedNetwork custom resource in the OpenShift Container Platform web console. 
Prerequisites You have access to the OpenShift Container Platform web console as a user with cluster-admin permissions. You have created a namespace and applied the k8s.ovn.org/primary-user-defined-network label. For more information, see "Creating a namespace for user-defined networks by using the web console". Procedure From the Administrator perspective, click Networking UserDefinedNetworks . Click Create UserDefinedNetwork . From the Project name list, select the namespace that you previously created. Specify a value in the Subnet field. Click Create . The user-defined network serves as the default primary network for pods and virtual machines that you create in this namespace. 10.3.1.3. Creating a primary cluster-scoped user-defined network by using the web console You can connect multiple namespaces to the same primary user-defined network (UDN) by creating a ClusterUserDefinedNetwork custom resource in the OpenShift Container Platform web console. Prerequisites You have access to the OpenShift Container Platform web console as a user with cluster-admin permissions. Procedure From the Administrator perspective, click Networking UserDefinedNetworks . From the Create list, select ClusterUserDefinedNetwork . In the Name field, specify a name for the cluster-scoped UDN. Specify a value in the Subnet field. In the Project(s) Match Labels field, add the appropriate labels to select namespaces that the cluster UDN applies to. Click Create . The cluster-scoped UDN serves as the default primary network for pods and virtual machines located in namespaces that contain the labels that you specified in step 5. steps Create namespaces that are associated with the cluster-scoped UDN 10.3.2. Creating a primary user-defined network by using the CLI You can create a primary UserDefinedNetwork or ClusterUserDefinedNetwork CRD by using the CLI. 10.3.2.1. Creating a namespace for user-defined networks by using the CLI You can create a namespace to be used with primary user-defined networks (UDNs) by using the CLI. Prerequisites You have access to the cluster as a user with cluster-admin permissions. You have installed the OpenShift CLI ( oc ). Procedure Create a Namespace object as a YAML file similar to the following example: apiVersion: v1 kind: Namespace metadata: name: udn_namespace labels: k8s.ovn.org/primary-user-defined-network: "" 1 # ... 1 This label is required for the namespace to be associated with a UDN. If the namespace is to be used with an existing cluster UDN, you must also add the appropriate labels that are defined in the spec.namespaceSelector field of the ClusterUserDefinedNetwork custom resource. Apply the Namespace manifest by running the following command: oc apply -f <filename>.yaml 10.3.2.2. Creating a primary namespace-scoped user-defined network by using the CLI You can create an isolated primary network in your project namespace by using the CLI. You must use the OVN-Kubernetes layer 2 topology and enable persistent IP address allocation in the user-defined network (UDN) configuration to ensure VM live migration support. Prerequisites You have installed the OpenShift CLI ( oc ). You have created a namespace and applied the k8s.ovn.org/primary-user-defined-network label. 
Procedure Create a UserDefinedNetwork object to specify the custom network configuration: Example UserDefinedNetwork manifest apiVersion: k8s.ovn.org/v1 kind: UserDefinedNetwork metadata: name: udn-l2-net 1 namespace: my-namespace 2 spec: topology: Layer2 3 layer2: role: Primary 4 subnets: - "10.0.0.0/24" - "2001:db8::/60" ipam: lifecycle: Persistent 5 1 Specifies the name of the UserDefinedNetwork custom resource. 2 Specifies the namespace in which the VM is located. The namespace must have the k8s.ovn.org/primary-user-defined-network label. The namespace must not be default , an openshift-* namespace, or match any global namespaces that are defined by the Cluster Network Operator (CNO). 3 Specifies the topological configuration of the network. The required value is Layer2 . A Layer2 topology creates a logical switch that is shared by all nodes. 4 Specifies if the UDN is primary or secondary. OpenShift Virtualization only supports the Primary role. This means that the UDN acts as the primary network for the VM and all default traffic passes through this network. 5 Specifies that virtual workloads have consistent IP addresses across reboots and migration. The spec.layer2.subnets field is required when ipam.lifecycle: Persistent is specified. Apply the UserDefinedNetwork manifest by running the following command: USD oc apply -f --validate=true <filename>.yaml 10.3.2.3. Creating a primary cluster-scoped user-defined network by using the CLI You can connect multiple namespaces to the same primary user-defined network (UDN) to achieve native tenant isolation by using the CLI. Prerequisites You have access to the cluster as a user with cluster-admin privileges. You have installed the OpenShift CLI ( oc ). Procedure Create a ClusterUserDefinedNetwork object to specify the custom network configuration: Example ClusterUserDefinedNetwork manifest kind: ClusterUserDefinedNetwork metadata: name: cudn-l2-net 1 spec: namespaceSelector: 2 matchExpressions: 3 - key: kubernetes.io/metadata.name operator: In 4 values: ["red-namespace", "blue-namespace"] network: topology: Layer2 5 layer2: role: Primary 6 ipam: lifecycle: Persistent subnets: - 203.203.0.0/16 1 Specifies the name of the ClusterUserDefinedNetwork custom resource. 2 Specifies the set of namespaces that the cluster UDN applies to. The namespace selector must not point to default , an openshift-* namespace, or any global namespaces that are defined by the Cluster Network Operator (CNO). 3 Specifies the type of selector. In this example, the matchExpressions selector selects objects that have the label kubernetes.io/metadata.name with the value red-namespace or blue-namespace . 4 Specifies the type of operator. Possible values are In , NotIn , and Exists . 5 Specifies the topological configuration of the network. The required value is Layer2 . A Layer2 topology creates a logical switch that is shared by all nodes. 6 Specifies if the UDN is primary or secondary. OpenShift Virtualization only supports the Primary role. This means that the UDN acts as the primary network for the VM and all default traffic passes through this network. Apply the ClusterUserDefinedNetwork manifest by running the following command: USD oc apply -f --validate=true <filename>.yaml steps Create namespaces that are associated with the cluster-scoped UDN 10.3.3. 
Attaching a virtual machine to the primary user-defined network by using the CLI You can connect a virtual machine (VM) to the primary user-defined network (UDN) by requesting the pod network attachment, and configuring the interface binding. Prerequisites You have installed the OpenShift CLI ( oc ). Procedure Edit the VirtualMachine manifest to add the UDN interface details, as in the following example: Example VirtualMachine manifest apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: my-namespace 1 spec: template: spec: domain: devices: interfaces: - name: udn-l2-net 2 binding: name: l2bridge 3 # ... networks: - name: udn-l2-net 4 pod: {} # ... 1 The namespace in which the VM is located. This value must match the namespace in which the UDN is defined. 2 The name of the user-defined network interface. 3 The name of the binding plugin that is used to connect the interface to the VM. The required value is l2bridge . 4 The name of the network. This must match the value of the spec.template.spec.domain.devices.interfaces.name field. Apply the VirtualMachine manifest by running the following command: USD oc apply -f <filename>.yaml 10.3.4. Additional resources About user-defined networks 10.4. Exposing a virtual machine by using a service You can expose a virtual machine within the cluster or outside the cluster by creating a Service object. 10.4.1. About services A Kubernetes service exposes network access for clients to an application running on a set of pods. Services offer abstraction, load balancing, and, in the case of the NodePort and LoadBalancer types, exposure to the outside world. ClusterIP Exposes the service on an internal IP address and as a DNS name to other applications within the cluster. A single service can map to multiple virtual machines. When a client tries to connect to the service, the client's request is load balanced among available backends. ClusterIP is the default service type. NodePort Exposes the service on the same port of each selected node in the cluster. NodePort makes a port accessible from outside the cluster, as long as the node itself is externally accessible to the client. LoadBalancer Creates an external load balancer in the current cloud (if supported) and assigns a fixed, external IP address to the service. Note For on-premise clusters, you can configure a load-balancing service by deploying the MetalLB Operator. Additional resources Installing the MetalLB Operator Configuring services to use MetalLB 10.4.2. Dual-stack support If IPv4 and IPv6 dual-stack networking is enabled for your cluster, you can create a service that uses IPv4, IPv6, or both, by defining the spec.ipFamilyPolicy and the spec.ipFamilies fields in the Service object. The spec.ipFamilyPolicy field can be set to one of the following values: SingleStack The control plane assigns a cluster IP address for the service based on the first configured service cluster IP range. PreferDualStack The control plane assigns both IPv4 and IPv6 cluster IP addresses for the service on clusters that have dual-stack configured. RequireDualStack This option fails for clusters that do not have dual-stack networking enabled. For clusters that have dual-stack configured, the behavior is the same as when the value is set to PreferDualStack . The control plane allocates cluster IP addresses from both IPv4 and IPv6 address ranges. 
You can define which IP family to use for single-stack or define the order of IP families for dual-stack by setting the spec.ipFamilies field to one of the following array values: [IPv4] [IPv6] [IPv4, IPv6] [IPv6, IPv4] 10.4.3. Creating a service by using the command line You can create a service and associate it with a virtual machine (VM) by using the command line. Prerequisites You configured the cluster network to support the service. Procedure Edit the VirtualMachine manifest to add the label for service creation: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: runStrategy: Halted template: metadata: labels: special: key 1 # ... 1 Add special: key to the spec.template.metadata.labels stanza. Note Labels on a virtual machine are passed through to the pod. The special: key label must match the label in the spec.selector attribute of the Service manifest. Save the VirtualMachine manifest file to apply your changes. Create a Service manifest to expose the VM: apiVersion: v1 kind: Service metadata: name: example-service namespace: example-namespace spec: # ... selector: special: key 1 type: NodePort 2 ports: 3 protocol: TCP port: 80 targetPort: 9376 nodePort: 30000 1 Specify the label that you added to the spec.template.metadata.labels stanza of the VirtualMachine manifest. 2 Specify ClusterIP , NodePort , or LoadBalancer . 3 Specifies a collection of network ports and protocols that you want to expose from the virtual machine. Save the Service manifest file. Create the service by running the following command: USD oc create -f example-service.yaml Restart the VM to apply the changes. Verification Query the Service object to verify that it is available: USD oc get service -n example-namespace 10.4.4. Additional resources Configuring ingress cluster traffic using a NodePort Configuring ingress cluster traffic using a load balancer 10.5. Accessing a virtual machine by using its internal FQDN You can access a virtual machine (VM) that is connected to the default internal pod network on a stable fully qualified domain name (FQDN) by using headless services. A Kubernetes headless service is a form of service that does not allocate a cluster IP address to represent a set of pods. Instead of providing a single virtual IP address for the service, a headless service creates a DNS record for each pod associated with the service. You can expose a VM through its FQDN without having to expose a specific TCP or UDP port. Important If you created a VM by using the OpenShift Container Platform web console, you can find its internal FQDN listed in the Network tile on the Overview tab of the VirtualMachine details page. For more information about connecting to the VM, see Connecting to a virtual machine by using its internal FQDN . 10.5.1. Creating a headless service in a project by using the CLI To create a headless service in a namespace, add the clusterIP: None parameter to the service YAML definition. Prerequisites You have installed the OpenShift CLI ( oc ). Procedure Create a Service manifest to expose the VM, such as the following example: apiVersion: v1 kind: Service metadata: name: mysubdomain 1 spec: selector: expose: me 2 clusterIP: None 3 ports: 4 - protocol: TCP port: 1234 targetPort: 1234 1 The name of the service. This must match the spec.subdomain attribute in the VirtualMachine manifest file. 2 This service selector must match the expose:me label in the VirtualMachine manifest file. 3 Specifies a headless service. 
4 The list of ports that are exposed by the service. You must define at least one port. This can be any arbitrary value as it does not affect the headless service. Save the Service manifest file. Create the service by running the following command: USD oc create -f headless_service.yaml 10.5.2. Mapping a virtual machine to a headless service by using the CLI To connect to a virtual machine (VM) from within the cluster by using its internal fully qualified domain name (FQDN), you must first map the VM to a headless service. Set the spec.hostname and spec.subdomain parameters in the VM configuration file. If a headless service exists with a name that matches the subdomain, a unique DNS A record is created for the VM in the form of <vm.spec.hostname>.<vm.spec.subdomain>.<vm.metadata.namespace>.svc.cluster.local . Procedure Edit the VirtualMachine manifest to add the service selector label and subdomain by running the following command: USD oc edit vm <vm_name> Example VirtualMachine manifest file apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-fedora spec: template: metadata: labels: expose: me 1 spec: hostname: "myvm" 2 subdomain: "mysubdomain" 3 # ... 1 The expose:me label must match the spec.selector attribute of the Service manifest that you previously created. 2 If this attribute is not specified, the resulting DNS A record takes the form of <vm.metadata.name>.<vm.spec.subdomain>.<vm.metadata.namespace>.svc.cluster.local . 3 The spec.subdomain attribute must match the metadata.name value of the Service object. Save your changes and exit the editor. Restart the VM to apply the changes. 10.5.3. Connecting to a virtual machine by using its internal FQDN You can connect to a virtual machine (VM) by using its internal fully qualified domain name (FQDN). Prerequisites You have installed the virtctl tool. You have identified the internal FQDN of the VM from the web console or by mapping the VM to a headless service. The internal FQDN has the format <vm.spec.hostname>.<vm.spec.subdomain>.<vm.metadata.namespace>.svc.cluster.local . Procedure Connect to the VM console by entering the following command: USD virtctl console vm-fedora To connect to the VM by using the requested FQDN, run the following command: USD ping myvm.mysubdomain.<namespace>.svc.cluster.local Example output PING myvm.mysubdomain.default.svc.cluster.local (10.244.0.57) 56(84) bytes of data. 64 bytes from myvm.mysubdomain.default.svc.cluster.local (10.244.0.57): icmp_seq=1 ttl=64 time=0.029 ms In the preceding example, the DNS entry for myvm.mysubdomain.default.svc.cluster.local points to 10.244.0.57 , which is the cluster IP address that is currently assigned to the VM. 10.5.4. Additional resources Exposing a VM by using a service 10.6. Connecting a virtual machine to a Linux bridge network By default, OpenShift Virtualization is installed with a single, internal pod network. You can create a Linux bridge network and attach a virtual machine (VM) to the network by performing the following steps: Create a Linux bridge node network configuration policy (NNCP) . Create a Linux bridge network attachment definition (NAD) by using the web console or the command line . Configure the VM to recognize the NAD by using the web console or the command line . Note OpenShift Virtualization does not support Linux bridge bonding modes 0, 5, and 6. For more information, see Which bonding modes work when used with a bridge that virtual machine guests or containers connect to? . 10.6.1. 
Creating a Linux bridge NNCP You can create a NodeNetworkConfigurationPolicy (NNCP) manifest for a Linux bridge network. Prerequisites You have installed the Kubernetes NMState Operator. Procedure Create the NodeNetworkConfigurationPolicy manifest. This example includes sample values that you must replace with your own information. apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: br1-eth1-policy 1 spec: desiredState: interfaces: - name: br1 2 description: Linux bridge with eth1 as a port 3 type: linux-bridge 4 state: up 5 ipv4: enabled: false 6 bridge: options: stp: enabled: false 7 port: - name: eth1 8 1 Name of the policy. 2 Name of the interface. 3 Optional: Human-readable description of the interface. 4 The type of interface. This example creates a bridge. 5 The requested state for the interface after creation. 6 Disables IPv4 in this example. 7 Disables STP in this example. 8 The node NIC to which the bridge is attached. 10.6.2. Creating a Linux bridge NAD You can create a Linux bridge network attachment definition (NAD) by using the OpenShift Container Platform web console or command line. 10.6.2.1. Creating a Linux bridge NAD by using the web console You can create a network attachment definition (NAD) to provide layer-2 networking to pods and virtual machines by using the OpenShift Container Platform web console. A Linux bridge network attachment definition is the most efficient method for connecting a virtual machine to a VLAN. Warning Configuring IP address management (IPAM) in a network attachment definition for virtual machines is not supported. Procedure In the web console, click Networking NetworkAttachmentDefinitions . Click Create Network Attachment Definition . Note The network attachment definition must be in the same namespace as the pod or virtual machine. Enter a unique Name and optional Description . Select CNV Linux bridge from the Network Type list. Enter the name of the bridge in the Bridge Name field. Optional: If the resource has VLAN IDs configured, enter the ID numbers in the VLAN Tag Number field. Optional: Select MAC Spoof Check to enable MAC spoof filtering. This feature provides security against a MAC spoofing attack by allowing only a single MAC address to exit the pod. Click Create . 10.6.2.2. Creating a Linux bridge NAD by using the command line You can create a network attachment definition (NAD) to provide layer-2 networking to pods and virtual machines (VMs) by using the command line. The NAD and the VM must be in the same namespace. Warning Configuring IP address management (IPAM) in a network attachment definition for virtual machines is not supported. Prerequisites The node must support nftables and the nft binary must be deployed to enable MAC spoof check. Procedure Add the VM to the NetworkAttachmentDefinition configuration, as in the following example: apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: bridge-network 1 annotations: k8s.v1.cni.cncf.io/resourceName: bridge.network.kubevirt.io/br1 2 spec: config: | { "cniVersion": "0.3.1", "name": "bridge-network", 3 "type": "bridge", 4 "bridge": "br1", 5 "macspoofchk": false, 6 "vlan": 100, 7 "disableContainerInterface": true, "preserveDefaultVlan": false 8 } 1 The name for the NetworkAttachmentDefinition object. 2 Optional: Annotation key-value pair for node selection for the bridge configured on some nodes. 
If you add this annotation to your network attachment definition, your virtual machine instances will only run on the nodes that have the defined bridge connected. 3 The name for the configuration. It is recommended to match the configuration name to the name value of the network attachment definition. 4 The actual name of the Container Network Interface (CNI) plugin that provides the network for this network attachment definition. Do not change this field unless you want to use a different CNI. 5 The name of the Linux bridge configured on the node. The name should match the interface bridge name defined in the NodeNetworkConfigurationPolicy manifest. 6 Optional: A flag to enable the MAC spoof check. When set to true , you cannot change the MAC address of the pod or guest interface. This attribute allows only a single MAC address to exit the pod, which provides security against a MAC spoofing attack. 7 Optional: The VLAN tag. No additional VLAN configuration is required on the node network configuration policy. 8 Optional: Indicates whether the VM connects to the bridge through the default VLAN. The default value is true . Note A Linux bridge network attachment definition is the most efficient method for connecting a virtual machine to a VLAN. Create the network attachment definition: USD oc create -f network-attachment-definition.yaml 1 1 Where network-attachment-definition.yaml is the file name of the network attachment definition manifest. Verification Verify that the network attachment definition was created by running the following command: USD oc get network-attachment-definition bridge-network 10.6.3. Configuring a VM network interface You can configure a virtual machine (VM) network interface by using the OpenShift Container Platform web console or command line. 10.6.3.1. Configuring a VM network interface by using the web console You can configure a network interface for a virtual machine (VM) by using the OpenShift Container Platform web console. Prerequisites You created a network attachment definition for the network. Procedure Navigate to Virtualization VirtualMachines . Click a VM to view the VirtualMachine details page. On the Configuration tab, click the Network interfaces tab. Click Add network interface . Enter the interface name and select the network attachment definition from the Network list. Click Save . Restart the VM to apply the changes. Networking fields Name Description Name Name for the network interface controller. Model Indicates the model of the network interface controller. Supported values are e1000e and virtio . Network List of available network attachment definitions. Type List of available binding methods. Select the binding method suitable for the network interface: Default pod network: masquerade Linux bridge network: bridge SR-IOV network: SR-IOV MAC Address MAC address for the network interface controller. If a MAC address is not specified, one is assigned automatically. 10.6.3.2. Configuring a VM network interface by using the command line You can configure a virtual machine (VM) network interface for a bridge network by using the command line. Prerequisites Shut down the virtual machine before editing the configuration. If you edit a running virtual machine, you must restart the virtual machine for the changes to take effect. 
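For example, if you have the virtctl tool installed, you can shut down the VM before editing it and start it again afterwards. This is a minimal sketch that assumes the VM is named example-vm:

virtctl stop example-vm
# ... edit the VM configuration ...
virtctl start example-vm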
Procedure Add the bridge interface and the network attachment definition to the VM configuration as in the following example: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm spec: template: spec: domain: devices: interfaces: - bridge: {} name: bridge-net 1 # ... networks: - name: bridge-net 2 multus: networkName: a-bridge-network 3 1 The name of the bridge interface. 2 The name of the network. This value must match the name value of the corresponding spec.template.spec.domain.devices.interfaces entry. 3 The name of the network attachment definition. Apply the configuration: USD oc apply -f example-vm.yaml Optional: If you edited a running virtual machine, you must restart it for the changes to take effect. 10.7. Connecting a virtual machine to an SR-IOV network You can connect a virtual machine (VM) to a Single Root I/O Virtualization (SR-IOV) network by performing the following steps: Configuring an SR-IOV network device Configuring an SR-IOV network Connecting the VM to the SR-IOV network 10.7.1. Configuring SR-IOV network devices The SR-IOV Network Operator adds the SriovNetworkNodePolicy.sriovnetwork.openshift.io CustomResourceDefinition to OpenShift Container Platform. You can configure an SR-IOV network device by creating a SriovNetworkNodePolicy custom resource (CR). Note When applying the configuration specified in a SriovNetworkNodePolicy object, the SR-IOV Operator might drain the nodes, and in some cases, reboot nodes. Reboot only happens in the following cases: With Mellanox NICs ( mlx5 driver) a node reboot happens every time the number of virtual functions (VFs) increase on a physical function (PF). With Intel NICs, a reboot only happens if the kernel parameters do not include intel_iommu=on and iommu=pt . It might take several minutes for a configuration change to apply. Prerequisites You installed the OpenShift CLI ( oc ). You have access to the cluster as a user with the cluster-admin role. You have installed the SR-IOV Network Operator. You have enough available nodes in your cluster to handle the evicted workload from drained nodes. You have not selected any control plane nodes for SR-IOV network device configuration. Procedure Create an SriovNetworkNodePolicy object, and then save the YAML in the <name>-sriov-node-network.yaml file. Replace <name> with the name for this configuration. apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" 4 priority: <priority> 5 mtu: <mtu> 6 numVfs: <num> 7 nicSelector: 8 vendor: "<vendor_code>" 9 deviceID: "<device_id>" 10 pfNames: ["<pf_name>", ...] 11 rootDevices: ["<pci_bus_id>", "..."] 12 deviceType: vfio-pci 13 isRdma: false 14 1 Specify a name for the CR object. 2 Specify the namespace where the SR-IOV Operator is installed. 3 Specify the resource name of the SR-IOV device plugin. You can create multiple SriovNetworkNodePolicy objects for a resource name. 4 Specify the node selector to select which nodes are configured. Only SR-IOV network devices on selected nodes are configured. The SR-IOV Container Network Interface (CNI) plugin and device plugin are deployed only on selected nodes. 5 Optional: Specify an integer value between 0 and 99 . A smaller number gets higher priority, so a priority of 10 is higher than a priority of 99 . The default value is 99 . 
6 Optional: Specify a value for the maximum transmission unit (MTU) of the virtual function. The maximum MTU value can vary for different NIC models. 7 Specify the number of the virtual functions (VF) to create for the SR-IOV physical network device. For an Intel network interface controller (NIC), the number of VFs cannot be larger than the total VFs supported by the device. For a Mellanox NIC, the number of VFs cannot be larger than 127 . 8 The nicSelector mapping selects the Ethernet device for the Operator to configure. You do not need to specify values for all the parameters. It is recommended to identify the Ethernet adapter with enough precision to minimize the possibility of selecting an Ethernet device unintentionally. If you specify rootDevices , you must also specify a value for vendor , deviceID , or pfNames . If you specify both pfNames and rootDevices at the same time, ensure that they point to an identical device. 9 Optional: Specify the vendor hex code of the SR-IOV network device. The only allowed values are either 8086 or 15b3 . 10 Optional: Specify the device hex code of SR-IOV network device. The only allowed values are 158b , 1015 , 1017 . 11 Optional: The parameter accepts an array of one or more physical function (PF) names for the Ethernet device. 12 The parameter accepts an array of one or more PCI bus addresses for the physical function of the Ethernet device. Provide the address in the following format: 0000:02:00.1 . 13 The vfio-pci driver type is required for virtual functions in OpenShift Virtualization. 14 Optional: Specify whether to enable remote direct memory access (RDMA) mode. For a Mellanox card, set isRdma to false . The default value is false . Note If isRDMA flag is set to true , you can continue to use the RDMA enabled VF as a normal network device. A device can be used in either mode. Optional: Label the SR-IOV capable cluster nodes with SriovNetworkNodePolicy.Spec.NodeSelector if they are not already labeled. For more information about labeling nodes, see "Understanding how to update labels on nodes". Create the SriovNetworkNodePolicy object: USD oc create -f <name>-sriov-node-network.yaml where <name> specifies the name for this configuration. After applying the configuration update, all the pods in sriov-network-operator namespace transition to the Running status. To verify that the SR-IOV network device is configured, enter the following command. Replace <node_name> with the name of a node with the SR-IOV network device that you just configured. USD oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath='{.status.syncStatus}' 10.7.2. Configuring SR-IOV additional network You can configure an additional network that uses SR-IOV hardware by creating an SriovNetwork object. When you create an SriovNetwork object, the SR-IOV Network Operator automatically creates a NetworkAttachmentDefinition object. Note Do not modify or delete an SriovNetwork object if it is attached to pods or virtual machines in a running state. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create the following SriovNetwork object, and then save the YAML in the <name>-sriov-network.yaml file. Replace <name> with a name for this additional network. 
apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 networkNamespace: <target_namespace> 4 vlan: <vlan> 5 spoofChk: "<spoof_check>" 6 linkState: <link_state> 7 maxTxRate: <max_tx_rate> 8 minTxRate: <min_tx_rate> 9 vlanQoS: <vlan_qos> 10 trust: "<trust_vf>" 11 capabilities: <capabilities> 12 1 Replace <name> with a name for the object. The SR-IOV Network Operator creates a NetworkAttachmentDefinition object with the same name. 2 Specify the namespace where the SR-IOV Network Operator is installed. 3 Replace <sriov_resource_name> with the value for the .spec.resourceName parameter from the SriovNetworkNodePolicy object that defines the SR-IOV hardware for this additional network. 4 Replace <target_namespace> with the target namespace for the SriovNetwork. Only pods or virtual machines in the target namespace can attach to the SriovNetwork. 5 Optional: Replace <vlan> with a Virtual LAN (VLAN) ID for the additional network. The integer value must be from 0 to 4095 . The default value is 0 . 6 Optional: Replace <spoof_check> with the spoof check mode of the VF. The allowed values are the strings "on" and "off" . Important You must enclose the value you specify in quotes or the CR is rejected by the SR-IOV Network Operator. 7 Optional: Replace <link_state> with the link state of the virtual function (VF). The allowed values are enable , disable , and auto . 8 Optional: Replace <max_tx_rate> with a maximum transmission rate, in Mbps, for the VF. 9 Optional: Replace <min_tx_rate> with a minimum transmission rate, in Mbps, for the VF. This value should always be less than or equal to the maximum transmission rate. Note Intel NICs do not support the minTxRate parameter. For more information, see BZ#1772847 . 10 Optional: Replace <vlan_qos> with an IEEE 802.1p priority level for the VF. The default value is 0 . 11 Optional: Replace <trust_vf> with the trust mode of the VF. The allowed values are the strings "on" and "off" . Important You must enclose the value you specify in quotes or the CR is rejected by the SR-IOV Network Operator. 12 Optional: Replace <capabilities> with the capabilities to configure for this network. To create the object, enter the following command. Replace <name> with a name for this additional network. USD oc create -f <name>-sriov-network.yaml Optional: To confirm that the NetworkAttachmentDefinition object associated with the SriovNetwork object that you created in the previous step exists, enter the following command. Replace <namespace> with the namespace you specified in the SriovNetwork object. USD oc get net-attach-def -n <namespace> 10.7.3. Connecting a virtual machine to an SR-IOV network by using the command line You can connect the virtual machine (VM) to the SR-IOV network by including the network details in the VM configuration. Procedure Add the SR-IOV network details to the spec.domain.devices.interfaces and spec.networks stanzas of the VM configuration as in the following example: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm spec: domain: devices: interfaces: - name: nic1 1 sriov: {} networks: - name: nic1 2 multus: networkName: sriov-network 3 # ... 1 Specify a unique name for the SR-IOV interface. 2 Specify the name of the network. This must match the interfaces.name value that you defined earlier. 3 Specify the name of the SR-IOV network attachment definition.
Apply the virtual machine configuration: USD oc apply -f <vm_sriov>.yaml 1 1 The name of the virtual machine YAML file. 10.7.4. Connecting a VM to an SR-IOV network by using the web console You can connect a VM to the SR-IOV network by including the network details in the VM configuration. Prerequisites You must create a network attachment definition for the network. Procedure Navigate to Virtualization VirtualMachines . Click a VM to view the VirtualMachine details page. On the Configuration tab, click the Network interfaces tab. Click Add network interface . Enter the interface name. Select an SR-IOV network attachment definition from the Network list. Select SR-IOV from the Type list. Optional: Add a network Model or Mac address . Click Save . Restart or live-migrate the VM to apply the changes. 10.7.5. Additional resources Configuring DPDK workloads for improved performance 10.8. Using DPDK with SR-IOV The Data Plane Development Kit (DPDK) provides a set of libraries and drivers for fast packet processing. You can configure clusters and virtual machines (VMs) to run DPDK workloads over SR-IOV networks. 10.8.1. Configuring a cluster for DPDK workloads You can configure an OpenShift Container Platform cluster to run Data Plane Development Kit (DPDK) workloads for improved network performance. Prerequisites You have access to the cluster as a user with cluster-admin permissions. You have installed the OpenShift CLI ( oc ). You have installed the SR-IOV Network Operator. You have installed the Node Tuning Operator. Procedure Map your compute nodes topology to determine which Non-Uniform Memory Access (NUMA) CPUs are isolated for DPDK applications and which ones are reserved for the operating system (OS). If your OpenShift Container Platform cluster uses separate control plane and compute nodes for high-availability: Label a subset of the compute nodes with a custom role; for example, worker-dpdk : USD oc label node <node_name> node-role.kubernetes.io/worker-dpdk="" Create a new MachineConfigPool manifest that contains the worker-dpdk label in the spec.machineConfigSelector object: Example MachineConfigPool manifest apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-dpdk labels: machineconfiguration.openshift.io/role: worker-dpdk spec: machineConfigSelector: matchExpressions: - key: machineconfiguration.openshift.io/role operator: In values: - worker - worker-dpdk nodeSelector: matchLabels: node-role.kubernetes.io/worker-dpdk: "" Create a PerformanceProfile manifest that applies to the labeled nodes and the machine config pool that you created in the steps. The performance profile specifies the CPUs that are isolated for DPDK applications and the CPUs that are reserved for house keeping. Example PerformanceProfile manifest apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: profile-1 spec: cpu: isolated: 4-39,44-79 reserved: 0-3,40-43 globallyDisableIrqLoadBalancing: true hugepages: defaultHugepagesSize: 1G pages: - count: 8 node: 0 size: 1G net: userLevelNetworking: true nodeSelector: node-role.kubernetes.io/worker-dpdk: "" numa: topologyPolicy: single-numa-node Note The compute nodes automatically restart after you apply the MachineConfigPool and PerformanceProfile manifests. 
Retrieve the name of the generated RuntimeClass resource from the status.runtimeClass field of the PerformanceProfile object: USD oc get performanceprofiles.performance.openshift.io profile-1 -o=jsonpath='{.status.runtimeClass}{"\n"}' Set the previously obtained RuntimeClass name as the default container runtime class for the virt-launcher pods by editing the HyperConverged custom resource (CR): USD oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv \ --type='json' -p='[{"op": "add", "path": "/spec/defaultRuntimeClass", "value":"<runtimeclass-name>"}]' Note Editing the HyperConverged CR changes a global setting that affects all VMs that are created after the change is applied. If your DPDK-enabled compute nodes use Simultaneous multithreading (SMT), enable the AlignCPUs enabler by editing the HyperConverged CR: USD oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv \ --type='json' -p='[{"op": "replace", "path": "/spec/featureGates/alignCPUs", "value": true}]' Note Enabling AlignCPUs allows OpenShift Virtualization to request up to two additional dedicated CPUs to bring the total CPU count to an even parity when using emulator thread isolation. Create an SriovNetworkNodePolicy object with the spec.deviceType field set to vfio-pci : Example SriovNetworkNodePolicy manifest apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-1 namespace: openshift-sriov-network-operator spec: resourceName: intel_nics_dpdk deviceType: vfio-pci mtu: 9000 numVfs: 4 priority: 99 nicSelector: vendor: "8086" deviceID: "1572" pfNames: - eno3 rootDevices: - "0000:19:00.2" nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" Additional resources Using CPU Manager and Topology Manager Configuring huge pages Creating a custom machine config pool 10.8.1.1. Removing a custom machine config pool for high-availability clusters You can delete a custom machine config pool that you previously created for your high-availability cluster. Prerequisites You have access to the cluster as a user with cluster-admin permissions. You have installed the OpenShift CLI ( oc ). You have created a custom machine config pool by labeling a subset of the compute nodes with a custom role and creating a MachineConfigPool manifest with that label. Procedure Remove the worker-dpdk label from the compute nodes by running the following command: USD oc label node <node_name> node-role.kubernetes.io/worker-dpdk- Delete the MachineConfigPool manifest that contains the worker-dpdk label by entering the following command: USD oc delete mcp worker-dpdk 10.8.2. Configuring a project for DPDK workloads You can configure the project to run DPDK workloads on SR-IOV hardware. Prerequisites Your cluster is configured to run DPDK workloads. Procedure Create a namespace for your DPDK applications: USD oc create ns dpdk-checkup-ns Create an SriovNetwork object that references the SriovNetworkNodePolicy object. When you create an SriovNetwork object, the SR-IOV Network Operator automatically creates a NetworkAttachmentDefinition object. 
Example SriovNetwork manifest apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: dpdk-sriovnetwork namespace: openshift-sriov-network-operator spec: ipam: | { "type": "host-local", "subnet": "10.56.217.0/24", "rangeStart": "10.56.217.171", "rangeEnd": "10.56.217.181", "routes": [{ "dst": "0.0.0.0/0" }], "gateway": "10.56.217.1" } networkNamespace: dpdk-checkup-ns 1 resourceName: intel_nics_dpdk 2 spoofChk: "off" trust: "on" vlan: 1019 1 The namespace where the NetworkAttachmentDefinition object is deployed. 2 The value of the spec.resourceName attribute of the SriovNetworkNodePolicy object that was created when configuring the cluster for DPDK workloads. Optional: Run the virtual machine latency checkup to verify that the network is properly configured. Optional: Run the DPDK checkup to verify that the namespace is ready for DPDK workloads. Additional resources Working with projects Virtual machine latency checkup DPDK checkup 10.8.3. Configuring a virtual machine for DPDK workloads You can run Data Plane Development Kit (DPDK) workloads on virtual machines (VMs) to achieve lower latency and higher throughput for faster packet processing in the user space. DPDK uses the SR-IOV network for hardware-based I/O sharing. Prerequisites Your cluster is configured to run DPDK workloads. You have created and configured the project in which the VM will run. Procedure Edit the VirtualMachine manifest to include information about the SR-IOV network interface, CPU topology, CRI-O annotations, and huge pages: Example VirtualMachine manifest apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: rhel-dpdk-vm spec: runStrategy: Always template: metadata: annotations: cpu-load-balancing.crio.io: disable 1 cpu-quota.crio.io: disable 2 irq-load-balancing.crio.io: disable 3 spec: domain: cpu: sockets: 1 4 cores: 5 5 threads: 2 dedicatedCpuPlacement: true isolateEmulatorThread: true interfaces: - masquerade: {} name: default - model: virtio name: nic-east pciAddress: '0000:07:00.0' sriov: {} networkInterfaceMultiqueue: true rng: {} memory: hugepages: pageSize: 1Gi 6 guest: 8Gi networks: - name: default pod: {} - multus: networkName: dpdk-net 7 name: nic-east # ... 1 This annotation specifies that load balancing is disabled for CPUs that are used by the container. 2 This annotation specifies that the CPU quota is disabled for CPUs that are used by the container. 3 This annotation specifies that Interrupt Request (IRQ) load balancing is disabled for CPUs that are used by the container. 4 The number of sockets inside the VM. This field must be set to 1 for the CPUs to be scheduled from the same Non-Uniform Memory Access (NUMA) node. 5 The number of cores inside the VM. This must be a value greater than or equal to 1 . In this example, the VM is scheduled with 5 cores and 2 threads per core, which is 10 vCPUs. 6 The size of the huge pages. The possible values for the x86-64 architecture are 1Gi and 2Mi. In this example, the request is for 8 huge pages of size 1Gi. 7 The name of the SR-IOV NetworkAttachmentDefinition object. Save and exit the editor. Apply the VirtualMachine manifest: USD oc apply -f <file_name>.yaml Configure the guest operating system. The following example shows the configuration steps for the RHEL 9 operating system: Configure huge pages by using the GRUB bootloader command-line interface. In the following example, 8 1G huge pages are specified.
USD grubby --update-kernel=ALL --args="default_hugepagesz=1GB hugepagesz=1G hugepages=8" To achieve low-latency tuning by using the cpu-partitioning profile in the TuneD application, run the following commands: USD dnf install -y tuned-profiles-cpu-partitioning USD echo isolated_cores=2-9 > /etc/tuned/cpu-partitioning-variables.conf The first two CPUs (0 and 1) are set aside for house keeping tasks and the rest are isolated for the DPDK application. USD tuned-adm profile cpu-partitioning Override the SR-IOV NIC driver by using the driverctl device driver control utility: USD dnf install -y driverctl USD driverctl set-override 0000:07:00.0 vfio-pci Restart the VM to apply the changes. 10.9. Connecting a virtual machine to an OVN-Kubernetes secondary network You can connect a virtual machine (VM) to an OVN-Kubernetes secondary network. OpenShift Virtualization supports the layer2 and localnet topologies for OVN-Kubernetes. A layer2 topology connects workloads by a cluster-wide logical switch. The OVN-Kubernetes Container Network Interface (CNI) plugin uses the Geneve (Generic Network Virtualization Encapsulation) protocol to create an overlay network between nodes. You can use this overlay network to connect VMs on different nodes, without having to configure any additional physical networking infrastructure. A localnet topology connects the secondary network to the physical underlay. This enables both east-west cluster traffic and access to services running outside the cluster, but it requires additional configuration of the underlying Open vSwitch (OVS) system on cluster nodes. Note An OVN-Kubernetes secondary network is compatible with the multi-network policy API which provides the MultiNetworkPolicy custom resource definition (CRD) to control traffic flow to and from VMs. You can use the ipBlock attribute to define network policy ingress and egress rules for specific CIDR blocks. To configure an OVN-Kubernetes secondary network and attach a VM to that network, perform the following steps: Configure an OVN-Kubernetes secondary network by creating a network attachment definition (NAD). Note For localnet topology, you must configure an OVS bridge by creating a NodeNetworkConfigurationPolicy object before creating the NAD. Connect the VM to the OVN-Kubernetes secondary network by adding the network details to the VM specification. 10.9.1. Creating an OVN-Kubernetes NAD You can create an OVN-Kubernetes network attachment definition (NAD) by using the OpenShift Container Platform web console or the CLI. Note Configuring IP address management (IPAM) by specifying the spec.config.ipam.subnet attribute in a network attachment definition for virtual machines is not supported. 10.9.1.1. Creating a NAD for layer 2 topology using the CLI You can create a network attachment definition (NAD) which describes how to attach a pod to the layer 2 overlay network. Prerequisites You have access to the cluster as a user with cluster-admin privileges. You have installed the OpenShift CLI ( oc ). Procedure Create a NetworkAttachmentDefinition object: apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: l2-network namespace: my-namespace spec: config: |- { "cniVersion": "0.3.1", 1 "name": "my-namespace-l2-network", 2 "type": "ovn-k8s-cni-overlay", 3 "topology":"layer2", 4 "mtu": 1300, 5 "netAttachDefName": "my-namespace/l2-network" 6 } 1 The CNI specification version. The required value is 0.3.1 . 2 The name of the network. This attribute is not namespaced. 
For example, you can have a network named l2-network referenced from two different NetworkAttachmentDefinition objects that exist in two different namespaces. This feature is useful to connect VMs in different namespaces. 3 The name of the CNI plug-in to be configured. The required value is ovn-k8s-cni-overlay . 4 The topological configuration for the network. The required value is layer2 . 5 Optional: The maximum transmission unit (MTU) value. The default value is automatically set by the kernel. 6 The value of the namespace and name fields in the metadata stanza of the NetworkAttachmentDefinition object. Note The above example configures a cluster-wide overlay without a subnet defined. This means that the logical switch implementing the network only provides layer 2 communication. You must configure an IP address when you create the virtual machine by either setting a static IP address or by deploying a DHCP server on the network for a dynamic IP address. Apply the manifest: USD oc apply -f <filename>.yaml 10.9.1.2. Creating a NAD for localnet topology using the CLI You can create a network attachment definition (NAD) which describes how to attach a pod to the underlying physical network. Prerequisites You have access to the cluster as a user with cluster-admin privileges. You have installed the OpenShift CLI ( oc ). You have installed the Kubernetes NMState Operator. Procedure Create a NodeNetworkConfigurationPolicy object to map the OVN-Kubernetes secondary network to an Open vSwitch (OVS) bridge: apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: mapping 1 spec: nodeSelector: node-role.kubernetes.io/worker: '' 2 desiredState: ovn: bridge-mappings: - localnet: localnet-network 3 bridge: br-ex 4 state: present 5 1 The name of the configuration object. 2 Specifies the nodes to which the node network configuration policy is to be applied. The recommended node selector value is node-role.kubernetes.io/worker: '' . 3 The name of the additional network from which traffic is forwarded to the OVS bridge. This attribute must match the value of the spec.config.name field of the NetworkAttachmentDefinition object that defines the OVN-Kubernetes additional network. 4 The name of the OVS bridge on the node. This value is required if the state attribute is present . 5 The state of the mapping. Must be either present to add the mapping or absent to remove the mapping. The default value is present . Note OpenShift Virtualization does not support Linux bridge bonding modes 0, 5, and 6. For more information, see Which bonding modes work when used with a bridge that virtual machine guests or containers connect to? . Create a NetworkAttachmentDefinition object: apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: localnet-network namespace: default spec: config: |- { "cniVersion": "0.3.1", 1 "name": "localnet-network", 2 "type": "ovn-k8s-cni-overlay", 3 "topology": "localnet", 4 "netAttachDefName": "default/localnet-network" 5 } 1 The CNI specification version. The required value is 0.3.1 . 2 The name of the network. This attribute must match the value of the spec.desiredState.ovn.bridge-mappings.localnet field of the NodeNetworkConfigurationPolicy object that defines the OVS bridge mapping. 3 The name of the CNI plug-in to be configured. The required value is ovn-k8s-cni-overlay . 4 The topological configuration for the network. The required value is localnet . 
5 The value of the namespace and name fields in the metadata stanza of the NetworkAttachmentDefinition object. Apply the manifest: USD oc apply -f <filename>.yaml 10.9.1.3. Creating a NAD for layer 2 topology by using the web console You can create a network attachment definition (NAD) that describes how to attach a pod to the layer 2 overlay network. Prerequisites You have access to the cluster as a user with cluster-admin privileges. Procedure Go to Networking NetworkAttachmentDefinitions in the web console. Click Create Network Attachment Definition . The network attachment definition must be in the same namespace as the pod or virtual machine using it. Enter a unique Name and optional Description . Select OVN Kubernetes L2 overlay network from the Network Type list. Click Create . 10.9.1.4. Creating a NAD for localnet topology using the web console You can create a network attachment definition (NAD) to connect workloads to a physical network by using the OpenShift Container Platform web console. Prerequisites You have access to the cluster as a user with cluster-admin privileges. Use nmstate to configure the localnet to OVS bridge mappings. Procedure Navigate to Networking NetworkAttachmentDefinitions in the web console. Click Create Network Attachment Definition . The network attachment definition must be in the same namespace as the pod or virtual machine using it. Enter a unique Name and optional Description . Select OVN Kubernetes secondary localnet network from the Network Type list. Enter the name of your pre-configured localnet identifier in the Bridge mapping field. Optional: You can explicitly set MTU to the specified value. The default value is chosen by the kernel. Optional: Encapsulate the traffic in a VLAN. The default value is none. Click Create . 10.9.2. Attaching a virtual machine to the OVN-Kubernetes secondary network You can attach a virtual machine (VM) to the OVN-Kubernetes secondary network interface by using the OpenShift Container Platform web console or the CLI. 10.9.2.1. Attaching a virtual machine to an OVN-Kubernetes secondary network using the CLI You can connect a virtual machine (VM) to the OVN-Kubernetes secondary network by including the network details in the VM configuration. Prerequisites You have access to the cluster as a user with cluster-admin privileges. You have installed the OpenShift CLI ( oc ). Procedure Edit the VirtualMachine manifest to add the OVN-Kubernetes secondary network interface details, as in the following example: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-server spec: runStrategy: Always template: spec: domain: devices: interfaces: - name: secondary 1 bridge: {} resources: requests: memory: 1024Mi networks: - name: secondary 2 multus: networkName: <nad_name> 3 nodeSelector: node-role.kubernetes.io/worker: '' 4 # ... 1 The name of the OVN-Kubernetes secondary interface. 2 The name of the network. This must match the value of the spec.template.spec.domain.devices.interfaces.name field. 3 The name of the NetworkAttachmentDefinition object. 4 Specifies the nodes on which the VM can be scheduled. The recommended node selector value is node-role.kubernetes.io/worker: '' . Apply the VirtualMachine manifest: USD oc apply -f <filename>.yaml Optional: If you edited a running virtual machine, you must restart it for the changes to take effect. 10.9.3. 
Additional resources Creating secondary networks on OVN-Kubernetes About the Kubernetes NMState Operator Creating primary networks using a NetworkAttachmentDefinition 10.10. Hot plugging secondary network interfaces You can add or remove secondary network interfaces without stopping your virtual machine (VM). OpenShift Virtualization supports hot plugging and hot unplugging for secondary interfaces that use bridge binding and the VirtIO device driver. OpenShift Virtualization also supports hot plugging secondary interfaces that use SR-IOV binding. Note Hot unplugging is not supported for Single Root I/O Virtualization (SR-IOV) interfaces. 10.10.1. VirtIO limitations Each VirtIO interface uses one of the limited Peripheral Component Interconnect (PCI) slots in the VM. There are a total of 32 slots available. The PCI slots are also used by other devices and must be reserved in advance, so slots might not be available on demand. OpenShift Virtualization reserves up to four slots for hot plugging interfaces. This includes any existing plugged network interfaces. For example, if your VM has two existing plugged interfaces, you can hot plug two more network interfaces. Note The actual number of slots available for hot plugging also depends on the machine type. For example, the default PCI topology for the q35 machine type supports hot plugging one additional PCIe device. For more information on PCI topology and hot plug support, see the libvirt documentation . If you restart the VM after hot plugging an interface, that interface becomes part of the standard network interfaces. 10.10.2. Hot plugging a secondary network interface by using the CLI Hot plug a secondary network interface to a virtual machine (VM) while the VM is running. Prerequisites A network attachment definition is configured in the same namespace as your VM. You have installed the virtctl tool. You have installed the OpenShift CLI ( oc ). Procedure If the VM to which you want to hot plug the network interface is not running, start it by using the following command: USD virtctl start <vm_name> -n <namespace> Use the following command to add the new network interface to the running VM. Editing the VM specification adds the new network interface to the VM and virtual machine instance (VMI) configuration but does not attach it to the running VM. USD oc edit vm <vm_name> Example VM configuration apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-fedora template: spec: domain: devices: interfaces: - name: defaultnetwork masquerade: {} # new interface - name: <secondary_nic> 1 bridge: {} networks: - name: defaultnetwork pod: {} # new network - name: <secondary_nic> 2 multus: networkName: <nad_name> 3 # ... 1 Specifies the name of the new network interface. 2 Specifies the name of the network. This must be the same as the name of the new network interface that you defined in the template.spec.domain.devices.interfaces list. 3 Specifies the name of the NetworkAttachmentDefinition object.
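If you prefer not to edit the VM interactively, the same two entries can be appended with a JSON patch. The following is a sketch only, assuming the standard VirtualMachine schema where the template is nested under spec; secondary-nic and my-nad are placeholder names for the new interface and the network attachment definition:

oc patch vm vm-fedora --type=json -p '[
  {"op": "add", "path": "/spec/template/spec/domain/devices/interfaces/-", "value": {"name": "secondary-nic", "bridge": {}}},
  {"op": "add", "path": "/spec/template/spec/networks/-", "value": {"name": "secondary-nic", "multus": {"networkName": "my-nad"}}}
]'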
To attach the network interface to the running VM, live migrate the VM by running the following command: USD virtctl migrate <vm_name> Verification Verify that the VM live migration is successful by using the following command: USD oc get VirtualMachineInstanceMigration -w Example output NAME PHASE VMI kubevirt-migrate-vm-lj62q Scheduling vm-fedora kubevirt-migrate-vm-lj62q Scheduled vm-fedora kubevirt-migrate-vm-lj62q PreparingTarget vm-fedora kubevirt-migrate-vm-lj62q TargetReady vm-fedora kubevirt-migrate-vm-lj62q Running vm-fedora kubevirt-migrate-vm-lj62q Succeeded vm-fedora Verify that the new interface is added to the VM by checking the VMI status: USD oc get vmi vm-fedora -ojsonpath="{ @.status.interfaces }" Example output [ { "infoSource": "domain, guest-agent", "interfaceName": "eth0", "ipAddress": "10.130.0.195", "ipAddresses": [ "10.130.0.195", "fd02:0:0:3::43c" ], "mac": "52:54:00:0e:ab:25", "name": "default", "queueCount": 1 }, { "infoSource": "domain, guest-agent, multus-status", "interfaceName": "eth1", "mac": "02:d8:b8:00:00:2a", "name": "bridge-interface", 1 "queueCount": 1 } ] 1 The hot plugged interface appears in the VMI status. 10.10.3. Hot unplugging a secondary network interface by using the CLI You can remove a secondary network interface from a running virtual machine (VM). Note Hot unplugging is not supported for Single Root I/O Virtualization (SR-IOV) interfaces. Prerequisites Your VM must be running. The VM must be created on a cluster running OpenShift Virtualization 4.14 or later. The VM must have a bridge network interface attached. Procedure Edit the VM specification to hot unplug a secondary network interface. Setting the interface state to absent detaches the network interface from the guest, but the interface still exists in the pod. USD oc edit vm <vm_name> Example VM configuration apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-fedora template: spec: domain: devices: interfaces: - name: defaultnetwork masquerade: {} # set the interface state to absent - name: <secondary_nic> state: absent 1 bridge: {} networks: - name: defaultnetwork pod: {} - name: <secondary_nic> multus: networkName: <nad_name> # ... 1 Set the interface state to absent to detach it from the running VM. Removing the interface details from the VM specification does not hot unplug the secondary network interface. Remove the interface from the pod by migrating the VM: USD virtctl migrate <vm_name> 10.10.4. Additional resources Installing virtctl Creating a Linux bridge network attachment definition Connecting a virtual machine to a Linux bridge network Creating an SR-IOV network attachment definition Connecting a virtual machine to an SR-IOV network 10.11. Connecting a virtual machine to a service mesh OpenShift Virtualization is now integrated with OpenShift Service Mesh. You can monitor, visualize, and control traffic between pods that run virtual machine workloads on the default pod network with IPv4. 10.11.1. Adding a virtual machine to a service mesh To add a virtual machine (VM) workload to a service mesh, enable automatic sidecar injection in the VM configuration file by setting the sidecar.istio.io/inject annotation to true . Then expose your VM as a service to view your application in the mesh. Important To avoid port conflicts, do not use ports used by the Istio sidecar proxy. These include ports 15000, 15001, 15006, 15008, 15020, 15021, and 15090. Prerequisites You installed the Service Mesh Operators. You created the Service Mesh control plane. 
You added the VM project to the Service Mesh member roll. Procedure Edit the VM configuration file to add the sidecar.istio.io/inject: "true" annotation: Example configuration file apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: kubevirt.io/vm: vm-istio name: vm-istio spec: runStrategy: Always template: metadata: labels: kubevirt.io/vm: vm-istio app: vm-istio 1 annotations: sidecar.istio.io/inject: "true" 2 spec: domain: devices: interfaces: - name: default masquerade: {} 3 disks: - disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk resources: requests: memory: 1024M networks: - name: default pod: {} terminationGracePeriodSeconds: 180 volumes: - containerDisk: image: registry:5000/kubevirt/fedora-cloud-container-disk-demo:devel name: containerdisk 1 The key/value pair (label) that must be matched to the service selector attribute. 2 The annotation to enable automatic sidecar injection. 3 The binding method (masquerade mode) for use with the default pod network. Apply the VM configuration: USD oc apply -f <vm_name>.yaml 1 1 The name of the virtual machine YAML file. Create a Service object to expose your VM to the service mesh. apiVersion: v1 kind: Service metadata: name: vm-istio spec: selector: app: vm-istio 1 ports: - port: 8080 name: http protocol: TCP 1 The service selector that determines the set of pods targeted by a service. This attribute corresponds to the spec.metadata.labels field in the VM configuration file. In the above example, the Service object named vm-istio targets TCP port 8080 on any pod with the label app=vm-istio . Create the service: USD oc create -f <service_name>.yaml 1 1 The name of the service YAML file. 10.11.2. Additional resources Installing the Service Mesh Operators Creating the Service Mesh control plane Adding projects to the Service Mesh member roll 10.12. Configuring a dedicated network for live migration You can configure a dedicated Multus network for live migration. A dedicated network minimizes the effects of network saturation on tenant workloads during live migration. 10.12.1. Configuring a dedicated secondary network for live migration To configure a dedicated secondary network for live migration, you must first create a bridge network attachment definition (NAD) by using the CLI. Then, you add the name of the NetworkAttachmentDefinition object to the HyperConverged custom resource (CR). Prerequisites You installed the OpenShift CLI ( oc ). You logged in to the cluster as a user with the cluster-admin role. Each node has at least two Network Interface Cards (NICs). The NICs for live migration are connected to the same VLAN. Procedure Create a NetworkAttachmentDefinition manifest according to the following example: Example configuration file apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: my-secondary-network 1 namespace: openshift-cnv spec: config: '{ "cniVersion": "0.3.1", "name": "migration-bridge", "type": "macvlan", "master": "eth1", 2 "mode": "bridge", "ipam": { "type": "whereabouts", 3 "range": "10.200.5.0/24" 4 } }' 1 Specify the name of the NetworkAttachmentDefinition object. 2 Specify the name of the NIC to be used for live migration. 3 Specify the name of the CNI plugin that provides the network for the NAD. 4 Specify an IP address range for the secondary network. This range must not overlap the IP addresses of the main network. 
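After saving the manifest, apply it so that the NetworkAttachmentDefinition object exists before you reference it in the HyperConverged CR. This is a minimal sketch that assumes the manifest is saved as my-secondary-network.yaml:

oc apply -f my-secondary-network.yaml

You can confirm that the object exists by running oc get network-attachment-definition -n openshift-cnv.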
Open the HyperConverged CR in your default editor by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Add the name of the NetworkAttachmentDefinition object to the spec.liveMigrationConfig stanza of the HyperConverged CR: Example HyperConverged manifest apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: liveMigrationConfig: completionTimeoutPerGiB: 800 network: <network> 1 parallelMigrationsPerCluster: 5 parallelOutboundMigrationsPerNode: 2 progressTimeout: 150 # ... 1 Specify the name of the Multus NetworkAttachmentDefinition object to be used for live migrations. Save your changes and exit the editor. The virt-handler pods restart and connect to the secondary network. Verification When the node that the virtual machine runs on is placed into maintenance mode, the VM automatically migrates to another node in the cluster. You can verify that the migration occurred over the secondary network and not the default pod network by checking the target IP address in the virtual machine instance (VMI) metadata. USD oc get vmi <vmi_name> -o jsonpath='{.status.migrationState.targetNodeAddress}' 10.12.2. Selecting a dedicated network by using the web console You can select a dedicated network for live migration by using the OpenShift Container Platform web console. Prerequisites You configured a Multus network for live migration. You created a network attachment definition for the network. Procedure Navigate to Virtualization > Overview in the OpenShift Container Platform web console. Click the Settings tab and then click Live migration . Select the network from the Live migration network list. 10.12.3. Additional resources Configuring live migration limits and timeouts 10.13. Configuring and viewing IP addresses You can configure an IP address when you create a virtual machine (VM). The IP address is provisioned with cloud-init. You can view the IP address of a VM by using the OpenShift Container Platform web console or the command line. The network information is collected by the QEMU guest agent. 10.13.1. Configuring IP addresses for virtual machines You can configure a static IP address when you create a virtual machine (VM) by using the web console or the command line. You can configure a dynamic IP address when you create a VM by using the command line. The IP address is provisioned with cloud-init. 10.13.1.1. Configuring an IP address when creating a virtual machine by using the command line You can configure a static or dynamic IP address when you create a virtual machine (VM). The IP address is provisioned with cloud-init. Note If the VM is connected to the pod network, the pod network interface is the default route unless you update it. Prerequisites The virtual machine is connected to a secondary network. You have a DHCP server available on the secondary network to configure a dynamic IP for the virtual machine. Procedure Edit the spec.template.spec.volumes.cloudInitNoCloud.networkData stanza of the virtual machine configuration: To configure a dynamic IP address, specify the interface name and enable DHCP: kind: VirtualMachine spec: # ... template: # ... spec: volumes: - cloudInitNoCloud: networkData: | version: 2 ethernets: eth1: 1 dhcp4: true 1 Specify the interface name. To configure a static IP, specify the interface name and the IP address: kind: VirtualMachine spec: # ... template: # ... 
spec: volumes: - cloudInitNoCloud: networkData: | version: 2 ethernets: eth1: 1 addresses: - 10.10.10.14/24 2 1 Specify the interface name. 2 Specify the static IP address. 10.13.2. Viewing IP addresses of virtual machines You can view the IP address of a VM by using the OpenShift Container Platform web console or the command line. The network information is collected by the QEMU guest agent. 10.13.2.1. Viewing the IP address of a virtual machine by using the web console You can view the IP address of a virtual machine (VM) by using the OpenShift Container Platform web console. Note You must install the QEMU guest agent on a VM to view the IP address of a secondary network interface. A pod network interface does not require the QEMU guest agent. Procedure In the OpenShift Container Platform console, click Virtualization VirtualMachines from the side menu. Select a VM to open the VirtualMachine details page. Click the Details tab to view the IP address. 10.13.2.2. Viewing the IP address of a virtual machine by using the command line You can view the IP address of a virtual machine (VM) by using the command line. Note You must install the QEMU guest agent on a VM to view the IP address of a secondary network interface. A pod network interface does not require the QEMU guest agent. Procedure Obtain the virtual machine instance configuration by running the following command: USD oc describe vmi <vmi_name> Example output # ... Interfaces: Interface Name: eth0 Ip Address: 10.244.0.37/24 Ip Addresses: 10.244.0.37/24 fe80::858:aff:fef4:25/64 Mac: 0a:58:0a:f4:00:25 Name: default Interface Name: v2 Ip Address: 1.1.1.7/24 Ip Addresses: 1.1.1.7/24 fe80::f4d9:70ff:fe13:9089/64 Mac: f6:d9:70:13:90:89 Interface Name: v1 Ip Address: 1.1.1.1/24 Ip Addresses: 1.1.1.1/24 1.1.1.2/24 1.1.1.4/24 2001:de7:0:f101::1/64 2001:db8:0:f101::1/64 fe80::1420:84ff:fe10:17aa/64 Mac: 16:20:84:10:17:aa 10.13.3. Additional resources Installing the QEMU guest agent 10.14. Accessing a virtual machine by using its external FQDN You can access a virtual machine (VM) that is attached to a secondary network interface from outside the cluster by using its fully qualified domain name (FQDN). Important Accessing a VM from outside the cluster by using its FQDN is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 10.14.1. Configuring a DNS server for secondary networks The Cluster Network Addons Operator (CNAO) deploys a Domain Name Server (DNS) server and monitoring components when you enable the deployKubeSecondaryDNS feature gate in the HyperConverged custom resource (CR). Prerequisites You installed the OpenShift CLI ( oc ). You configured a load balancer for the cluster. You logged in to the cluster with cluster-admin permissions. 
Procedure Edit the HyperConverged CR in your default editor by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Enable the DNS server and monitoring components according to the following example: apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: featureGates: deployKubeSecondaryDNS: true 1 # ... 1 Enables the DNS server Save the file and exit the editor. Create a load balancer service to expose the DNS server outside the cluster by running the oc expose command according to the following example: USD oc expose -n openshift-cnv deployment/secondary-dns --name=dns-lb \ --type=LoadBalancer --port=53 --target-port=5353 --protocol='UDP' Retrieve the external IP address by running the following command: USD oc get service -n openshift-cnv Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE dns-lb LoadBalancer 172.30.27.5 10.46.41.94 53:31829/TCP 5s Edit the HyperConverged CR again: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Add the external IP address that you previously retrieved to the kubeSecondaryDNSNameServerIP field in the enterprise DNS server records. For example: apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: featureGates: deployKubeSecondaryDNS: true kubeSecondaryDNSNameServerIP: "10.46.41.94" 1 # ... 1 Specify the external IP address exposed by the load balancer service. Save the file and exit the editor. Retrieve the cluster FQDN by running the following command: USD oc get dnses.config.openshift.io cluster -o jsonpath='{.spec.baseDomain}' Example output openshift.example.com Point to the DNS server. To do so, add the kubeSecondaryDNSNameServerIP value and the cluster FQDN to the enterprise DNS server records. For example: vm.<FQDN>. IN NS ns.vm.<FQDN>. ns.vm.<FQDN>. IN A <kubeSecondaryDNSNameServerIP> 10.14.2. Connecting to a VM on a secondary network by using the cluster FQDN You can access a running virtual machine (VM) attached to a secondary network interface by using the fully qualified domain name (FQDN) of the cluster. Prerequisites You installed the QEMU guest agent on the VM. The IP address of the VM is public. You configured the DNS server for secondary networks. You retrieved the fully qualified domain name (FQDN) of the cluster. To obtain the FQDN, use the oc get command as follows: USD oc get dnses.config.openshift.io cluster -o json | jq .spec.baseDomain Procedure Retrieve the network interface name from the VM configuration by running the following command: USD oc get vm -n <namespace> <vm_name> -o yaml Example output apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: runStrategy: Always template: spec: domain: devices: interfaces: - bridge: {} name: example-nic # ... networks: - multus: networkName: bridge-conf name: example-nic 1 1 Note the name of the network interface. Connect to the VM by using the ssh command: USD ssh <user_name>@<interface_name>.<vm_name>.<namespace>.vm.<cluster_fqdn> 10.14.3. Additional resources Configuring ingress cluster traffic using a load balancer About MetalLB and the MetalLB Operator Configuring IP addresses for virtual machines 10.15. Managing MAC address pools for network interfaces The KubeMacPool component allocates MAC addresses for virtual machine (VM) network interfaces from a shared MAC address pool. 
This ensures that each network interface is assigned a unique MAC address. A virtual machine instance created from that VM retains the assigned MAC address across reboots. Note KubeMacPool does not handle virtual machine instances created independently from a virtual machine. 10.15.1. Managing KubeMacPool by using the command line You can disable and re-enable KubeMacPool by using the command line. KubeMacPool is enabled by default. Procedure To disable KubeMacPool in two namespaces, run the following command: USD oc label namespace <namespace1> <namespace2> mutatevirtualmachines.kubemacpool.io=ignore To re-enable KubeMacPool in two namespaces, run the following command: USD oc label namespace <namespace1> <namespace2> mutatevirtualmachines.kubemacpool.io- | [
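To check which namespaces currently opt out of KubeMacPool, you can filter namespaces by the label. This is a sketch only:

oc get namespace -l mutatevirtualmachines.kubemacpool.io=ignore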
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm spec: template: spec: domain: devices: interfaces: - name: default masquerade: {} 1 ports: 2 - port: 80 networks: - name: default pod: {}",
"oc create -f <vm-name>.yaml",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm-ipv6 spec: template: spec: domain: devices: interfaces: - name: default masquerade: {} 1 ports: - port: 80 2 networks: - name: default pod: {} volumes: - cloudInitNoCloud: networkData: | version: 2 ethernets: eth0: dhcp4: true addresses: [ fd10:0:2::2/120 ] 3 gateway6: fd10:0:2::1 4",
"oc create -f example-vm-ipv6.yaml",
"oc get vmi <vmi-name> -o jsonpath=\"{.status.interfaces[*].ipAddresses}\"",
"apiVersion: v1 kind: Namespace metadata: name: udn_namespace labels: k8s.ovn.org/primary-user-defined-network: \"\" 1",
"apply -f <filename>.yaml",
"apiVersion: k8s.ovn.org/v1 kind: UserDefinedNetwork metadata: name: udn-l2-net 1 namespace: my-namespace 2 spec: topology: Layer2 3 layer2: role: Primary 4 subnets: - \"10.0.0.0/24\" - \"2001:db8::/60\" ipam: lifecycle: Persistent 5",
"oc apply -f --validate=true <filename>.yaml",
"kind: ClusterUserDefinedNetwork metadata: name: cudn-l2-net 1 spec: namespaceSelector: 2 matchExpressions: 3 - key: kubernetes.io/metadata.name operator: In 4 values: [\"red-namespace\", \"blue-namespace\"] network: topology: Layer2 5 layer2: role: Primary 6 ipam: lifecycle: Persistent subnets: - 203.203.0.0/16",
"oc apply -f --validate=true <filename>.yaml",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: my-namespace 1 spec: template: spec: domain: devices: interfaces: - name: udn-l2-net 2 binding: name: l2bridge 3 networks: - name: udn-l2-net 4 pod: {}",
"oc apply -f <filename>.yaml",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: runStrategy: Halted template: metadata: labels: special: key 1",
"apiVersion: v1 kind: Service metadata: name: example-service namespace: example-namespace spec: selector: special: key 1 type: NodePort 2 ports: 3 protocol: TCP port: 80 targetPort: 9376 nodePort: 30000",
"oc create -f example-service.yaml",
"oc get service -n example-namespace",
"apiVersion: v1 kind: Service metadata: name: mysubdomain 1 spec: selector: expose: me 2 clusterIP: None 3 ports: 4 - protocol: TCP port: 1234 targetPort: 1234",
"oc create -f headless_service.yaml",
"oc edit vm <vm_name>",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-fedora spec: template: metadata: labels: expose: me 1 spec: hostname: \"myvm\" 2 subdomain: \"mysubdomain\" 3",
"virtctl console vm-fedora",
"ping myvm.mysubdomain.<namespace>.svc.cluster.local",
"PING myvm.mysubdomain.default.svc.cluster.local (10.244.0.57) 56(84) bytes of data. 64 bytes from myvm.mysubdomain.default.svc.cluster.local (10.244.0.57): icmp_seq=1 ttl=64 time=0.029 ms",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: br1-eth1-policy 1 spec: desiredState: interfaces: - name: br1 2 description: Linux bridge with eth1 as a port 3 type: linux-bridge 4 state: up 5 ipv4: enabled: false 6 bridge: options: stp: enabled: false 7 port: - name: eth1 8",
"apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: bridge-network 1 annotations: k8s.v1.cni.cncf.io/resourceName: bridge.network.kubevirt.io/br1 2 spec: config: | { \"cniVersion\": \"0.3.1\", \"name\": \"bridge-network\", 3 \"type\": \"bridge\", 4 \"bridge\": \"br1\", 5 \"macspoofchk\": false, 6 \"vlan\": 100, 7 \"disableContainerInterface\": true, \"preserveDefaultVlan\": false 8 }",
"oc create -f network-attachment-definition.yaml 1",
"oc get network-attachment-definition bridge-network",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm spec: template: spec: domain: devices: interfaces: - bridge: {} name: bridge-net 1 networks: - name: bridge-net 2 multus: networkName: a-bridge-network 3",
"oc apply -f example-vm.yaml",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" 4 priority: <priority> 5 mtu: <mtu> 6 numVfs: <num> 7 nicSelector: 8 vendor: \"<vendor_code>\" 9 deviceID: \"<device_id>\" 10 pfNames: [\"<pf_name>\", ...] 11 rootDevices: [\"<pci_bus_id>\", \"...\"] 12 deviceType: vfio-pci 13 isRdma: false 14",
"oc create -f <name>-sriov-node-network.yaml",
"oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath='{.status.syncStatus}'",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 networkNamespace: <target_namespace> 4 vlan: <vlan> 5 spoofChk: \"<spoof_check>\" 6 linkState: <link_state> 7 maxTxRate: <max_tx_rate> 8 minTxRate: <min_rx_rate> 9 vlanQoS: <vlan_qos> 10 trust: \"<trust_vf>\" 11 capabilities: <capabilities> 12",
"oc create -f <name>-sriov-network.yaml",
"oc get net-attach-def -n <namespace>",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm spec: domain: devices: interfaces: - name: nic1 1 sriov: {} networks: - name: nic1 2 multus: networkName: sriov-network 3",
"oc apply -f <vm_sriov>.yaml 1",
"oc label node <node_name> node-role.kubernetes.io/worker-dpdk=\"\"",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-dpdk labels: machineconfiguration.openshift.io/role: worker-dpdk spec: machineConfigSelector: matchExpressions: - key: machineconfiguration.openshift.io/role operator: In values: - worker - worker-dpdk nodeSelector: matchLabels: node-role.kubernetes.io/worker-dpdk: \"\"",
"apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: profile-1 spec: cpu: isolated: 4-39,44-79 reserved: 0-3,40-43 globallyDisableIrqLoadBalancing: true hugepages: defaultHugepagesSize: 1G pages: - count: 8 node: 0 size: 1G net: userLevelNetworking: true nodeSelector: node-role.kubernetes.io/worker-dpdk: \"\" numa: topologyPolicy: single-numa-node",
"oc get performanceprofiles.performance.openshift.io profile-1 -o=jsonpath='{.status.runtimeClass}{\"\\n\"}'",
"oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type='json' -p='[{\"op\": \"add\", \"path\": \"/spec/defaultRuntimeClass\", \"value\":\"<runtimeclass-name>\"}]'",
"oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type='json' -p='[{\"op\": \"replace\", \"path\": \"/spec/featureGates/alignCPUs\", \"value\": true}]'",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-1 namespace: openshift-sriov-network-operator spec: resourceName: intel_nics_dpdk deviceType: vfio-pci mtu: 9000 numVfs: 4 priority: 99 nicSelector: vendor: \"8086\" deviceID: \"1572\" pfNames: - eno3 rootDevices: - \"0000:19:00.2\" nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\"",
"oc label node <node_name> node-role.kubernetes.io/worker-dpdk-",
"oc delete mcp worker-dpdk",
"oc create ns dpdk-checkup-ns",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: dpdk-sriovnetwork namespace: openshift-sriov-network-operator spec: ipam: | { \"type\": \"host-local\", \"subnet\": \"10.56.217.0/24\", \"rangeStart\": \"10.56.217.171\", \"rangeEnd\": \"10.56.217.181\", \"routes\": [{ \"dst\": \"0.0.0.0/0\" }], \"gateway\": \"10.56.217.1\" } networkNamespace: dpdk-checkup-ns 1 resourceName: intel_nics_dpdk 2 spoofChk: \"off\" trust: \"on\" vlan: 1019",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: rhel-dpdk-vm spec: runStrategy: Always template: metadata: annotations: cpu-load-balancing.crio.io: disable 1 cpu-quota.crio.io: disable 2 irq-load-balancing.crio.io: disable 3 spec: domain: cpu: sockets: 1 4 cores: 5 5 threads: 2 dedicatedCpuPlacement: true isolateEmulatorThread: true interfaces: - masquerade: {} name: default - model: virtio name: nic-east pciAddress: '0000:07:00.0' sriov: {} networkInterfaceMultiqueue: true rng: {} memory: hugepages: pageSize: 1Gi 6 guest: 8Gi networks: - name: default pod: {} - multus: networkName: dpdk-net 7 name: nic-east",
"oc apply -f <file_name>.yaml",
"grubby --update-kernel=ALL --args=\"default_hugepagesz=1GB hugepagesz=1G hugepages=8\"",
"dnf install -y tuned-profiles-cpu-partitioning",
"echo isolated_cores=2-9 > /etc/tuned/cpu-partitioning-variables.conf",
"tuned-adm profile cpu-partitioning",
"dnf install -y driverctl",
"driverctl set-override 0000:07:00.0 vfio-pci",
"apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: l2-network namespace: my-namespace spec: config: |- { \"cniVersion\": \"0.3.1\", 1 \"name\": \"my-namespace-l2-network\", 2 \"type\": \"ovn-k8s-cni-overlay\", 3 \"topology\":\"layer2\", 4 \"mtu\": 1300, 5 \"netAttachDefName\": \"my-namespace/l2-network\" 6 }",
"oc apply -f <filename>.yaml",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: mapping 1 spec: nodeSelector: node-role.kubernetes.io/worker: '' 2 desiredState: ovn: bridge-mappings: - localnet: localnet-network 3 bridge: br-ex 4 state: present 5",
"apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: localnet-network namespace: default spec: config: |- { \"cniVersion\": \"0.3.1\", 1 \"name\": \"localnet-network\", 2 \"type\": \"ovn-k8s-cni-overlay\", 3 \"topology\": \"localnet\", 4 \"netAttachDefName\": \"default/localnet-network\" 5 }",
"oc apply -f <filename>.yaml",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-server spec: runStrategy: Always template: spec: domain: devices: interfaces: - name: secondary 1 bridge: {} resources: requests: memory: 1024Mi networks: - name: secondary 2 multus: networkName: <nad_name> 3 nodeSelector: node-role.kubernetes.io/worker: '' 4",
"oc apply -f <filename>.yaml",
"virtctl start <vm_name> -n <namespace>",
"oc edit vm <vm_name>",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-fedora template: spec: domain: devices: interfaces: - name: defaultnetwork masquerade: {} # new interface - name: <secondary_nic> 1 bridge: {} networks: - name: defaultnetwork pod: {} # new network - name: <secondary_nic> 2 multus: networkName: <nad_name> 3",
"virtctl migrate <vm_name>",
"oc get VirtualMachineInstanceMigration -w",
"NAME PHASE VMI kubevirt-migrate-vm-lj62q Scheduling vm-fedora kubevirt-migrate-vm-lj62q Scheduled vm-fedora kubevirt-migrate-vm-lj62q PreparingTarget vm-fedora kubevirt-migrate-vm-lj62q TargetReady vm-fedora kubevirt-migrate-vm-lj62q Running vm-fedora kubevirt-migrate-vm-lj62q Succeeded vm-fedora",
"oc get vmi vm-fedora -ojsonpath=\"{ @.status.interfaces }\"",
"[ { \"infoSource\": \"domain, guest-agent\", \"interfaceName\": \"eth0\", \"ipAddress\": \"10.130.0.195\", \"ipAddresses\": [ \"10.130.0.195\", \"fd02:0:0:3::43c\" ], \"mac\": \"52:54:00:0e:ab:25\", \"name\": \"default\", \"queueCount\": 1 }, { \"infoSource\": \"domain, guest-agent, multus-status\", \"interfaceName\": \"eth1\", \"mac\": \"02:d8:b8:00:00:2a\", \"name\": \"bridge-interface\", 1 \"queueCount\": 1 } ]",
"oc edit vm <vm_name>",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-fedora template: spec: domain: devices: interfaces: - name: defaultnetwork masquerade: {} # set the interface state to absent - name: <secondary_nic> state: absent 1 bridge: {} networks: - name: defaultnetwork pod: {} - name: <secondary_nic> multus: networkName: <nad_name>",
"virtctl migrate <vm_name>",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: kubevirt.io/vm: vm-istio name: vm-istio spec: runStrategy: Always template: metadata: labels: kubevirt.io/vm: vm-istio app: vm-istio 1 annotations: sidecar.istio.io/inject: \"true\" 2 spec: domain: devices: interfaces: - name: default masquerade: {} 3 disks: - disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk resources: requests: memory: 1024M networks: - name: default pod: {} terminationGracePeriodSeconds: 180 volumes: - containerDisk: image: registry:5000/kubevirt/fedora-cloud-container-disk-demo:devel name: containerdisk",
"oc apply -f <vm_name>.yaml 1",
"apiVersion: v1 kind: Service metadata: name: vm-istio spec: selector: app: vm-istio 1 ports: - port: 8080 name: http protocol: TCP",
"oc create -f <service_name>.yaml 1",
"apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: my-secondary-network 1 namespace: openshift-cnv spec: config: '{ \"cniVersion\": \"0.3.1\", \"name\": \"migration-bridge\", \"type\": \"macvlan\", \"master\": \"eth1\", 2 \"mode\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", 3 \"range\": \"10.200.5.0/24\" 4 } }'",
"oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: liveMigrationConfig: completionTimeoutPerGiB: 800 network: <network> 1 parallelMigrationsPerCluster: 5 parallelOutboundMigrationsPerNode: 2 progressTimeout: 150",
"oc get vmi <vmi_name> -o jsonpath='{.status.migrationState.targetNodeAddress}'",
"kind: VirtualMachine spec: template: # spec: volumes: - cloudInitNoCloud: networkData: | version: 2 ethernets: eth1: 1 dhcp4: true",
"kind: VirtualMachine spec: template: # spec: volumes: - cloudInitNoCloud: networkData: | version: 2 ethernets: eth1: 1 addresses: - 10.10.10.14/24 2",
"oc describe vmi <vmi_name>",
"Interfaces: Interface Name: eth0 Ip Address: 10.244.0.37/24 Ip Addresses: 10.244.0.37/24 fe80::858:aff:fef4:25/64 Mac: 0a:58:0a:f4:00:25 Name: default Interface Name: v2 Ip Address: 1.1.1.7/24 Ip Addresses: 1.1.1.7/24 fe80::f4d9:70ff:fe13:9089/64 Mac: f6:d9:70:13:90:89 Interface Name: v1 Ip Address: 1.1.1.1/24 Ip Addresses: 1.1.1.1/24 1.1.1.2/24 1.1.1.4/24 2001:de7:0:f101::1/64 2001:db8:0:f101::1/64 fe80::1420:84ff:fe10:17aa/64 Mac: 16:20:84:10:17:aa",
"oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: featureGates: deployKubeSecondaryDNS: true 1",
"oc expose -n openshift-cnv deployment/secondary-dns --name=dns-lb --type=LoadBalancer --port=53 --target-port=5353 --protocol='UDP'",
"oc get service -n openshift-cnv",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE dns-lb LoadBalancer 172.30.27.5 10.46.41.94 53:31829/TCP 5s",
"oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: featureGates: deployKubeSecondaryDNS: true kubeSecondaryDNSNameServerIP: \"10.46.41.94\" 1",
"oc get dnses.config.openshift.io cluster -o jsonpath='{.spec.baseDomain}'",
"openshift.example.com",
"vm.<FQDN>. IN NS ns.vm.<FQDN>.",
"ns.vm.<FQDN>. IN A <kubeSecondaryDNSNameServerIP>",
"oc get dnses.config.openshift.io cluster -o json | jq .spec.baseDomain",
"oc get vm -n <namespace> <vm_name> -o yaml",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: runStrategy: Always template: spec: domain: devices: interfaces: - bridge: {} name: example-nic networks: - multus: networkName: bridge-conf name: example-nic 1",
"ssh <user_name>@<interface_name>.<vm_name>.<namespace>.vm.<cluster_fqdn>",
"oc label namespace <namespace1> <namespace2> mutatevirtualmachines.kubemacpool.io=ignore",
"oc label namespace <namespace1> <namespace2> mutatevirtualmachines.kubemacpool.io-"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/virtualization/networking |
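For quick verification of the secondary DNS setup described above, the name server and the SSH path can be exercised from any workstation that can reach the load balancer IP. The following is a minimal sketch that reuses the example values from this procedure (load balancer IP 10.46.41.94, cluster FQDN openshift.example.com, VM example-vm with interface example-nic in namespace example-namespace); substitute the values from your own cluster.
# Query the exposed secondary DNS server directly to confirm that it resolves the VM interface FQDN.
dig +short @10.46.41.94 example-nic.example-vm.example-namespace.vm.openshift.example.com
# After the delegation records are added to the enterprise DNS server, the same name should resolve without targeting the load balancer IP.
dig +short example-nic.example-vm.example-namespace.vm.openshift.example.com
# Connect to the VM over SSH by using the cluster FQDN.
ssh <user_name>@example-nic.example-vm.example-namespace.vm.openshift.example.com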
probe::udp.recvmsg.return | probe::udp.recvmsg.return Name probe::udp.recvmsg.return - Fires whenever an attempt to receive a UDP message is completed Synopsis Values name The name of this probe size Number of bytes received by the process Context The process which received a UDP message | [
"udp.recvmsg.return"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-udp-recvmsg-return |
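As an illustrative use of this probe point (a sketch only, assuming root privileges and installed kernel debuginfo), a one-line SystemTap session prints the receiving process and byte count for every completed UDP receive:
# Trace completed UDP receives until interrupted with Ctrl+C.
stap -e 'probe udp.recvmsg.return { printf("%s: %s received %d bytes\n", name, execname(), size) }'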
probe::linuxmib.TCPMemoryPressures | probe::linuxmib.TCPMemoryPressures Name probe::linuxmib.TCPMemoryPressures - Count of times memory pressure was used Synopsis linuxmib.TCPMemoryPressures Values sk Pointer to the struct sock being acted on op Value to be added to the counter (default value of 1) Description The packet pointed to by skb is filtered by the function linuxmib_filter_key . If the packet passes the filter it is counted in the global TCPMemoryPressures (equivalent to SNMP's MIB LINUX_MIB_TCPMEMORYPRESSURES) | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-linuxmib-tcpmemorypressures
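A usage sketch for this probe (assumptions: root privileges, kernel debuginfo installed): the counter can be accumulated in a short SystemTap session that sums op and reports the total on exit.
# Count TCP memory-pressure events until interrupted with Ctrl+C.
stap -e 'global pressures
probe linuxmib.TCPMemoryPressures { pressures += op }
probe end { printf("TCPMemoryPressures events: %d\n", pressures) }'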
Appendix A. Using your subscription | Appendix A. Using your subscription Streams for Apache Kafka is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal. Accessing Your Account Go to access.redhat.com . If you do not already have an account, create one. Log in to your account. Activating a Subscription Go to access.redhat.com . Navigate to My Subscriptions . Navigate to Activate a subscription and enter your 16-digit activation number. Downloading Zip and Tar Files To access zip or tar files, use the customer portal to find the relevant files for download. If you are using RPM packages, this step is not required. Open a browser and log in to the Red Hat Customer Portal Product Downloads page at access.redhat.com/downloads . Locate the Streams for Apache Kafka entries in the INTEGRATION AND AUTOMATION category. Select the desired Streams for Apache Kafka product. The Software Downloads page opens. Click the Download link for your component. Installing packages with DNF To install a package and all the package dependencies, use: dnf install <package_name> To install a previously-downloaded package from a local directory, use: dnf install <path_to_download_package> Revised on 2025-03-05 17:09:40 UTC | [
"dnf install <package_name>",
"dnf install <path_to_download_package>"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/developing_kafka_client_applications/using_your_subscription |
Part IV. Designing a decision service using guided decision tables | Part IV. Designing a decision service using guided decision tables As a business analyst or business rules developer, you can use guided decision tables to define business rules in a wizard-led tabular format. These rules are compiled into Drools Rule Language (DRL) and form the core of the decision service for your project. Note You can also design your decision service using Decision Model and Notation (DMN) models instead of rule-based or table-based assets. For information about DMN support in Red Hat Decision Manager 7.13, see the following resources: Getting started with decision services (step-by-step tutorial with a DMN decision service example) Designing a decision service using DMN models (overview of DMN support and capabilities in Red Hat Decision Manager) Prerequisites The space and project for the guided decision tables have been created in Business Central. Each asset is associated with a project assigned to a space. For details, see Getting started with decision services . | null | https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/developing_decision_services_in_red_hat_decision_manager/assembly-guided-decision-tables |
Chapter 6. Applying autoscaling to an OpenShift Container Platform cluster | Chapter 6. Applying autoscaling to an OpenShift Container Platform cluster Applying autoscaling to an OpenShift Container Platform cluster involves deploying a cluster autoscaler and then deploying machine autoscalers for each machine type in your cluster. Important You can configure the cluster autoscaler only in clusters where the machine API is operational. 6.1. About the cluster autoscaler The cluster autoscaler adjusts the size of an OpenShift Container Platform cluster to meet its current deployment needs. It uses declarative, Kubernetes-style arguments to provide infrastructure management that does not rely on objects of a specific cloud provider. The cluster autoscaler has a cluster scope, and is not associated with a particular namespace. The cluster autoscaler increases the size of the cluster when there are pods that fail to schedule on any of the current worker nodes due to insufficient resources or when another node is necessary to meet deployment needs. The cluster autoscaler does not increase the cluster resources beyond the limits that you specify. The cluster autoscaler computes the total memory, CPU, and GPU on all nodes the cluster, even though it does not manage the control plane nodes. These values are not single-machine oriented. They are an aggregation of all the resources in the entire cluster. For example, if you set the maximum memory resource limit, the cluster autoscaler includes all the nodes in the cluster when calculating the current memory usage. That calculation is then used to determine if the cluster autoscaler has the capacity to add more worker resources. Important Ensure that the maxNodesTotal value in the ClusterAutoscaler resource definition that you create is large enough to account for the total possible number of machines in your cluster. This value must encompass the number of control plane machines and the possible number of compute machines that you might scale to. Every 10 seconds, the cluster autoscaler checks which nodes are unnecessary in the cluster and removes them. The cluster autoscaler considers a node for removal if the following conditions apply: The node utilization is less than the node utilization level threshold for the cluster. The node utilization level is the sum of the requested resources divided by the allocated resources for the node. If you do not specify a value in the ClusterAutoscaler custom resource, the cluster autoscaler uses a default value of 0.5 , which corresponds to 50% utilization. The cluster autoscaler can move all pods running on the node to the other nodes. The Kubernetes scheduler is responsible for scheduling pods on the nodes. The cluster autoscaler does not have scale down disabled annotation. If the following types of pods are present on a node, the cluster autoscaler will not remove the node: Pods with restrictive pod disruption budgets (PDBs). Kube-system pods that do not run on the node by default. Kube-system pods that do not have a PDB or have a PDB that is too restrictive. Pods that are not backed by a controller object such as a deployment, replica set, or stateful set. Pods with local storage. Pods that cannot be moved elsewhere because of a lack of resources, incompatible node selectors or affinity, matching anti-affinity, and so on. Unless they also have a "cluster-autoscaler.kubernetes.io/safe-to-evict": "true" annotation, pods that have a "cluster-autoscaler.kubernetes.io/safe-to-evict": "false" annotation. 
For example, you set the maximum CPU limit to 64 cores and configure the cluster autoscaler to only create machines that have 8 cores each. If your cluster starts with 30 cores, the cluster autoscaler can add up to 4 more nodes with 32 cores, for a total of 62. If you configure the cluster autoscaler, additional usage restrictions apply: Do not modify the nodes that are in autoscaled node groups directly. All nodes within the same node group have the same capacity and labels and run the same system pods. Specify requests for your pods. If you have to prevent pods from being deleted too quickly, configure appropriate PDBs. Confirm that your cloud provider quota is large enough to support the maximum node pools that you configure. Do not run additional node group autoscalers, especially the ones offered by your cloud provider. The horizontal pod autoscaler (HPA) and the cluster autoscaler modify cluster resources in different ways. The HPA changes the deployment's or replica set's number of replicas based on the current CPU load. If the load increases, the HPA creates new replicas, regardless of the amount of resources available to the cluster. If there are not enough resources, the cluster autoscaler adds resources so that the HPA-created pods can run. If the load decreases, the HPA stops some replicas. If this action causes some nodes to be underutilized or completely empty, the cluster autoscaler deletes the unnecessary nodes. The cluster autoscaler takes pod priorities into account. The Pod Priority and Preemption feature enables scheduling pods based on priorities if the cluster does not have enough resources, but the cluster autoscaler ensures that the cluster has resources to run all pods. To honor the intention of both features, the cluster autoscaler includes a priority cutoff function. You can use this cutoff to schedule "best-effort" pods, which do not cause the cluster autoscaler to increase resources but instead run only when spare resources are available. Pods with priority lower than the cutoff value do not cause the cluster to scale up or prevent the cluster from scaling down. No new nodes are added to run the pods, and nodes running these pods might be deleted to free resources. Cluster autoscaling is supported for the platforms that have machine API available on it. 6.2. Configuring the cluster autoscaler First, deploy the cluster autoscaler to manage automatic resource scaling in your OpenShift Container Platform cluster. Note Because the cluster autoscaler is scoped to the entire cluster, you can make only one cluster autoscaler for the cluster. 6.2.1. ClusterAutoscaler resource definition This ClusterAutoscaler resource definition shows the parameters and sample values for the cluster autoscaler. apiVersion: "autoscaling.openshift.io/v1" kind: "ClusterAutoscaler" metadata: name: "default" spec: podPriorityThreshold: -10 1 resourceLimits: maxNodesTotal: 24 2 cores: min: 8 3 max: 128 4 memory: min: 4 5 max: 256 6 gpus: - type: nvidia.com/gpu 7 min: 0 8 max: 16 9 - type: amd.com/gpu min: 0 max: 4 scaleDown: 10 enabled: true 11 delayAfterAdd: 10m 12 delayAfterDelete: 5m 13 delayAfterFailure: 30s 14 unneededTime: 5m 15 utilizationThreshold: "0.4" 16 1 Specify the priority that a pod must exceed to cause the cluster autoscaler to deploy additional nodes. Enter a 32-bit integer value. The podPriorityThreshold value is compared to the value of the PriorityClass that you assign to each pod. 2 Specify the maximum number of nodes to deploy. 
This value is the total number of machines that are deployed in your cluster, not just the ones that the autoscaler controls. Ensure that this value is large enough to account for all of your control plane and compute machines and the total number of replicas that you specify in your MachineAutoscaler resources. 3 Specify the minimum number of cores to deploy in the cluster. 4 Specify the maximum number of cores to deploy in the cluster. 5 Specify the minimum amount of memory, in GiB, in the cluster. 6 Specify the maximum amount of memory, in GiB, in the cluster. 7 Optional: Specify the type of GPU node to deploy. Only nvidia.com/gpu and amd.com/gpu are valid types. 8 Specify the minimum number of GPUs to deploy in the cluster. 9 Specify the maximum number of GPUs to deploy in the cluster. 10 In this section, you can specify the period to wait for each action by using any valid ParseDuration interval, including ns , us , ms , s , m , and h . 11 Specify whether the cluster autoscaler can remove unnecessary nodes. 12 Optional: Specify the period to wait before deleting a node after a node has recently been added . If you do not specify a value, the default value of 10m is used. 13 Optional: Specify the period to wait before deleting a node after a node has recently been deleted . If you do not specify a value, the default value of 0s is used. 14 Optional: Specify the period to wait before deleting a node after a scale down failure occurred. If you do not specify a value, the default value of 3m is used. 15 Optional: Specify the period before an unnecessary node is eligible for deletion. If you do not specify a value, the default value of 10m is used. 16 Optional: Specify the node utilization level below which an unnecessary node is eligible for deletion. The node utilization level is the sum of the requested resources divided by the allocated resources for the node, and must be a value greater than "0" but less than "1" . If you do not specify a value, the cluster autoscaler uses a default value of "0.5" , which corresponds to 50% utilization. This value must be expressed as a string. Note When performing a scaling operation, the cluster autoscaler remains within the ranges set in the ClusterAutoscaler resource definition, such as the minimum and maximum number of cores to deploy or the amount of memory in the cluster. However, the cluster autoscaler does not correct the current values in your cluster to be within those ranges. The minimum and maximum CPUs, memory, and GPU values are determined by calculating those resources on all nodes in the cluster, even if the cluster autoscaler does not manage the nodes. For example, the control plane nodes are considered in the total memory in the cluster, even though the cluster autoscaler does not manage the control plane nodes. 6.2.2. Deploying the cluster autoscaler To deploy the cluster autoscaler, you create an instance of the ClusterAutoscaler resource. Procedure Create a YAML file for the ClusterAutoscaler resource that contains the customized resource definition. Create the resource in the cluster: USD oc create -f <filename>.yaml 1 1 <filename> is the name of the resource file that you customized. 6.3. steps After you configure the cluster autoscaler, you must configure at least one machine autoscaler. 6.4. About the machine autoscaler The machine autoscaler adjusts the number of Machines in the machine sets that you deploy in an OpenShift Container Platform cluster. 
You can scale both the default worker machine set and any other machine sets that you create. The machine autoscaler makes more Machines when the cluster runs out of resources to support more deployments. Any changes to the values in MachineAutoscaler resources, such as the minimum or maximum number of instances, are immediately applied to the machine set they target. Important You must deploy a machine autoscaler for the cluster autoscaler to scale your machines. The cluster autoscaler uses the annotations on machine sets that the machine autoscaler sets to determine the resources that it can scale. If you define a cluster autoscaler without also defining machine autoscalers, the cluster autoscaler will never scale your cluster. 6.5. Configuring the machine autoscalers After you deploy the cluster autoscaler, deploy MachineAutoscaler resources that reference the machine sets that are used to scale the cluster. Important You must deploy at least one MachineAutoscaler resource after you deploy the ClusterAutoscaler resource. Note You must configure separate resources for each machine set. Remember that machine sets are different in each region, so consider whether you want to enable machine scaling in multiple regions. The machine set that you scale must have at least one machine in it. 6.5.1. MachineAutoscaler resource definition This MachineAutoscaler resource definition shows the parameters and sample values for the machine autoscaler. apiVersion: "autoscaling.openshift.io/v1beta1" kind: "MachineAutoscaler" metadata: name: "worker-us-east-1a" 1 namespace: "openshift-machine-api" spec: minReplicas: 1 2 maxReplicas: 12 3 scaleTargetRef: 4 apiVersion: machine.openshift.io/v1beta1 kind: MachineSet 5 name: worker-us-east-1a 6 1 Specify the machine autoscaler name. To make it easier to identify which machine set this machine autoscaler scales, specify or include the name of the machine set to scale. The machine set name takes the following form: <clusterid>-<machineset>-<region> . 2 Specify the minimum number machines of the specified type that must remain in the specified zone after the cluster autoscaler initiates cluster scaling. If running in AWS, GCP, Azure, RHOSP, or vSphere, this value can be set to 0 . For other providers, do not set this value to 0 . You can save on costs by setting this value to 0 for use cases such as running expensive or limited-usage hardware that is used for specialized workloads, or by scaling a machine set with extra large machines. The cluster autoscaler scales the machine set down to zero if the machines are not in use. Important Do not set the spec.minReplicas value to 0 for the three compute machine sets that are created during the OpenShift Container Platform installation process for an installer provisioned infrastructure. 3 Specify the maximum number machines of the specified type that the cluster autoscaler can deploy in the specified zone after it initiates cluster scaling. Ensure that the maxNodesTotal value in the ClusterAutoscaler resource definition is large enough to allow the machine autoscaler to deploy this number of machines. 4 In this section, provide values that describe the existing machine set to scale. 5 The kind parameter value is always MachineSet . 6 The name value must match the name of an existing machine set, as shown in the metadata.name parameter value. 6.5.2. Deploying the machine autoscaler To deploy the machine autoscaler, you create an instance of the MachineAutoscaler resource. 
Procedure Create a YAML file for the MachineAutoscaler resource that contains the customized resource definition. Create the resource in the cluster: USD oc create -f <filename>.yaml 1 1 <filename> is the name of the resource file that you customized. 6.6. Additional resources For more information about pod priority, see Including pod priority in pod scheduling decisions in OpenShift Container Platform . | [
"apiVersion: \"autoscaling.openshift.io/v1\" kind: \"ClusterAutoscaler\" metadata: name: \"default\" spec: podPriorityThreshold: -10 1 resourceLimits: maxNodesTotal: 24 2 cores: min: 8 3 max: 128 4 memory: min: 4 5 max: 256 6 gpus: - type: nvidia.com/gpu 7 min: 0 8 max: 16 9 - type: amd.com/gpu min: 0 max: 4 scaleDown: 10 enabled: true 11 delayAfterAdd: 10m 12 delayAfterDelete: 5m 13 delayAfterFailure: 30s 14 unneededTime: 5m 15 utilizationThreshold: \"0.4\" 16",
"oc create -f <filename>.yaml 1",
"apiVersion: \"autoscaling.openshift.io/v1beta1\" kind: \"MachineAutoscaler\" metadata: name: \"worker-us-east-1a\" 1 namespace: \"openshift-machine-api\" spec: minReplicas: 1 2 maxReplicas: 12 3 scaleTargetRef: 4 apiVersion: machine.openshift.io/v1beta1 kind: MachineSet 5 name: worker-us-east-1a 6",
"oc create -f <filename>.yaml 1"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/machine_management/applying-autoscaling |
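After both resources are created, autoscaling can be confirmed by inspecting the autoscaler objects and watching the machine counts. The following commands are a verification sketch; the resource names follow the examples above and might differ in your cluster.
# Confirm that the cluster autoscaler and machine autoscaler resources exist.
oc get clusterautoscaler default -o yaml
oc get machineautoscaler -n openshift-machine-api
# Watch the machine sets and machines while workload is added; the replica counts should stay within the configured minReplicas and maxReplicas bounds.
oc get machinesets -n openshift-machine-api -w
oc get machines -n openshift-machine-api -w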
Chapter 23. Removing storage devices | Chapter 23. Removing storage devices You can safely remove a storage device from a running system, which helps prevent system memory overload and data loss. Do not remove a storage device on a system where: Free memory is less than 5% of the total memory in more than 10 samples per 100. Swapping is active (non-zero si and so columns in the vmstat command output). Prerequisites Before you remove a storage device, ensure that you have enough free system memory due to the increased system memory load during an I/O flush. Use the following commands to view the current memory load and free memory of the system: 23.1. Safe removal of storage devices Safely removing a storage device from a running system requires a top-to-bottom approach. Start from the top layer, which typically is an application or a file system, and work towards the bottom layer, which is the physical device. You can use storage devices in multiple ways, and they can have different virtual configurations on top of physical devices. For example, you can group multiple instances of a device into a multipath device, make it part of a RAID, or you can make it part of an LVM group. Additionally, devices can be accessed via a file system, or they can be accessed directly such as a "raw" device. While using the top-to-bottom approach, you must ensure that: the device that you want to remove is not in use all pending I/O to the device is flushed the operating system is not referencing the storage device 23.2. Removing block devices and associated metadata To safely remove a block device from a running system, to help prevent system memory overload and data loss you need to first remove metadata from them. Address each layer in the stack, starting with the file system, and proceed to the disk. These actions prevent putting your system into an inconsistent state. Use specific commands that may vary depending on what type of devices you are removing: lvremove , vgremove and pvremove are specific to LVM. For software RAID, run mdadm to remove the array. For more information, see Managing RAID . For block devices encrypted using LUKS, there are specific additional steps. The following procedure will not work for the block devices encrypted using LUKS. For more information, see Encrypting block devices using LUKS . Warning Rescanning the SCSI bus or performing any other action that changes the state of the operating system, without following the procedure documented here can cause delays due to I/O timeouts, devices to be removed unexpectedly, or data loss. Prerequisites You have an existing block device stack containing the file system, the logical volume, and the volume group. You ensured that no other applications or services are using the device that you want to remove. You backed up the data from the device that you want to remove. Optional: If you want to remove a multipath device, and you are unable to access its path devices, disable queueing of the multipath device by running the following command: This enables the I/O of the device to fail, allowing the applications that are using the device to shut down. Note Removing devices with their metadata one layer at a time ensures no stale signatures remain on the disk. Procedure Unmount the file system: Remove the file system: If you have added an entry into the /etc/fstab file to make a persistent association between the file system and a mount point, edit /etc/fstab at this point to remove that entry. 
Continue with the following steps, depending on the type of the device you want to remove: Remove the logical volume (LV) that contained the file system: If there are no other logical volumes remaining in the volume group (VG), you can safely remove the VG that contained the device: Remove the physical volume (PV) metadata from the PV device(s): Remove the partitions that contained the PVs: Remove the partition table if you want to fully wipe the device: Execute the following steps only if you want to physically remove the device: If you are removing a multipath device, execute the following commands: View all the paths to the device: The output of this command is required in a later step. Flush the I/O and remove the multipath device: If the device is not configured as a multipath device, or if the device is configured as a multipath device and you have previously passed I/O to the individual paths, flush any outstanding I/O to all device paths that are used: This is important for devices accessed directly where the umount or vgreduce commands do not flush the I/O. If you are removing a SCSI device, execute the following commands: Remove any reference to the path-based name of the device, such as /dev/sd , /dev/disk/by-path , or the major:minor number, in applications, scripts, or utilities on the system. This ensures that different devices added in the future are not mistaken for the current device. Remove each path to the device from the SCSI subsystem: Here the device-name is retrieved from the output of the multipath -l command, if the device was previously used as a multipath device. Remove the physical device from a running system. Note that the I/O to other devices does not stop when you remove this device. Verification Verify that the devices you intended to remove are not displaying on the output of lsblk command. The following is an example output: Additional resources multipath(8) , pvremove(8) , vgremove(8) , lvremove(8) , wipefs(8) , parted(8) , blockdev(8) and umount(8) man pages on your system | [
"vmstat 1 100 free",
"multipathd disablequeueing map multipath-device",
"umount /mnt/mount-point",
"wipefs -a /dev/vg0/myvol",
"lvremove vg0/myvol",
"vgremove vg0",
"pvremove /dev/sdc1",
"wipefs -a /dev/sdc1",
"parted /dev/sdc rm 1",
"wipefs -a /dev/sdc",
"multipath -l",
"multipath -f multipath-device",
"blockdev --flushbufs device",
"echo 1 > /sys/block/ device-name /device/delete",
"lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 5G 0 disk sr0 11:0 1 1024M 0 rom vda 252:0 0 10G 0 disk |-vda1 252:1 0 1M 0 part |-vda2 252:2 0 100M 0 part /boot/efi `-vda3 252:3 0 9.9G 0 part /"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_storage_devices/removing-storage-devices_managing-storage-devices |
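Putting the procedure together, a condensed top-to-bottom removal sequence for the example stack used above (file system on logical volume myvol in volume group vg0, backed by partition /dev/sdc1 on disk /dev/sdc) might look as follows. The device and volume names are illustrative only; verify each name with lsblk before running destructive commands.
# Unmount and remove the file system, then the LVM layers.
umount /mnt/mount-point
wipefs -a /dev/vg0/myvol
lvremove vg0/myvol
vgremove vg0
pvremove /dev/sdc1
# Remove the partition and, if the whole disk is being wiped, its partition table.
wipefs -a /dev/sdc1
parted /dev/sdc rm 1
wipefs -a /dev/sdc
# Only if the disk is being physically removed: flush outstanding I/O and delete the SCSI path.
blockdev --flushbufs /dev/sdc
echo 1 > /sys/block/sdc/device/delete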
4.223. perl-NetAddr-IP | 4.223. perl-NetAddr-IP 4.223.1. RHEA-2011:0873 - perl-NetAddr-IP bug fix update An updated perl-NetAddr-IP package that fixes one bug is now available for Red Hat Enterprise Linux 6. The perl-NetAddr-IP module provides an object-oriented abstraction on top of IP addresses or IP subnets, that allows for easy manipulations. Bug Fix BZ# 692857 Prior to this update, the documentation included in the perl-NetAddr-IP module did not contain a correct description with regard to the addition of a constant to an IP address. The problem has been resolved in this update by correcting the respective part of the documentation. All users of perl-NetAddr-IP are advised to upgrade to this updated package, which fixes this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/perl-netaddr-ip |
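The corrected documentation concerns adding a constant to an address object. As a quick illustration (a sketch that assumes the module is installed; the addresses are examples), the overloaded addition returns a new object offset within the same subnet:
# Add a constant to a NetAddr::IP object; prints 192.0.2.15/24.
perl -MNetAddr::IP -e 'my $ip = NetAddr::IP->new("192.0.2.10/24"); print $ip + 5, "\n";'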
function::inet_get_ip_source | function::inet_get_ip_source Name function::inet_get_ip_source - Provide IP source address string for a kernel socket Synopsis Arguments sock pointer to the kernel socket | [
"inet_get_ip_source:string(sock:long)"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-inet-get-ip-source |
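A kernel socket pointer suitable for this function is available, for example, in the return probe of the kernel's TCP accept path. The following SystemTap sketch is adapted from common network tracing examples; it assumes root privileges and kernel debuginfo, and the probed function name can vary between kernel versions.
# Print the peer IP address for each accepted TCP connection.
stap -e 'probe kernel.function("inet_csk_accept").return {
  sock = $return
  if (sock != 0)
    printf("%s accepted a connection from %s\n", execname(), inet_get_ip_source(sock))
}'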
Chapter 170. JCache Component | Chapter 170. JCache Component Available as of Camel version 2.17 The jcache component enables you to perform caching operations using JSR107/JCache as cache implementation. 170.1. URI Format jcache:cacheName[?options] 170.2. URI Options The JCache endpoint is configured using URI syntax: with the following path and query parameters: 170.2.1. Path Parameters (1 parameters): Name Description Default Type cacheName Required The name of the cache String 170.2.2. Query Parameters (22 parameters): Name Description Default Type cacheConfiguration (common) A Configuration for the Cache Configuration cacheConfigurationProperties (common) The Properties for the javax.cache.spi.CachingProvider to create the CacheManager Properties cachingProvider (common) The fully qualified class name of the javax.cache.spi.CachingProvider String configurationUri (common) An implementation specific URI for the CacheManager String managementEnabled (common) Whether management gathering is enabled false boolean readThrough (common) If read-through caching should be used false boolean statisticsEnabled (common) Whether statistics gathering is enabled false boolean storeByValue (common) If cache should use store-by-value or store-by-reference semantics true boolean writeThrough (common) If write-through caching should be used false boolean bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean filteredEvents (consumer) Events a consumer should filter. If using filteredEvents option, then eventFilters one will be ignored List oldValueRequired (consumer) if the old value is required for events false boolean synchronous (consumer) if the event listener should block the thread causing the event false boolean eventFilters (consumer) The CacheEntryEventFilter. If using eventFilters option, then filteredEvents one will be ignored List exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern action (producer) To configure using a cache operation by default. If an operation in the message header, then the operation from the header takes precedence. String cacheLoaderFactory (advanced) The CacheLoader factory Factory cacheWriterFactory (advanced) The CacheWriter factory Factory createCacheIfNotExists (advanced) Configure if a cache need to be created if it does exist or can't be pre-configured. true boolean expiryPolicyFactory (advanced) The ExpiryPolicy factory Factory lookupProviders (advanced) Configure if a camel-cache should try to find implementations of jcache api in runtimes like OSGi. false boolean 170.3. Spring Boot Auto-Configuration The component supports 6 options, which are listed below. Name Description Default Type camel.component.jcache.cache-configuration A Configuration for the Cache. The option is a javax.cache.configuration.Configuration type. 
String camel.component.jcache.cache-configuration-properties The Properties for the javax.cache.spi.CachingProvider to create the CacheManager. The option is a java.util.Properties type. String camel.component.jcache.caching-provider The fully qualified class name of the javax.cache.spi.CachingProvider String camel.component.jcache.configuration-uri An implementation specific URI for the CacheManager String camel.component.jcache.enabled Enable jcache component true Boolean camel.component.jcache.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean The JCache component supports 5 options, which are listed below. Name Description Default Type cachingProvider (common) The fully qualified class name of the javax.cache.spi.CachingProvider String cacheConfiguration (common) A Configuration for the Cache Configuration cacheConfiguration Properties (common) The Properties for the javax.cache.spi.CachingProvider to create the CacheManager Properties configurationUri (common) An implementation specific URI for the CacheManager String resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean | [
"jcache:cacheName[?options]",
"jcache:cacheName"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/jcache-component |
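As an illustration of the URI format, an endpoint that targets a cache named orders, creates it if it does not exist, and enables management and statistics gathering could be written as follows; the cache name is an example and the option names are taken from the tables above.
jcache:orders?createCacheIfNotExists=true&managementEnabled=true&statisticsEnabled=true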
8.157. powertop | 8.157. powertop 8.157.1. RHBA-2013:1575 - powertop bug fix and enhancement update Updated powertop packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. PowerTOP is a tool to detect all the software components that make a computer consume more than necessary power when idle. PowerTOP can be used to reduce power usage by running various commands on the system. Note The powertop package has been upgraded to upstream version 2.3, which provides a number of bug fixes and enhancements over the previous version, including corrected handling of arbitrary interface names. Moreover, checks for usability of the ondemand governor have been added. Also, several changes have been made to enhance user experience and to simplify and improve power management profiling capabilities. (BZ# 682378 , BZ# 697273 , BZ# 829800 ) Bug Fix BZ# 998021 The default soft limit for per-process open file descriptors is 1024, and the default hard limit for per-process file descriptors is 4096. By using the performance counter subsystem, the PowerTOP tool could exceed the file descriptor limit on complex systems. Consequently, an error message about missing kernel support for perf was displayed. This update adds a fix that temporarily increases both the soft and hard file descriptor limits for the current process to the kernel limit. If the kernel limit is still insufficient, PowerTOP now displays an error message indicating that the file descriptor limits should be manually increased. Users of powertop are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/powertop
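Related to the file descriptor fix described above, the limit can also be raised manually before starting a measurement. The following invocation is a sketch (run as root; the report options are standard in PowerTOP 2.x, and the file name is an example):
# Raise the open file descriptor limit for this shell, then collect a 60-second measurement and write an HTML report.
ulimit -n 8192
powertop --time=60 --html=powertop-report.html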
Appendix A. Terminology and commands | Appendix A. Terminology and commands Learn more about the rpm ostree terminology and commands. A.1. OSTree and rpm-ostree terminology Following are some helpful terms that are used in context to OSTree and rpm-ostree images. Table A.1. OSTree and rpm-ostree terminology Term Definition OSTree A tool used for managing Linux-based operating system versions. The OSTree tree view is similar to Git and is based on similar concepts. rpm-ostree A hybrid image or system package that hosts operating system updates. Commit A release or image version of the operating system. RHEL image builder generates an OSTree commit for RHEL for Edge images. You can use these images to install or update RHEL on Edge servers. Refs Represents a branch in OSTree. Refs always resolve to the latest commit. For example, rhel/9/x86_64/edge . Revision (Rev) SHA-256 for a specific commit. Remote The http or https endpoint that hosts the OSTree content. This is analogous to the baseurl for a dnf repository. static-delta Updates to OSTree images are always delta updates. In case of RHEL for Edge images, the TCP overhead can be higher than expected due to the updates to number of files. To avoid TCP overhead, you can generate static-delta between specific commits, and send the update in a single connection. This optimization helps large deployments with constrained connectivity. A.2. OSTree commands The following table provides a few OSTree commands that you can use when installing or managing OSTree images. Table A.2. ostree commands ostree pull ostree pull-local --repo [path] src ostree pull-local <path> <rev> --repo=<repo-path> ostree pull <URL> <rev> --repo=<repo-path> ostree summary ostree summary -u --repo=<repo-path> View refs ostree refs --repo ~/Code/src/osbuild-iot/build/repo/ --list View commits in repo ostree log --repo=/home/gicmo/Code/src/osbuild-iot/build/repo/ <REV> Inspect a commit ostree show --repo build/repo <REV> List remotes of a repo ostree remote list --repo <repo-path> Resolve a REV ostree rev-parse --repo ~/Code/src/osbuild-iot/build/repo fedora/x86_64/osbuild-demo ostree rev-parse --repo ~/Code/src/osbuild-iot/build/repo b3a008eceeddd0cfd Create static-delta ostree static-delta generate --repo=[path] --from=REV --to=REV Sign an existing ostree commit with a GPG key ostree gpg-sign --repo=<repo-path> --gpg-homedir <gpg_home> COMMIT KEY-ID... A.3. rpm-ostree commands The following table provides a few rpm-ostree commands that you can use when installing or managing OSTree images. Table A.3. rpm-ostree commands Commands Description rpm-ostree --repo=/home/gicmo/Code/src/osbuild-iot/build/repo/ db list <REV> This command lists the packages existing in the <REV> commit into the repository. rpm-ostree rollback OSTree manages an ordered list of boot loader entries, called deployments . The entry at index 0 is the default boot loader entry. Each entry has a separate /etc directory, but all the entries share a single /var directory. You can use the boot loader to choose between entries by pressing Tab to interrupt startup. This rolls back to the state, that is, the default deployment changes places with the non-default one. rpm-ostree status This command gives information about the current deployment in use. Lists the names and refspecs of all possible deployments in order, such that the first deployment in the list is the default upon boot. The deployment marked with * is the current booted deployment, and marking with 'r' indicates the most recent upgrade. 
rpm-ostree db list Use this command to see which packages are within the commit or commits. You must specify at least one commit, but more than one or a range of commits also work. rpm-ostree db diff Use this command to show how the packages are different between the trees in two revs (revisions). If no revs are provided, the booted commit is compared to the pending commit. If only a single rev is provided, the booted commit is compared to that rev. rpm-ostree upgrade This command downloads the latest version of the current tree, and deploys it, setting up the current tree as the default for the boot. This has no effect on your running filesystem tree. You must reboot for any changes to take effect. Additional resources rpm-ostree man page on your system A.4. FDO automatic onboarding terminology Learn more about the FDO terminology. Table A.4. FDO terminology Commands Description FDO FIDO Device Onboarding. Device Any hardware, device, or computer. Owner The final owner of the device - a company or an IT department. Manufacturer The device manufacturer. Manufacturer server Creates the device credentials for the device. Manufacturer client Informs the location of the manufacturing server. Ownership Voucher (OV) Record of ownership of an individual device. Contains the following information: * Owner ( fdo-owner-onboarding-service ) * Rendezvous Server - FIDO server ( fdo-rendezvous-server ) * Device (at least one combination) ( fdo-manufacturing-service ) Device Credential (DC) Key credential and rendezvous stored in the device at manufacture. Keys Keys to configure the manufacturing server * key_path * cert_path * key_type * mfg_string_type: device serial number * allowed_key_storage_types: Filesystem and Trusted Platform Module (TPM) that protects the data used to authenticate the device you are using. Rendezvous server Link to a server used by the device and later on, used on the process to find out who is the owner of the device Additional resources FIDO IoT spec A.5. FDO automatic onboarding technologies Following are the technologies used in context to FDO automatic onboarding. Table A.5. OSTree and rpm-ostree terminology Technology Definition UEFI Unified Extensible Firmware Interface. RHEL Red Hat(R) Enterprise Linux(R) operating system rpm-ostree Background image-based upgrades. Greenboot Healthcheck framework for systemd on rpm-ostree . Osbuild Pipeline-based build system for operating system artifacts. Container A Linux(R) container is a set of 1 or more processes that are isolated from the rest of the system. Coreos-installer Assists installation of RHEL images, boots systems with UEFI. FIDO FDO Specification protocol to provision configuration and onboarding devices. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/composing_installing_and_managing_rhel_for_edge_images/edge-terminology-and-commands_composing-installing-managing-rhel-for-edge-images |
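A short inspection sequence that ties several of these commands together on a RHEL for Edge system is shown below; the repository path and ref are examples.
# Show the booted and pending deployments on the device.
rpm-ostree status
# List the refs and commit history in a local OSTree repository.
ostree refs --repo=/path/to/repo --list
ostree log --repo=/path/to/repo rhel/9/x86_64/edge
# Compare the package set of the booted commit with the pending one.
rpm-ostree db diff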
14.8.9. smbclient | 14.8.9. smbclient smbclient <//server/share> <password> <options> The smbclient program is a versatile UNIX client which provides functionality similar to ftp . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-samba-programs-smbclient |
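For example, a non-interactive listing of a share might look like the following; the server, share, and user names are placeholders, and smbclient prompts for the password.
# Connect to //server/share and list the top-level directory, then exit.
smbclient //server/share -U <user_name> -c 'ls'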
Chapter 6. Configuring action log storage for Elasticsearch and Splunk | Chapter 6. Configuring action log storage for Elasticsearch and Splunk By default, the three months of usage logs are stored in the Red Hat Quay database and exposed through the web UI on organization and repository levels. Appropriate administrative privileges are required to see log entries. For deployments with a large amount of logged operations, you can store the usage logs in Elasticsearch and Splunk instead of the Red Hat Quay database backend. 6.1. Configuring action log storage for Elasticsearch Note To configure action log storage for Elasticsearch, you must provide your own Elasticsearch stack, as it is not included with Red Hat Quay as a customizable component. Enabling Elasticsearch logging can be done during Red Hat Quay deployment or post-deployment using the configuration tool. The resulting configuration is stored in the config.yaml file. When configured, usage log access continues to be provided through the web UI for repositories and organizations. Use the following procedure to configure action log storage for Elasticsearch: Procedure Obtain an Elasticsearch account. Open the Red Hat Quay Config Tool (either during or after Red Hat Quay deployment). Scroll to the Action Log Storage Configuration setting and select Elasticsearch . The following figure shows the Elasticsearch settings that appear: Fill in the following information for your Elasticsearch instance: Elasticsearch hostname : The hostname or IP address of the system providing the Elasticsearch service. Elasticsearch port : The port number providing the Elasticsearch service on the host you just entered. Note that the port must be accessible from all systems running the Red Hat Quay registry. The default is TCP port 9200. Elasticsearch access key : The access key needed to gain access to the Elastic search service, if required. Elasticsearch secret key : The secret key needed to gain access to the Elastic search service, if required. AWS region : If you are running on AWS, set the AWS region (otherwise, leave it blank). Index prefix : Choose a prefix to attach to log entries. Logs Producer : Choose either Elasticsearch (default) or Kinesis to direct logs to an intermediate Kinesis stream on AWS. You need to set up your own pipeline to send logs from Kinesis to Elasticsearch (for example, Logstash). The following figure shows additional fields you would need to fill in for Kinesis: If you chose Elasticsearch as the Logs Producer, no further configuration is needed. If you chose Kinesis, fill in the following: Stream name : The name of the Kinesis stream. AWS access key : The name of the AWS access key needed to gain access to the Kinesis stream, if required. AWS secret key : The name of the AWS secret key needed to gain access to the Kinesis stream, if required. AWS region : The AWS region. When you are done, save the configuration. The configuration tool checks your settings. If there is a problem connecting to the Elasticsearch or Kinesis services, you will see an error and have the opportunity to continue editing. Otherwise, logging will begin to be directed to your Elasticsearch configuration after the cluster restarts with the new configuration. 6.2. Configuring action log storage for Splunk Splunk is an alternative to Elasticsearch that can provide log analyses for your Red Hat Quay data. Enabling Splunk logging can be done during Red Hat Quay deployment or post-deployment using the configuration tool. 
The resulting configuration is stored in the config.yaml file. When configured, usage log access continues to be provided through the Splunk web UI for repositories and organizations. Use the following procedures to enable Splunk for your Red Hat Quay deployment. 6.2.1. Installing and creating a username for Splunk Use the following procedure to install and create Splunk credentials. Procedure Create a Splunk account by navigating to Splunk and entering the required credentials. Navigate to the Splunk Enterprise Free Trial page, select your platform and installation package, and then click Download Now . Install the Splunk software on your machine. When prompted, create a username, for example, splunk_admin and password. After creating a username and password, a localhost URL will be provided for your Splunk deployment, for example, http://<sample_url>.remote.csb:8000/ . Open the URL in your preferred browser. Log in with the username and password you created during installation. You are directed to the Splunk UI. 6.2.2. Generating a Splunk token Use one of the following procedures to create a bearer token for Splunk. 6.2.2.1. Generating a Splunk token using the Splunk UI Use the following procedure to create a bearer token for Splunk using the Splunk UI. Prerequisites You have installed Splunk and created a username. Procedure On the Splunk UI, navigate to Settings Tokens . Click Enable Token Authentication . Ensure that Token Authentication is enabled by clicking Token Settings and selecting Token Authentication if necessary. Optional: Set the expiration time for your token. This defaults at 30 days. Click Save . Click New Token . Enter information for User and Audience . Optional: Set the Expiration and Not Before information. Click Create . Your token appears in the Token box. Copy the token immediately. Important If you close out of the box before copying the token, you must create a new token. The token in its entirety is not available after closing the New Token window. 6.2.2.2. Generating a Splunk token using the CLI Use the following procedure to create a bearer token for Splunk using the CLI. Prerequisites You have installed Splunk and created a username. Procedure In your CLI, enter the following CURL command to enable token authentication, passing in your Splunk username and password: USD curl -k -u <username>:<password> -X POST <scheme>://<host>:<port>/services/admin/token-auth/tokens_auth -d disabled=false Create a token by entering the following CURL command, passing in your Splunk username and password. USD curl -k -u <username>:<password> -X POST <scheme>://<host>:<port>/services/authorization/tokens?output_mode=json --data name=<username> --data audience=Users --data-urlencode expires_on=+30d Save the generated bearer token. 6.2.3. Configuring Red Hat Quay to use Splunk Use the following procedure to configure Red Hat Quay to use Splunk. Prerequisites You have installed Splunk and created a username. You have generated a Splunk bearer token. Procedure Open your Red Hat Quay config.yaml file and add the following configuration fields: --- LOGS_MODEL: splunk LOGS_MODEL_CONFIG: producer: splunk splunk_config: host: http://<user_name>.remote.csb 1 port: 8089 2 bearer_token: <bearer_token> 3 url_scheme: <http/https> 4 verify_ssl: False 5 index_prefix: <splunk_log_index_name> 6 ssl_ca_path: <location_to_ssl-ca-cert.pem> 7 --- 1 String. The Splunk cluster endpoint. 2 Integer. The Splunk management cluster endpoint port. Differs from the Splunk GUI hosted port. 
Can be found on the Splunk UI under Settings Server Settings General Settings . 3 String. The generated bearer token for Splunk. 4 String. The URL scheme for access the Splunk service. If Splunk is configured to use TLS/SSL, this must be https . 5 Boolean. Whether to enable TLS/SSL. Defaults to true . 6 String. The Splunk index prefix. Can be a new, or used, index. Can be created from the Splunk UI. 7 String. The relative container path to a single .pem file containing a certificate authority (CA) for TLS/SSL validation. If you are configuring ssl_ca_path , you must configure the SSL/TLS certificate so that Red Hat Quay will trust it. If you are using a standalone deployment of Red Hat Quay, SSL/TLS certificates can be provided by placing the certificate file inside of the extra_ca_certs directory, or inside of the relative container path and specified by ssl_ca_path . If you are using the Red Hat Quay Operator, create a config bundle secret, including the certificate authority (CA) of the Splunk server. For example: USD oc create secret generic --from-file config.yaml=./config_390.yaml --from-file extra_ca_cert_splunkserver.crt=./splunkserver.crt config-bundle-secret Specify the conf/stack/extra_ca_certs/splunkserver.crt file in your config.yaml . For example: LOGS_MODEL: splunk LOGS_MODEL_CONFIG: producer: splunk splunk_config: host: ec2-12-345-67-891.us-east-2.compute.amazonaws.com port: 8089 bearer_token: eyJra url_scheme: https verify_ssl: true index_prefix: quay123456 ssl_ca_path: conf/stack/splunkserver.crt 6.2.4. Creating an action log Use the following procedure to create a user account that can forward action logs to Splunk. Important You must use the Splunk UI to view Red Hat Quay action logs. At this time, viewing Splunk action logs on the Red Hat Quay Usage Logs page is unsupported, and returns the following message: Method not implemented. Splunk does not support log lookups . Prerequisites You have installed Splunk and created a username. You have generated a Splunk bearer token. You have configured your Red Hat Quay config.yaml file to enable Splunk. Procedure Log in to your Red Hat Quay deployment. Click on the name of the organization that you will use to create an action log for Splunk. In the navigation pane, click Robot Accounts Create Robot Account . When prompted, enter a name for the robot account, for example spunkrobotaccount , then click Create robot account . On your browser, open the Splunk UI. Click Search and Reporting . In the search bar, enter the name of your index, for example, <splunk_log_index_name> and press Enter . The search results populate on the Splunk UI, showing information like host , sourcetype , etc. By clicking the > arrow, you can see metadata for the logs, such as the ip , JSON metadata, and account name. | [
"curl -k -u <username>:<password> -X POST <scheme>://<host>:<port>/services/admin/token-auth/tokens_auth -d disabled=false",
"curl -k -u <username>:<password> -X POST <scheme>://<host>:<port>/services/authorization/tokens?output_mode=json --data name=<username> --data audience=Users --data-urlencode expires_on=+30d",
"--- LOGS_MODEL: splunk LOGS_MODEL_CONFIG: producer: splunk splunk_config: host: http://<user_name>.remote.csb 1 port: 8089 2 bearer_token: <bearer_token> 3 url_scheme: <http/https> 4 verify_ssl: False 5 index_prefix: <splunk_log_index_name> 6 ssl_ca_path: <location_to_ssl-ca-cert.pem> 7 ---",
"oc create secret generic --from-file config.yaml=./config_390.yaml --from-file extra_ca_cert_splunkserver.crt=./splunkserver.crt config-bundle-secret",
"LOGS_MODEL: splunk LOGS_MODEL_CONFIG: producer: splunk splunk_config: host: ec2-12-345-67-891.us-east-2.compute.amazonaws.com port: 8089 bearer_token: eyJra url_scheme: https verify_ssl: true index_prefix: quay123456 ssl_ca_path: conf/stack/splunkserver.crt"
] | https://docs.redhat.com/en/documentation/red_hat_quay/3.9/html/manage_red_hat_quay/proc_manage-log-storage |
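As a quick check that the bearer token generated above is accepted, you can query the Splunk management port directly before wiring it into Red Hat Quay. This is a sketch only: the host and token are placeholders, and it assumes token authentication has been enabled as described in the procedure.

curl -k -H "Authorization: Bearer <bearer_token>" "https://<splunk_host>:8089/services/server/info?output_mode=json"

A JSON response containing server details indicates that the management endpoint on port 8089 and the token are both usable, which matches the port and bearer_token fields expected in config.yaml.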
Chapter 3. Setting up the environment for an OpenShift installation | Chapter 3. Setting up the environment for an OpenShift installation 3.1. Installing RHEL on the provisioner node With the configuration of the prerequisites complete, the step is to install RHEL 9.x on the provisioner node. The installer uses the provisioner node as the orchestrator while installing the OpenShift Container Platform cluster. For the purposes of this document, installing RHEL on the provisioner node is out of scope. However, options include but are not limited to using a RHEL Satellite server, PXE, or installation media. 3.2. Preparing the provisioner node for OpenShift Container Platform installation Perform the following steps to prepare the environment. Procedure Log in to the provisioner node via ssh . Create a non-root user ( kni ) and provide that user with sudo privileges: # useradd kni # passwd kni # echo "kni ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/kni # chmod 0440 /etc/sudoers.d/kni Create an ssh key for the new user: # su - kni -c "ssh-keygen -t ed25519 -f /home/kni/.ssh/id_rsa -N ''" Log in as the new user on the provisioner node: # su - kni Use Red Hat Subscription Manager to register the provisioner node: USD sudo subscription-manager register --username=<user> --password=<pass> --auto-attach USD sudo subscription-manager repos --enable=rhel-9-for-<architecture>-appstream-rpms --enable=rhel-9-for-<architecture>-baseos-rpms Note For more information about Red Hat Subscription Manager, see Using and Configuring Red Hat Subscription Manager . Install the following packages: USD sudo dnf install -y libvirt qemu-kvm mkisofs python3-devel jq ipmitool Modify the user to add the libvirt group to the newly created user: USD sudo usermod --append --groups libvirt <user> Restart firewalld and enable the http service: USD sudo systemctl start firewalld USD sudo firewall-cmd --zone=public --add-service=http --permanent USD sudo firewall-cmd --reload Start and enable the libvirtd service: USD sudo systemctl enable libvirtd --now Create the default storage pool and start it: USD sudo virsh pool-define-as --name default --type dir --target /var/lib/libvirt/images USD sudo virsh pool-start default USD sudo virsh pool-autostart default Create a pull-secret.txt file: USD vim pull-secret.txt In a web browser, navigate to Install OpenShift on Bare Metal with installer-provisioned infrastructure . Click Copy pull secret . Paste the contents into the pull-secret.txt file and save the contents in the kni user's home directory. 3.3. Checking NTP server synchronization The OpenShift Container Platform installation program installs the chrony Network Time Protocol (NTP) service on the cluster nodes. To complete installation, each node must have access to an NTP time server. You can verify NTP server synchronization by using the chrony service. For disconnected clusters, you must configure the NTP servers on the control plane nodes. For more information see the Additional resources section. Prerequisites You installed the chrony package on the target node. Procedure Log in to the node by using the ssh command. 
View the NTP servers available to the node by running the following command: USD chronyc sources Example output MS Name/IP address Stratum Poll Reach LastRx Last sample =============================================================================== ^+ time.cloudflare.com 3 10 377 187 -209us[ -209us] +/- 32ms ^+ t1.time.ir2.yahoo.com 2 10 377 185 -4382us[-4382us] +/- 23ms ^+ time.cloudflare.com 3 10 377 198 -996us[-1220us] +/- 33ms ^* brenbox.westnet.ie 1 10 377 193 -9538us[-9761us] +/- 24ms Use the ping command to ensure that the node can access an NTP server, for example: USD ping time.cloudflare.com Example output PING time.cloudflare.com (162.159.200.123) 56(84) bytes of data. 64 bytes from time.cloudflare.com (162.159.200.123): icmp_seq=1 ttl=54 time=32.3 ms 64 bytes from time.cloudflare.com (162.159.200.123): icmp_seq=2 ttl=54 time=30.9 ms 64 bytes from time.cloudflare.com (162.159.200.123): icmp_seq=3 ttl=54 time=36.7 ms ... Additional resources Optional: Configuring NTP for disconnected clusters Network Time Protocol (NTP) 3.4. Configuring networking Before installation, you must configure the networking on the provisioner node. Installer-provisioned clusters deploy with a bare-metal bridge and network, and an optional provisioning bridge and network. Note You can also configure networking from the web console. Procedure Export the bare-metal network NIC name by running the following command: USD export PUB_CONN=<baremetal_nic_name> Configure the bare-metal network: Note The SSH connection might disconnect after executing these steps. For a network using DHCP, run the following command: USD sudo nohup bash -c " nmcli con down \"USDPUB_CONN\" nmcli con delete \"USDPUB_CONN\" # RHEL 8.1 appends the word \"System\" in front of the connection, delete in case it exists nmcli con down \"System USDPUB_CONN\" nmcli con delete \"System USDPUB_CONN\" nmcli connection add ifname baremetal type bridge <con_name> baremetal bridge.stp no 1 nmcli con add type bridge-slave ifname \"USDPUB_CONN\" master baremetal pkill dhclient;dhclient baremetal " 1 Replace <con_name> with the connection name. For a network using static IP addressing and no DHCP network, run the following command: USD sudo nohup bash -c " nmcli con down \"USDPUB_CONN\" nmcli con delete \"USDPUB_CONN\" # RHEL 8.1 appends the word \"System\" in front of the connection, delete in case it exists nmcli con down \"System USDPUB_CONN\" nmcli con delete \"System USDPUB_CONN\" nmcli connection add ifname baremetal type bridge con-name baremetal bridge.stp no ipv4.method manual ipv4.addr "x.x.x.x/yy" ipv4.gateway "a.a.a.a" ipv4.dns "b.b.b.b" 1 nmcli con add type bridge-slave ifname \"USDPUB_CONN\" master baremetal nmcli con up baremetal " 1 Replace <con_name> with the connection name. Replace x.x.x.x/yy with the IP address and CIDR for the network. Replace a.a.a.a with the network gateway. Replace b.b.b.b with the IP address of the DNS server. 
Optional: If you are deploying with a provisioning network, export the provisioning network NIC name by running the following command: USD export PROV_CONN=<prov_nic_name> Optional: If you are deploying with a provisioning network, configure the provisioning network by running the following command: USD sudo nohup bash -c " nmcli con down \"USDPROV_CONN\" nmcli con delete \"USDPROV_CONN\" nmcli connection add ifname provisioning type bridge con-name provisioning nmcli con add type bridge-slave ifname \"USDPROV_CONN\" master provisioning nmcli connection modify provisioning ipv6.addresses fd00:1101::1/64 ipv6.method manual nmcli con down provisioning nmcli con up provisioning " Note The SSH connection might disconnect after executing these steps. The IPv6 address can be any address that is not routable through the bare-metal network. Ensure that UEFI is enabled and UEFI PXE settings are set to the IPv6 protocol when using IPv6 addressing. Optional: If you are deploying with a provisioning network, configure the IPv4 address on the provisioning network connection by running the following command: USD nmcli connection modify provisioning ipv4.addresses 172.22.0.254/24 ipv4.method manual SSH back into the provisioner node (if required) by running the following command: # ssh kni@provisioner.<cluster-name>.<domain> Verify that the connection bridges have been properly created by running the following command: USD sudo nmcli con show Example output NAME UUID TYPE DEVICE baremetal 4d5133a5-8351-4bb9-bfd4-3af264801530 bridge baremetal provisioning 43942805-017f-4d7d-a2c2-7cb3324482ed bridge provisioning virbr0 d9bca40f-eee1-410b-8879-a2d4bb0465e7 bridge virbr0 bridge-slave-eno1 76a8ed50-c7e5-4999-b4f6-6d9014dd0812 ethernet eno1 bridge-slave-eno2 f31c3353-54b7-48de-893a-02d2b34c4736 ethernet eno2 3.5. Creating a manifest object that includes a customized br-ex bridge As an alternative to using the configure-ovs.sh shell script to set a br-ex bridge on a bare-metal platform, you can create a MachineConfig object that includes an NMState configuration file. The NMState configuration file creates a customized br-ex bridge network configuration on each node in your cluster. Consider the following use cases for creating a manifest object that includes a customized br-ex bridge: You want to make postinstallation changes to the bridge, such as changing the Open vSwitch (OVS) or OVN-Kubernetes br-ex bridge network. The configure-ovs.sh shell script does not support making postinstallation changes to the bridge. You want to deploy the bridge on a different interface than the interface available on a host or server IP address. You want to make advanced configurations to the bridge that are not possible with the configure-ovs.sh shell script. Using the script for these configurations might result in the bridge failing to connect multiple network interfaces and facilitating data forwarding between the interfaces. Note If you require an environment with a single network interface controller (NIC) and default network settings, use the configure-ovs.sh shell script. After you install Red Hat Enterprise Linux CoreOS (RHCOS) and the system reboots, the Machine Config Operator injects Ignition configuration files into each node in your cluster, so that each node received the br-ex bridge network configuration. To prevent configuration conflicts, the configure-ovs.sh shell script receives a signal to not configure the br-ex bridge. 
Prerequisites Optional: You have installed the nmstate API so that you can validate the NMState configuration. Procedure Create a NMState configuration file that has decoded base64 information for your customized br-ex bridge network: Example of an NMState configuration for a customized br-ex bridge network interfaces: - name: enp2s0 1 type: ethernet 2 state: up 3 ipv4: enabled: false 4 ipv6: enabled: false - name: br-ex type: ovs-bridge state: up ipv4: enabled: false dhcp: false ipv6: enabled: false dhcp: false bridge: port: - name: enp2s0 5 - name: br-ex - name: br-ex type: ovs-interface state: up copy-mac-from: enp2s0 ipv4: enabled: true dhcp: true ipv6: enabled: false dhcp: false # ... 1 Name of the interface. 2 The type of ethernet. 3 The requested state for the interface after creation. 4 Disables IPv4 and IPv6 in this example. 5 The node NIC to which the bridge attaches. Use the cat command to base64-encode the contents of the NMState configuration: USD cat <nmstate_configuration>.yaml | base64 1 1 Replace <nmstate_configuration> with the name of your NMState resource YAML file. Create a MachineConfig manifest file and define a customized br-ex bridge network configuration analogous to the following example: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 10-br-ex-worker 2 spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,<base64_encoded_nmstate_configuration> 3 mode: 0644 overwrite: true path: /etc/nmstate/openshift/cluster.yml # ... 1 For each node in your cluster, specify the hostname path to your node and the base-64 encoded Ignition configuration file data for the machine type. If you have a single global configuration specified in an /etc/nmstate/openshift/cluster.yml configuration file that you want to apply to all nodes in your cluster, you do not need to specify the hostname path for each node. The worker role is the default role for nodes in your cluster. The .yaml extension does not work when specifying the hostname path for each node or all nodes in the MachineConfig manifest file. 2 The name of the policy. 3 Writes the encoded base64 information to the specified path. 3.5.1. Optional: Scaling each machine set to compute nodes To apply a customized br-ex bridge configuration to all compute nodes in your OpenShift Container Platform cluster, you must edit your MachineConfig custom resource (CR) and modify its roles. Additionally, you must create a BareMetalHost CR that defines information for your bare-metal machine, such as hostname, credentials, and so on. After you configure these resources, you must scale machine sets, so that the machine sets can apply the resource configuration to each compute node and reboot the nodes. Prerequisites You created a MachineConfig manifest object that includes a customized br-ex bridge configuration. Procedure Edit the MachineConfig CR by entering the following command: USD oc edit mc <machineconfig_custom_resource_name> Add each compute node configuration to the CR, so that the CR can manage roles for each defined compute node in your cluster. Create a Secret object named extraworker-secret that has a minimal static IP configuration. Apply the extraworker-secret secret to each node in your cluster by entering the following command. This step provides each compute node access to the Ignition config file. 
USD oc apply -f ./extraworker-secret.yaml Create a BareMetalHost resource and specify the network secret in the preprovisioningNetworkDataName parameter: Example BareMetalHost resource with an attached network secret apiVersion: metal3.io/v1alpha1 kind: BareMetalHost spec: # ... preprovisioningNetworkDataName: ostest-extraworker-0-network-config-secret # ... To manage the BareMetalHost object within the openshift-machine-api namespace of your cluster, change to the namespace by entering the following command: USD oc project openshift-machine-api Get the machine sets: USD oc get machinesets Scale each machine set by entering the following command. You must run this command for each machine set. USD oc scale machineset <machineset_name> --replicas=<n> 1 1 Where <machineset_name> is the name of the machine set and <n> is the number of compute nodes. 3.6. Establishing communication between subnets In a typical OpenShift Container Platform cluster setup, all nodes, including the control plane and compute nodes, reside in the same network. However, for edge computing scenarios, it can be beneficial to locate compute nodes closer to the edge. This often involves using different network segments or subnets for the remote nodes than the subnet used by the control plane and local compute nodes. Such a setup can reduce latency for the edge and allow for enhanced scalability. Before installing OpenShift Container Platform, you must configure the network properly to ensure that the edge subnets containing the remote nodes can reach the subnet containing the control plane nodes and receive traffic from the control plane too. You can run control plane nodes in the same subnet or multiple subnets by configuring a user-managed load balancer in place of the default load balancer. With a multiple subnet environment, you can reduce the risk of your OpenShift Container Platform cluster from failing because of a hardware failure or a network outage. For more information, see "Services for a user-managed load balancer" and "Configuring a user-managed load balancer". Running control plane nodes in a multiple subnet environment requires completion of the following key tasks: Configuring a user-managed load balancer instead of the default load balancer by specifying UserManaged in the loadBalancer.type parameter of the install-config.yaml file. Configuring a user-managed load balancer address in the ingressVIPs and apiVIPs parameters of the install-config.yaml file. Adding the multiple subnet Classless Inter-Domain Routing (CIDR) and the user-managed load balancer IP addresses to the networking.machineNetworks parameter in the install-config.yaml file. Note Deploying a cluster with multiple subnets requires using virtual media, such as redfish-virtualmedia and idrac-virtualmedia . This procedure details the network configuration required to allow the remote compute nodes in the second subnet to communicate effectively with the control plane nodes in the first subnet and to allow the control plane nodes in the first subnet to communicate effectively with the remote compute nodes in the second subnet. In this procedure, the cluster spans two subnets: The first subnet ( 10.0.0.0 ) contains the control plane and local compute nodes. The second subnet ( 192.168.0.0 ) contains the edge compute nodes. 
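The key tasks listed above translate into an install-config.yaml fragment along the following lines. This is a sketch: the CIDRs match the two example subnets in this section, and <api_ip> and <ingress_ip> stand in for the user-managed load balancer addresses described later in this chapter.

networking:
  machineNetwork:
  - cidr: 10.0.0.0/24     # subnet containing the control plane and local compute nodes
  - cidr: 192.168.0.0/24  # subnet containing the edge compute nodes
platform:
  baremetal:
    loadBalancer:
      type: UserManaged
    apiVIPs:
    - <api_ip>
    ingressVIPs:
    - <ingress_ip>

Listing both subnets under machineNetwork corresponds to the multiple subnet CIDR task above, and the UserManaged load balancer settings are covered in detail in "Configuring a user-managed load balancer".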
Procedure Configure the first subnet to communicate with the second subnet: Log in as root to a control plane node by running the following command: USD sudo su - Get the name of the network interface by running the following command: # nmcli dev status Add a route to the second subnet ( 192.168.0.0 ) via the gateway by running the following command: # nmcli connection modify <interface_name> +ipv4.routes "192.168.0.0/24 via <gateway>" Replace <interface_name> with the interface name. Replace <gateway> with the IP address of the actual gateway. Example # nmcli connection modify eth0 +ipv4.routes "192.168.0.0/24 via 192.168.0.1" Apply the changes by running the following command: # nmcli connection up <interface_name> Replace <interface_name> with the interface name. Verify the routing table to ensure the route has been added successfully: # ip route Repeat the steps for each control plane node in the first subnet. Note Adjust the commands to match your actual interface names and gateway. Configure the second subnet to communicate with the first subnet: Log in as root to a remote compute node by running the following command: USD sudo su - Get the name of the network interface by running the following command: # nmcli dev status Add a route to the first subnet ( 10.0.0.0 ) via the gateway by running the following command: # nmcli connection modify <interface_name> +ipv4.routes "10.0.0.0/24 via <gateway>" Replace <interface_name> with the interface name. Replace <gateway> with the IP address of the actual gateway. Example # nmcli connection modify eth0 +ipv4.routes "10.0.0.0/24 via 10.0.0.1" Apply the changes by running the following command: # nmcli connection up <interface_name> Replace <interface_name> with the interface name. Verify the routing table to ensure the route has been added successfully by running the following command: # ip route Repeat the steps for each compute node in the second subnet. Note Adjust the commands to match your actual interface names and gateway. After you have configured the networks, test the connectivity to ensure the remote nodes can reach the control plane nodes and the control plane nodes can reach the remote nodes. From the control plane nodes in the first subnet, ping a remote node in the second subnet by running the following command: USD ping <remote_node_ip_address> If the ping is successful, it means the control plane nodes in the first subnet can reach the remote nodes in the second subnet. If you do not receive a response, review the network configurations and repeat the procedure for the node. From the remote nodes in the second subnet, ping a control plane node in the first subnet by running the following command: USD ping <control_plane_node_ip_address> If the ping is successful, it means the remote compute nodes in the second subnet can reach the control plane in the first subnet. If you do not receive a response, review the network configurations and repeat the procedure for the node. 3.7. Retrieving the OpenShift Container Platform installer Use the stable-4.x version of the installation program and your selected architecture to deploy the generally available stable version of OpenShift Container Platform: USD export VERSION=stable-4.16 USD export RELEASE_ARCH=<architecture> USD export RELEASE_IMAGE=USD(curl -s https://mirror.openshift.com/pub/openshift-v4/USDRELEASE_ARCH/clients/ocp/USDVERSION/release.txt | grep 'Pull From: quay.io' | awk -F ' ' '{print USD3}') 3.8. 
Extracting the OpenShift Container Platform installer After retrieving the installer, the step is to extract it. Procedure Set the environment variables: USD export cmd=openshift-baremetal-install USD export pullsecret_file=~/pull-secret.txt USD export extract_dir=USD(pwd) Get the oc binary: USD curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDVERSION/openshift-client-linux.tar.gz | tar zxvf - oc Extract the installer: USD sudo cp oc /usr/local/bin USD oc adm release extract --registry-config "USD{pullsecret_file}" --command=USDcmd --to "USD{extract_dir}" USD{RELEASE_IMAGE} USD sudo cp openshift-baremetal-install /usr/local/bin 3.9. Optional: Creating an RHCOS images cache To employ image caching, you must download the Red Hat Enterprise Linux CoreOS (RHCOS) image used by the bootstrap VM to provision the cluster nodes. Image caching is optional, but it is especially useful when running the installation program on a network with limited bandwidth. Note The installation program no longer needs the clusterOSImage RHCOS image because the correct image is in the release payload. If you are running the installation program on a network with limited bandwidth and the RHCOS images download takes more than 15 to 20 minutes, the installation program will timeout. Caching images on a web server will help in such scenarios. Warning If you enable TLS for the HTTPD server, you must confirm the root certificate is signed by an authority trusted by the client and verify the trusted certificate chain between your OpenShift Container Platform hub and spoke clusters and the HTTPD server. Using a server configured with an untrusted certificate prevents the images from being downloaded to the image creation service. Using untrusted HTTPS servers is not supported. Install a container that contains the images. Procedure Install podman : USD sudo dnf install -y podman Open firewall port 8080 to be used for RHCOS image caching: USD sudo firewall-cmd --add-port=8080/tcp --zone=public --permanent USD sudo firewall-cmd --reload Create a directory to store the bootstraposimage : USD mkdir /home/kni/rhcos_image_cache Set the appropriate SELinux context for the newly created directory: USD sudo semanage fcontext -a -t httpd_sys_content_t "/home/kni/rhcos_image_cache(/.*)?" 
USD sudo restorecon -Rv /home/kni/rhcos_image_cache/ Get the URI for the RHCOS image that the installation program will deploy on the bootstrap VM: USD export RHCOS_QEMU_URI=USD(/usr/local/bin/openshift-baremetal-install coreos print-stream-json | jq -r --arg ARCH "USD(arch)" '.architectures[USDARCH].artifacts.qemu.formats["qcow2.gz"].disk.location') Get the name of the image that the installation program will deploy on the bootstrap VM: USD export RHCOS_QEMU_NAME=USD{RHCOS_QEMU_URI##*/} Get the SHA hash for the RHCOS image that will be deployed on the bootstrap VM: USD export RHCOS_QEMU_UNCOMPRESSED_SHA256=USD(/usr/local/bin/openshift-baremetal-install coreos print-stream-json | jq -r --arg ARCH "USD(arch)" '.architectures[USDARCH].artifacts.qemu.formats["qcow2.gz"].disk["uncompressed-sha256"]') Download the image and place it in the /home/kni/rhcos_image_cache directory: USD curl -L USD{RHCOS_QEMU_URI} -o /home/kni/rhcos_image_cache/USD{RHCOS_QEMU_NAME} Confirm SELinux type is of httpd_sys_content_t for the new file: USD ls -Z /home/kni/rhcos_image_cache Create the pod: USD podman run -d --name rhcos_image_cache \ 1 -v /home/kni/rhcos_image_cache:/var/www/html \ -p 8080:8080/tcp \ registry.access.redhat.com/ubi9/httpd-24 1 Creates a caching webserver with the name rhcos_image_cache . This pod serves the bootstrapOSImage image in the install-config.yaml file for deployment. Generate the bootstrapOSImage configuration: USD export BAREMETAL_IP=USD(ip addr show dev baremetal | awk '/inet /{print USD2}' | cut -d"/" -f1) USD export BOOTSTRAP_OS_IMAGE="http://USD{BAREMETAL_IP}:8080/USD{RHCOS_QEMU_NAME}?sha256=USD{RHCOS_QEMU_UNCOMPRESSED_SHA256}" USD echo " bootstrapOSImage=USD{BOOTSTRAP_OS_IMAGE}" Add the required configuration to the install-config.yaml file under platform.baremetal : platform: baremetal: bootstrapOSImage: <bootstrap_os_image> 1 1 Replace <bootstrap_os_image> with the value of USDBOOTSTRAP_OS_IMAGE . See the "Configuring the install-config.yaml file" section for additional details. 3.10. Services for a user-managed load balancer You can configure an OpenShift Container Platform cluster to use a user-managed load balancer in place of the default load balancer. Important Configuring a user-managed load balancer depends on your vendor's load balancer. The information and examples in this section are for guideline purposes only. Consult the vendor documentation for more specific information about the vendor's load balancer. Red Hat supports the following services for a user-managed load balancer: Ingress Controller OpenShift API OpenShift MachineConfig API You can choose whether you want to configure one or all of these services for a user-managed load balancer. Configuring only the Ingress Controller service is a common configuration option. To better understand each service, view the following diagrams: Figure 3.1. Example network workflow that shows an Ingress Controller operating in an OpenShift Container Platform environment Figure 3.2. Example network workflow that shows an OpenShift API operating in an OpenShift Container Platform environment Figure 3.3. Example network workflow that shows an OpenShift MachineConfig API operating in an OpenShift Container Platform environment The following configuration options are supported for user-managed load balancers: Use a node selector to map the Ingress Controller to a specific set of nodes. 
You must assign a static IP address to each node in this set, or configure each node to receive the same IP address from the Dynamic Host Configuration Protocol (DHCP). Infrastructure nodes commonly receive this type of configuration. Target all IP addresses on a subnet. This configuration can reduce maintenance overhead, because you can create and destroy nodes within those networks without reconfiguring the load balancer targets. If you deploy your ingress pods by using a machine set on a smaller network, such as a /27 or /28, you can simplify your load balancer targets. Tip You can list all IP addresses that exist in a network by checking the machine config pool's resources. Before you configure a user-managed load balancer for your OpenShift Container Platform cluster, consider the following information: For a front-end IP address, you can use the same IP address for the front-end IP address, the Ingress Controller's load balancer, and the API load balancer. Check the vendor's documentation for this capability. For a back-end IP address, ensure that an IP address for an OpenShift Container Platform control plane node does not change during the lifetime of the user-managed load balancer. You can achieve this by completing one of the following actions: Assign a static IP address to each control plane node. Configure each node to receive the same IP address from the DHCP every time the node requests a DHCP lease. Depending on the vendor, the DHCP lease might be in the form of an IP reservation or a static DHCP assignment. Manually define each node that runs the Ingress Controller in the user-managed load balancer for the Ingress Controller back-end service. For example, if the Ingress Controller moves to an undefined node, a connection outage can occur. 3.10.1. Configuring a user-managed load balancer You can configure an OpenShift Container Platform cluster to use a user-managed load balancer in place of the default load balancer. Important Before you configure a user-managed load balancer, ensure that you read the "Services for a user-managed load balancer" section. Read the following prerequisites that apply to the service that you want to configure for your user-managed load balancer. Note MetalLB, which runs on a cluster, functions as a user-managed load balancer. OpenShift API prerequisites You defined a front-end IP address. TCP ports 6443 and 22623 are exposed on the front-end IP address of your load balancer. Check the following items: Port 6443 provides access to the OpenShift API service. Port 22623 can provide ignition startup configurations to nodes. The front-end IP address and port 6443 are reachable by all users of your system with a location external to your OpenShift Container Platform cluster. The front-end IP address and port 22623 are reachable only by OpenShift Container Platform nodes. The load balancer backend can communicate with OpenShift Container Platform control plane nodes on ports 6443 and 22623. Ingress Controller prerequisites You defined a front-end IP address. TCP ports 443 and 80 are exposed on the front-end IP address of your load balancer. The front-end IP address, port 80, and port 443 are reachable by all users of your system with a location external to your OpenShift Container Platform cluster. The front-end IP address, port 80, and port 443 are reachable by all nodes that operate in your OpenShift Container Platform cluster.
The load balancer backend can communicate with OpenShift Container Platform nodes that run the Ingress Controller on ports 80, 443, and 1936. Prerequisite for health check URL specifications You can configure most load balancers by setting health check URLs that determine if a service is available or unavailable. OpenShift Container Platform provides these health checks for the OpenShift API, Machine Configuration API, and Ingress Controller backend services. The following examples show health check specifications for the previously listed backend services: Example of a Kubernetes API health check specification Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10 Example of a Machine Config API health check specification Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10 Example of an Ingress Controller health check specification Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10 Procedure Configure the HAProxy Ingress Controller, so that you can enable access to the cluster from your load balancer on ports 6443, 22623, 443, and 80. Depending on your needs, you can specify the IP address of a single subnet or IP addresses from multiple subnets in your HAProxy configuration. Example HAProxy configuration with one listed subnet # ... listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.100:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2 # ... Example HAProxy configuration with multiple listed subnets # ... 
listen api-server-6443 bind *:6443 mode tcp server master-00 192.168.83.89:6443 check inter 1s server master-01 192.168.84.90:6443 check inter 1s server master-02 192.168.85.99:6443 check inter 1s server bootstrap 192.168.80.89:6443 check inter 1s listen machine-config-server-22623 bind *:22623 mode tcp server master-00 192.168.83.89:22623 check inter 1s server master-01 192.168.84.90:22623 check inter 1s server master-02 192.168.85.99:22623 check inter 1s server bootstrap 192.168.80.89:22623 check inter 1s listen ingress-router-80 bind *:80 mode tcp balance source server worker-00 192.168.83.100:80 check inter 1s server worker-01 192.168.83.101:80 check inter 1s listen ingress-router-443 bind *:443 mode tcp balance source server worker-00 192.168.83.100:443 check inter 1s server worker-01 192.168.83.101:443 check inter 1s listen ironic-api-6385 bind *:6385 mode tcp balance source server master-00 192.168.83.89:6385 check inter 1s server master-01 192.168.84.90:6385 check inter 1s server master-02 192.168.85.99:6385 check inter 1s server bootstrap 192.168.80.89:6385 check inter 1s listen inspector-api-5050 bind *:5050 mode tcp balance source server master-00 192.168.83.89:5050 check inter 1s server master-01 192.168.84.90:5050 check inter 1s server master-02 192.168.85.99:5050 check inter 1s server bootstrap 192.168.80.89:5050 check inter 1s # ... Use the curl CLI command to verify that the user-managed load balancer and its resources are operational: Verify that the cluster machine configuration API is accessible to the Kubernetes API server resource, by running the following command and observing the response: USD curl https://<loadbalancer_ip_address>:6443/version --insecure If the configuration is correct, you receive a JSON object in response: { "major": "1", "minor": "11+", "gitVersion": "v1.11.0+ad103ed", "gitCommit": "ad103ed", "gitTreeState": "clean", "buildDate": "2019-01-09T06:44:10Z", "goVersion": "go1.10.3", "compiler": "gc", "platform": "linux/amd64" } Verify that the cluster machine configuration API is accessible to the Machine config server resource, by running the following command and observing the output: USD curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK Content-Length: 0 Verify that the controller is accessible to the Ingress Controller resource on port 80, by running the following command and observing the output: USD curl -I -L -H "Host: console-openshift-console.apps.<cluster_name>.<base_domain>" http://<load_balancer_front_end_IP_address> If the configuration is correct, the output from the command shows the following response: HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache Verify that the controller is accessible to the Ingress Controller resource on port 443, by running the following command and observing the output: USD curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain> If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff 
x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private Configure the DNS records for your cluster to target the front-end IP addresses of the user-managed load balancer. You must update records to your DNS server for the cluster API and applications over the load balancer. Examples of modified DNS records <load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End <load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End Important DNS propagation might take some time for each DNS record to become available. Ensure that each DNS record propagates before validating each record. For your OpenShift Container Platform cluster to use the user-managed load balancer, you must specify the following configuration in your cluster's install-config.yaml file: # ... platform: baremetal: loadBalancer: type: UserManaged 1 apiVIPs: - <api_ip> 2 ingressVIPs: - <ingress_ip> 3 # ... 1 Set UserManaged for the type parameter to specify a user-managed load balancer for your cluster. The parameter defaults to OpenShiftManagedDefault , which denotes the default internal load balancer. For services defined in an openshift-kni-infra namespace, a user-managed load balancer can deploy the coredns service to pods in your cluster but ignores keepalived and haproxy services. 2 Required parameter when you specify a user-managed load balancer. Specify the user-managed load balancer's public IP address, so that the Kubernetes API can communicate with the user-managed load balancer. 3 Required parameter when you specify a user-managed load balancer. Specify the user-managed load balancer's public IP address, so that the user-managed load balancer can manage ingress traffic for your cluster. 
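Before running the verification commands that follow, you can confirm that the DNS records described above have propagated. A minimal check with dig is sketched below; the cluster name, base domain, and addresses are placeholders.

dig +short api.<cluster_name>.<base_domain>                               # should return the user-managed load balancer front-end address
dig +short console-openshift-console.apps.<cluster_name>.<base_domain>    # the console route used in the verification steps below

If either query returns nothing or a different address, wait for propagation or correct the records before continuing.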
Verification Use the curl CLI command to verify that the user-managed load balancer and DNS record configuration are operational: Verify that you can access the cluster API, by running the following command and observing the output: USD curl https://api.<cluster_name>.<base_domain>:6443/version --insecure If the configuration is correct, you receive a JSON object in response: { "major": "1", "minor": "11+", "gitVersion": "v1.11.0+ad103ed", "gitCommit": "ad103ed", "gitTreeState": "clean", "buildDate": "2019-01-09T06:44:10Z", "goVersion": "go1.10.3", "compiler": "gc", "platform": "linux/amd64" } Verify that you can access the cluster machine configuration, by running the following command and observing the output: USD curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK Content-Length: 0 Verify that you can access each cluster application on port, by running the following command and observing the output: USD curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private Verify that you can access each cluster application on port 443, by running the following command and observing the output: USD curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private 3.11. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, NetworkManager sets the hostnames. By default, DHCP provides the hostnames to NetworkManager , which is the recommended method. NetworkManager gets the hostnames through a reverse DNS lookup in the following cases: If DHCP does not provide the hostnames If you use kernel arguments to set the hostnames If you use another method to set the hostnames Reverse DNS lookup occurs after the network has been initialized on a node, and can increase the time it takes NetworkManager to set the hostname. Other system services can start prior to NetworkManager setting the hostname, which can cause those services to use a default hostname such as localhost . 
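If you rely on DHCP to provide hostnames, the exact configuration depends on your DHCP server. The following per-host entry is a sketch for an ISC dhcpd server only, which this document does not prescribe; the MAC address, IP address, and hostname are placeholders.

host openshift-master-0 {
  hardware ethernet 52:54:00:aa:bb:cc;    # MAC address of the node's NIC (placeholder)
  fixed-address 10.0.0.10;                # static DHCP assignment for the node (placeholder)
  option host-name "openshift-master-0";  # hostname delivered with the DHCP lease
}

NetworkManager then receives the hostname together with the lease, avoiding the reverse DNS lookup delay described above.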
Tip You can avoid the delay in setting hostnames by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 3.12. Configuring the install-config.yaml file 3.12.1. Configuring the install-config.yaml file The install-config.yaml file requires some additional details. Most of the information teaches the installation program and the resulting cluster enough about the available hardware that it is able to fully manage it. Note The installation program no longer needs the clusterOSImage RHCOS image because the correct image is in the release payload. Configure install-config.yaml . Change the appropriate variables to match the environment, including pullSecret and sshKey : apiVersion: v1 baseDomain: <domain> metadata: name: <cluster_name> networking: machineNetwork: - cidr: <public_cidr> networkType: OVNKubernetes compute: - name: worker replicas: 2 1 controlPlane: name: master replicas: 3 platform: baremetal: {} platform: baremetal: apiVIPs: - <api_ip> ingressVIPs: - <wildcard_ip> provisioningNetworkCIDR: <CIDR> bootstrapExternalStaticIP: <bootstrap_static_ip_address> 2 bootstrapExternalStaticGateway: <bootstrap_static_gateway> 3 bootstrapExternalStaticDNS: <bootstrap_static_dns> 4 hosts: - name: openshift-master-0 role: master bmc: address: ipmi://<out_of_band_ip> 5 username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: "<installation_disk_drive_path>" 6 - name: <openshift_master_1> role: master bmc: address: ipmi://<out_of_band_ip> username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: "<installation_disk_drive_path>" - name: <openshift_master_2> role: master bmc: address: ipmi://<out_of_band_ip> username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: "<installation_disk_drive_path>" - name: <openshift_worker_0> role: worker bmc: address: ipmi://<out_of_band_ip> username: <user> password: <password> bootMACAddress: <NIC1_mac_address> - name: <openshift_worker_1> role: worker bmc: address: ipmi://<out_of_band_ip> username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: "<installation_disk_drive_path>" pullSecret: '<pull_secret>' sshKey: '<ssh_pub_key>' 1 Scale the compute machines based on the number of compute nodes that are part of the OpenShift Container Platform cluster. Valid options for the replicas value are 0 and integers greater than or equal to 2 . Set the number of replicas to 0 to deploy a three-node cluster, which contains only three control plane machines. A three-node cluster is a smaller, more resource-efficient cluster that can be used for testing, development, and production. You cannot install the cluster with only one compute node. 2 When deploying a cluster with static IP addresses, you must set the bootstrapExternalStaticIP configuration setting to specify the static IP address of the bootstrap VM when there is no DHCP server on the bare-metal network. 3 When deploying a cluster with static IP addresses, you must set the bootstrapExternalStaticGateway configuration setting to specify the gateway IP address for the bootstrap VM when there is no DHCP server on the bare-metal network. 
4 When deploying a cluster with static IP addresses, you must set the bootstrapExternalStaticDNS configuration setting to specify the DNS address for the bootstrap VM when there is no DHCP server on the bare-metal network. 5 See the BMC addressing sections for more options. 6 To set the path to the installation disk drive, enter the kernel name of the disk. For example, /dev/sda . Important Because the disk discovery order is not guaranteed, the kernel name of the disk can change across booting options for machines with multiple disks. For example, /dev/sda becomes /dev/sdb and vice versa. To avoid this issue, you must use persistent disk attributes, such as the disk World Wide Name (WWN) or /dev/disk/by-path/ . It is recommended to use the /dev/disk/by-path/<device_path> link to the storage location. To use the disk WWN, replace the deviceName parameter with the wwnWithExtension parameter. Depending on the parameter that you use, enter either of the following values: The disk name. For example, /dev/sda , or /dev/disk/by-path/ . The disk WWN. For example, "0x64cd98f04fde100024684cf3034da5c2" . Ensure that you enter the disk WWN value within quotes so that it is used as a string value and not a hexadecimal value. Failure to meet these requirements for the rootDeviceHints parameter might result in the following error: ironic-inspector inspection failed: No disks satisfied root device hints Note Before OpenShift Container Platform 4.12, the cluster installation program only accepted an IPv4 address or an IPv6 address for the apiVIP and ingressVIP configuration settings. In OpenShift Container Platform 4.12 and later, these configuration settings are deprecated. Instead, use a list format in the apiVIPs and ingressVIPs configuration settings to specify IPv4 addresses, IPv6 addresses, or both IP address formats. Create a directory to store the cluster configuration: USD mkdir ~/clusterconfigs Copy the install-config.yaml file to the new directory: USD cp install-config.yaml ~/clusterconfigs Ensure all bare metal nodes are powered off prior to installing the OpenShift Container Platform cluster: USD ipmitool -I lanplus -U <user> -P <password> -H <management-server-ip> power off Remove old bootstrap resources if any are left over from a deployment attempt: for i in USD(sudo virsh list | tail -n +3 | grep bootstrap | awk {'print USD2'}); do sudo virsh destroy USDi; sudo virsh undefine USDi; sudo virsh vol-delete USDi --pool USDi; sudo virsh vol-delete USDi.ign --pool USDi; sudo virsh pool-destroy USDi; sudo virsh pool-undefine USDi; done 3.12.2. Additional install-config parameters See the following tables for the required parameters, the hosts parameter, and the bmc parameter for the install-config.yaml file. Table 3.1. Required parameters Parameters Default Description baseDomain The domain name for the cluster. For example, example.com . bootMode UEFI The boot mode for a node. Options are legacy , UEFI , and UEFISecureBoot . If bootMode is not set, Ironic sets it while inspecting the node. bootstrapExternalStaticDNS The static network DNS of the bootstrap node. You must set this value when deploying a cluster with static IP addresses when there is no Dynamic Host Configuration Protocol (DHCP) server on the bare-metal network. If you do not set this value, the installation program will use the value from bootstrapExternalStaticGateway , which causes problems when the IP address values of the gateway and DNS are different. bootstrapExternalStaticIP The static IP address for the bootstrap VM. 
You must set this value when deploying a cluster with static IP addresses when there is no DHCP server on the bare-metal network. bootstrapExternalStaticGateway The static IP address of the gateway for the bootstrap VM. You must set this value when deploying a cluster with static IP addresses when there is no DHCP server on the bare-metal network. sshKey The sshKey configuration setting contains the key in the ~/.ssh/id_rsa.pub file required to access the control plane nodes and compute nodes. Typically, this key is from the provisioner node. pullSecret The pullSecret configuration setting contains a copy of the pull secret downloaded from the Install OpenShift on Bare Metal page when preparing the provisioner node. The name to be given to the OpenShift Container Platform cluster. For example, openshift . The public CIDR (Classless Inter-Domain Routing) of the external network. For example, 10.0.0.0/24 . The OpenShift Container Platform cluster requires a name be provided for compute nodes even if there are zero nodes. Replicas sets the number of compute nodes in the OpenShift Container Platform cluster. The OpenShift Container Platform cluster requires a name for control plane nodes. Replicas sets the number of control plane nodes included as part of the OpenShift Container Platform cluster. provisioningNetworkInterface The name of the network interface on nodes connected to the provisioning network. For OpenShift Container Platform 4.9 and later releases, use the bootMACAddress configuration setting to enable Ironic to identify the IP address of the NIC instead of using the provisioningNetworkInterface configuration setting to identify the name of the NIC. defaultMachinePlatform The default configuration used for machine pools without a platform configuration. apiVIPs (Optional) The virtual IP address for Kubernetes API communication. This setting must either be provided in the install-config.yaml file as a reserved IP from the MachineNetwork or preconfigured in the DNS so that the default name resolves correctly. Use the virtual IP address and not the FQDN when adding a value to the apiVIPs configuration setting in the install-config.yaml file. The primary IP address must be from the IPv4 network when using dual stack networking. If not set, the installation program uses api.<cluster_name>.<base_domain> to derive the IP address from the DNS. Note Before OpenShift Container Platform 4.12, the cluster installation program only accepted an IPv4 address or an IPv6 address for the apiVIP configuration setting. From OpenShift Container Platform 4.12 or later, the apiVIP configuration setting is deprecated. Instead, use a list format for the apiVIPs configuration setting to specify an IPv4 address, an IPv6 address or both IP address formats. disableCertificateVerification False redfish and redfish-virtualmedia need this parameter to manage BMC addresses. The value should be True when using a self-signed certificate for BMC addresses. ingressVIPs (Optional) The virtual IP address for ingress traffic. This setting must either be provided in the install-config.yaml file as a reserved IP from the MachineNetwork or preconfigured in the DNS so that the default name resolves correctly. Use the virtual IP address and not the FQDN when adding a value to the ingressVIPs configuration setting in the install-config.yaml file. The primary IP address must be from the IPv4 network when using dual stack networking. 
If not set, the installation program uses test.apps.<cluster_name>.<base_domain> to derive the IP address from the DNS. Note Before OpenShift Container Platform 4.12, the cluster installation program only accepted an IPv4 address or an IPv6 address for the ingressVIP configuration setting. In OpenShift Container Platform 4.12 and later, the ingressVIP configuration setting is deprecated. Instead, use a list format for the ingressVIPs configuration setting to specify an IPv4 addresses, an IPv6 addresses or both IP address formats. Table 3.2. Optional Parameters Parameters Default Description provisioningDHCPRange 172.22.0.10,172.22.0.100 Defines the IP range for nodes on the provisioning network. provisioningNetworkCIDR 172.22.0.0/24 The CIDR for the network to use for provisioning. This option is required when not using the default address range on the provisioning network. clusterProvisioningIP The third IP address of the provisioningNetworkCIDR . The IP address within the cluster where the provisioning services run. Defaults to the third IP address of the provisioning subnet. For example, 172.22.0.3 . bootstrapProvisioningIP The second IP address of the provisioningNetworkCIDR . The IP address on the bootstrap VM where the provisioning services run while the installer is deploying the control plane (master) nodes. Defaults to the second IP address of the provisioning subnet. For example, 172.22.0.2 or 2620:52:0:1307::2 . externalBridge baremetal The name of the bare-metal bridge of the hypervisor attached to the bare-metal network. provisioningBridge provisioning The name of the provisioning bridge on the provisioner host attached to the provisioning network. architecture Defines the host architecture for your cluster. Valid values are amd64 or arm64 . defaultMachinePlatform The default configuration used for machine pools without a platform configuration. bootstrapOSImage A URL to override the default operating system image for the bootstrap node. The URL must contain a SHA-256 hash of the image. For example: https://mirror.openshift.com/rhcos-<version>-qemu.qcow2.gz?sha256=<uncompressed_sha256> . provisioningNetwork The provisioningNetwork configuration setting determines whether the cluster uses the provisioning network. If it does, the configuration setting also determines if the cluster manages the network. Disabled : Set this parameter to Disabled to disable the requirement for a provisioning network. When set to Disabled , you must only use virtual media based provisioning, or bring up the cluster using the assisted installer. If Disabled and using power management, BMCs must be accessible from the bare-metal network. If Disabled , you must provide two IP addresses on the bare-metal network that are used for the provisioning services. Managed : Set this parameter to Managed , which is the default, to fully manage the provisioning network, including DHCP, TFTP, and so on. Unmanaged : Set this parameter to Unmanaged to enable the provisioning network but take care of manual configuration of DHCP. Virtual media provisioning is recommended but PXE is still available if required. httpProxy Set this parameter to the appropriate HTTP proxy used within your environment. httpsProxy Set this parameter to the appropriate HTTPS proxy used within your environment. noProxy Set this parameter to the appropriate list of exclusions for proxy usage within your environment. Hosts The hosts parameter is a list of separate bare metal assets used to build the cluster. Table 3.3. 
Hosts Name Default Description name The name of the BareMetalHost resource to associate with the details. For example, openshift-master-0 . role The role of the bare metal node. Either master (control plane node) or worker (compute node). bmc Connection details for the baseboard management controller. See the BMC addressing section for additional details. bootMACAddress The MAC address of the NIC that the host uses for the provisioning network. Ironic retrieves the IP address using the bootMACAddress configuration setting. Then, it binds to the host. Note You must provide a valid MAC address from the host if you disabled the provisioning network. networkConfig Set this optional parameter to configure the network interface of a host. See "(Optional) Configuring host network interfaces" for additional details. 3.12.3. BMC addressing Most vendors support Baseboard Management Controller (BMC) addressing with the Intelligent Platform Management Interface (IPMI). IPMI does not encrypt communications. It is suitable for use within a data center over a secured or dedicated management network. Check with your vendor to see if they support Redfish network boot. Redfish delivers simple and secure management for converged, hybrid IT and the Software Defined Data Center (SDDC). Redfish is human readable and machine capable, and leverages common internet and web services standards to expose information directly to the modern tool chain. If your hardware does not support Redfish network boot, use IPMI. You can modify the BMC address during installation while the node is in the Registering state. If you need to modify the BMC address after the node leaves the Registering state, you must disconnect the node from Ironic, edit the BareMetalHost resource, and reconnect the node to Ironic. See the Editing a BareMetalHost resource section for details. IPMI Hosts using IPMI use the ipmi://<out-of-band-ip>:<port> address format, which defaults to port 623 if not specified. The following example demonstrates an IPMI configuration within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: ipmi://<out-of-band-ip> username: <user> password: <password> Important The provisioning network is required when PXE booting using IPMI for BMC addressing. It is not possible to PXE boot hosts without a provisioning network. If you deploy without a provisioning network, you must use a virtual media BMC addressing option such as redfish-virtualmedia or idrac-virtualmedia . See "Redfish virtual media for HPE iLO" in the "BMC addressing for HPE iLO" section or "Redfish virtual media for Dell iDRAC" in the "BMC addressing for Dell iDRAC" section for additional details. Redfish network boot To enable Redfish, use redfish:// or redfish+http:// to disable TLS. The installer requires both the hostname or the IP address and the path to the system ID. The following example demonstrates a Redfish configuration within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. 
The following example demonstrates a Redfish configuration using the disableCertificateVerification: True configuration parameter within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> disableCertificateVerification: True Additional resources Editing a BareMetalHost resource 3.12.4. Verifying support for Redfish APIs When installing using the Redfish API, the installation program calls several Redfish endpoints on the baseboard management controller (BMC) when using installer-provisioned infrastructure on bare metal. If you use Redfish, ensure that your BMC supports all of the Redfish APIs before installation. Procedure Set the IP address or hostname of the BMC by running the following command: USD export SERVER=<ip_address> 1 1 Replace <ip_address> with the IP address or hostname of the BMC. Set the ID of the system by running the following command: USD export SystemID=<system_id> 1 1 Replace <system_id> with the system ID. For example, System.Embedded.1 or 1 . See the following vendor-specific BMC sections for details. List of Redfish APIs Check power on support by running the following command: USD curl -u USDUSER:USDPASS -X POST -H'Content-Type: application/json' -H'Accept: application/json' -d '{"ResetType": "On"}' https://USDSERVER/redfish/v1/Systems/USDSystemID/Actions/ComputerSystem.Reset Check power off support by running the following command: USD curl -u USDUSER:USDPASS -X POST -H'Content-Type: application/json' -H'Accept: application/json' -d '{"ResetType": "ForceOff"}' https://USDSERVER/redfish/v1/Systems/USDSystemID/Actions/ComputerSystem.Reset Check the temporary boot implementation that uses pxe by running the following command: USD curl -u USDUSER:USDPASS -X PATCH -H "Content-Type: application/json" -H "If-Match: <ETAG>" https://USDServer/redfish/v1/Systems/USDSystemID/ -d '{"Boot": {"BootSourceOverrideTarget": "pxe", "BootSourceOverrideEnabled": "Once"}} Check the status of setting the firmware boot mode that uses Legacy or UEFI by running the following command: USD curl -u USDUSER:USDPASS -X PATCH -H "Content-Type: application/json" -H "If-Match: <ETAG>" https://USDServer/redfish/v1/Systems/USDSystemID/ -d '{"Boot": {"BootSourceOverrideMode":"UEFI"}} List of Redfish virtual media APIs Check the ability to set the temporary boot device that uses cd or dvd by running the following command: USD curl -u USDUSER:USDPASS -X PATCH -H "Content-Type: application/json" -H "If-Match: <ETAG>" https://USDServer/redfish/v1/Systems/USDSystemID/ -d '{"Boot": {"BootSourceOverrideTarget": "cd", "BootSourceOverrideEnabled": "Once"}}' Virtual media might use POST or PATCH , depending on your hardware. Check the ability to mount virtual media by running one of the following commands: USD curl -u USDUSER:USDPASS -X POST -H "Content-Type: application/json" https://USDServer/redfish/v1/Managers/USDManagerID/VirtualMedia/USDVmediaId -d '{"Image": "https://example.com/test.iso", "TransferProtocolType": "HTTPS", "UserName": "", "Password":""}' USD curl -u USDUSER:USDPASS -X PATCH -H "Content-Type: application/json" -H "If-Match: <ETAG>" https://USDServer/redfish/v1/Managers/USDManagerID/VirtualMedia/USDVmediaId -d '{"Image": "https://example.com/test.iso", "TransferProtocolType": "HTTPS", "UserName": "", "Password":""}' Note The PowerOn and PowerOff commands for Redfish APIs are the same for the Redfish virtual media APIs. 
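The curl checks above also reference the shell variables USER, PASS, ManagerID, and VmediaId, which this procedure does not set explicitly; only SERVER and SystemID are exported earlier. One assumed way to define them before running the checks, with placeholder values, is:

$ export USER=<bmc_username>
$ export PASS=<bmc_password>
$ export ManagerID=<manager_id>        # BMC-specific, for example 1 or iDRAC.Embedded.1
$ export VmediaId=<virtual_media_id>   # BMC-specific, for example 1 or CD

Also note that the sample commands refer to both SERVER and Server; shell variables are case-sensitive, so export and use a single spelling consistently.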
In some hardware, you might only find the VirtualMedia resource under Systems/USDSystemID instead of Managers/USDManagerID . For the VirtualMedia resource, the UserName and Password fields are optional. Important HTTPS and HTTP are the only supported parameter types for TransferProtocolTypes . 3.12.5. BMC addressing for Dell iDRAC The address field for each bmc entry is a URL for connecting to the OpenShift Container Platform cluster nodes, including the type of controller in the URL scheme and its location on the network. platform: baremetal: hosts: - name: <hostname> role: <master | worker> bmc: address: <address> 1 username: <user> password: <password> 1 The address configuration setting specifies the protocol. For Dell hardware, Red Hat supports integrated Dell Remote Access Controller (iDRAC) virtual media, Redfish network boot, and IPMI. BMC address formats for Dell iDRAC Protocol Address Format iDRAC virtual media idrac-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 Redfish network boot redfish://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 IPMI ipmi://<out-of-band-ip> Important Use idrac-virtualmedia as the protocol for Redfish virtual media. redfish-virtualmedia will not work on Dell hardware. Dell's idrac-virtualmedia uses the Redfish standard with Dell's OEM extensions. See the following sections for additional details. Redfish virtual media for Dell iDRAC For Redfish virtual media on Dell servers, use idrac-virtualmedia:// in the address setting. Using redfish-virtualmedia:// will not work. Note Use idrac-virtualmedia:// as the protocol for Redfish virtual media. Using redfish-virtualmedia:// will not work on Dell hardware, because the idrac-virtualmedia:// protocol corresponds to the idrac hardware type and the Redfish protocol in Ironic. Dell's idrac-virtualmedia:// protocol uses the Redfish standard with Dell's OEM extensions. Ironic also supports the idrac type with the WSMAN protocol. Therefore, you must specify idrac-virtualmedia:// to avoid unexpected behavior when electing to use Redfish with virtual media on Dell hardware. The following example demonstrates using iDRAC virtual media within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: idrac-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password> While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. Note Ensure the OpenShift Container Platform cluster nodes have AutoAttach enabled through the iDRAC console. The menu path is: Configuration Virtual Media Attach Mode AutoAttach . The following example demonstrates a Redfish configuration using the disableCertificateVerification: True configuration parameter within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: idrac-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password> disableCertificateVerification: True Redfish network boot for iDRAC To enable Redfish, use redfish:// or redfish+http:// to disable transport layer security (TLS). The installer requires both the hostname or the IP address and the path to the system ID. The following example demonstrates a Redfish configuration within the install-config.yaml file. 
platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password> While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. The following example demonstrates a Redfish configuration using the disableCertificateVerification: True configuration parameter within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password> disableCertificateVerification: True Note There is a known issue on Dell iDRAC 9 with firmware version 04.40.00.00 and all releases up to including the 5.xx series for installer-provisioned installations on bare metal deployments. The virtual console plugin defaults to eHTML5, an enhanced version of HTML5, which causes problems with the InsertVirtualMedia workflow. Set the plugin to use HTML5 to avoid this issue. The menu path is Configuration Virtual console Plug-in Type HTML5 . Ensure the OpenShift Container Platform cluster nodes have AutoAttach enabled through the iDRAC console. The menu path is: Configuration Virtual Media Attach Mode AutoAttach . 3.12.6. BMC addressing for HPE iLO The address field for each bmc entry is a URL for connecting to the OpenShift Container Platform cluster nodes, including the type of controller in the URL scheme and its location on the network. platform: baremetal: hosts: - name: <hostname> role: <master | worker> bmc: address: <address> 1 username: <user> password: <password> 1 The address configuration setting specifies the protocol. For HPE integrated Lights Out (iLO), Red Hat supports Redfish virtual media, Redfish network boot, and IPMI. Table 3.4. BMC address formats for HPE iLO Protocol Address Format Redfish virtual media redfish-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/1 Redfish network boot redfish://<out-of-band-ip>/redfish/v1/Systems/1 IPMI ipmi://<out-of-band-ip> See the following sections for additional details. Redfish virtual media for HPE iLO To enable Redfish virtual media for HPE servers, use redfish-virtualmedia:// in the address setting. The following example demonstrates using Redfish virtual media within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. The following example demonstrates a Redfish configuration using the disableCertificateVerification: True configuration parameter within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> disableCertificateVerification: True Note Redfish virtual media is not supported on 9th generation systems running iLO4, because Ironic does not support iLO4 with virtual media. Redfish network boot for HPE iLO To enable Redfish, use redfish:// or redfish+http:// to disable TLS. 
The installer requires both the hostname or the IP address and the path to the system ID. The following example demonstrates a Redfish configuration within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. The following example demonstrates a Redfish configuration using the disableCertificateVerification: True configuration parameter within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> disableCertificateVerification: True 3.12.7. BMC addressing for Fujitsu iRMC The address field for each bmc entry is a URL for connecting to the OpenShift Container Platform cluster nodes, including the type of controller in the URL scheme and its location on the network. platform: baremetal: hosts: - name: <hostname> role: <master | worker> bmc: address: <address> 1 username: <user> password: <password> 1 The address configuration setting specifies the protocol. For Fujitsu hardware, Red Hat supports integrated Remote Management Controller (iRMC) and IPMI. Table 3.5. BMC address formats for Fujitsu iRMC Protocol Address Format iRMC irmc://<out-of-band-ip> IPMI ipmi://<out-of-band-ip> iRMC Fujitsu nodes can use irmc://<out-of-band-ip> and defaults to port 443 . The following example demonstrates an iRMC configuration within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: irmc://<out-of-band-ip> username: <user> password: <password> Note Currently Fujitsu supports iRMC S5 firmware version 3.05P and above for installer-provisioned installation on bare metal. 3.12.8. BMC addressing for Cisco CIMC The address field for each bmc entry is a URL for connecting to the OpenShift Container Platform cluster nodes, including the type of controller in the URL scheme and its location on the network. platform: baremetal: hosts: - name: <hostname> role: <master | worker> bmc: address: <address> 1 username: <user> password: <password> 1 The address configuration setting specifies the protocol. For Cisco UCS UCSX-210C-M6 hardware, Red Hat supports Cisco Integrated Management Controller (CIMC). Table 3.6. BMC address format for Cisco CIMC Protocol Address Format Redfish virtual media redfish-virtualmedia://<server_kvm_ip>/redfish/v1/Systems/<serial_number> To enable Redfish virtual media for Cisco UCS UCSX-210C-M6 hardware, use redfish-virtualmedia:// in the address setting. The following example demonstrates using Redfish virtual media within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish-virtualmedia://<server_kvm_ip>/redfish/v1/Systems/<serial_number> username: <user> password: <password> While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. The following example demonstrates a Redfish configuration by using the disableCertificateVerification: True configuration parameter within the install-config.yaml file. 
platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish-virtualmedia://<server_kvm_ip>/redfish/v1/Systems/<serial_number> username: <user> password: <password> disableCertificateVerification: True 3.12.9. Root device hints The rootDeviceHints parameter enables the installer to provision the Red Hat Enterprise Linux CoreOS (RHCOS) image to a particular device. The installer examines the devices in the order it discovers them, and compares the discovered values with the hint values. The installer uses the first discovered device that matches the hint value. The configuration can combine multiple hints, but a device must match all hints for the installer to select it. Table 3.7. Subfields Subfield Description deviceName A string containing a Linux device name such as /dev/vda or /dev/disk/by-path/ . It is recommended to use the /dev/disk/by-path/<device_path> link to the storage location. The hint must match the actual value exactly. hctl A string containing a SCSI bus address like 0:0:0:0 . The hint must match the actual value exactly. model A string containing a vendor-specific device identifier. The hint can be a substring of the actual value. vendor A string containing the name of the vendor or manufacturer of the device. The hint can be a sub-string of the actual value. serialNumber A string containing the device serial number. The hint must match the actual value exactly. minSizeGigabytes An integer representing the minimum size of the device in gigabytes. wwn A string containing the unique storage identifier. The hint must match the actual value exactly. wwnWithExtension A string containing the unique storage identifier with the vendor extension appended. The hint must match the actual value exactly. wwnVendorExtension A string containing the unique vendor storage identifier. The hint must match the actual value exactly. rotational A boolean indicating whether the device should be a rotating disk (true) or not (false). Example usage - name: master-0 role: master bmc: address: ipmi://10.10.0.3:6203 username: admin password: redhat bootMACAddress: de:ad:be:ef:00:40 rootDeviceHints: deviceName: "/dev/sda" 3.12.10. Optional: Setting proxy settings To deploy an OpenShift Container Platform cluster using a proxy, make the following changes to the install-config.yaml file. apiVersion: v1 baseDomain: <domain> proxy: httpProxy: http://USERNAME:[email protected]:PORT httpsProxy: https://USERNAME:[email protected]:PORT noProxy: <WILDCARD_OF_DOMAIN>,<PROVISIONING_NETWORK/CIDR>,<BMC_ADDRESS_RANGE/CIDR> The following is an example of noProxy with values. noProxy: .example.com,172.22.0.0/24,10.10.0.0/24 With a proxy enabled, set the appropriate values of the proxy in the corresponding key/value pair. Key considerations: If the proxy does not have an HTTPS proxy, change the value of httpsProxy from https:// to http:// . If using a provisioning network, include it in the noProxy setting, otherwise the installer will fail. Set all of the proxy settings as environment variables within the provisioner node. For example, HTTP_PROXY , HTTPS_PROXY , and NO_PROXY . Note When provisioning with IPv6, you cannot define a CIDR address block in the noProxy settings. You must define each address separately. 3.12.11. Optional: Deploying with no provisioning network To deploy an OpenShift Container Platform cluster without a provisioning network, make the following changes to the install-config.yaml file. 
platform: baremetal: apiVIPs: - <api_VIP> ingressVIPs: - <ingress_VIP> provisioningNetwork: "Disabled" 1 1 Add the provisioningNetwork configuration setting, if needed, and set it to Disabled . Important The provisioning network is required for PXE booting. If you deploy without a provisioning network, you must use a virtual media BMC addressing option such as redfish-virtualmedia or idrac-virtualmedia . See "Redfish virtual media for HPE iLO" in the "BMC addressing for HPE iLO" section or "Redfish virtual media for Dell iDRAC" in the "BMC addressing for Dell iDRAC" section for additional details. 3.12.12. Optional: Deploying with dual-stack networking For dual-stack networking in OpenShift Container Platform clusters, you can configure IPv4 and IPv6 address endpoints for cluster nodes. To configure IPv4 and IPv6 address endpoints for cluster nodes, edit the machineNetwork , clusterNetwork , and serviceNetwork configuration settings in the install-config.yaml file. Each setting must have two CIDR entries each. For a cluster with the IPv4 family as the primary address family, specify the IPv4 setting first. For a cluster with the IPv6 family as the primary address family, specify the IPv6 setting first. machineNetwork: - cidr: {{ extcidrnet }} - cidr: {{ extcidrnet6 }} clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd02::/48 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd03::/112 Important On a bare-metal platform, if you specified an NMState configuration in the networkConfig section of your install-config.yaml file, add interfaces.wait-ip: ipv4+ipv6 to the NMState YAML file to resolve an issue that prevents your cluster from deploying on a dual-stack network. Example NMState YAML configuration file that includes the wait-ip parameter networkConfig: nmstate: interfaces: - name: <interface_name> # ... wait-ip: ipv4+ipv6 # ... To provide an interface to the cluster for applications that use IPv4 and IPv6 addresses, configure IPv4 and IPv6 virtual IP (VIP) address endpoints for the Ingress VIP and API VIP services. To configure IPv4 and IPv6 address endpoints, edit the apiVIPs and ingressVIPs configuration settings in the install-config.yaml file . The apiVIPs and ingressVIPs configuration settings use a list format. The order of the list indicates the primary and secondary VIP address for each service. platform: baremetal: apiVIPs: - <api_ipv4> - <api_ipv6> ingressVIPs: - <wildcard_ipv4> - <wildcard_ipv6> Note For a cluster with dual-stack networking configuration, you must assign both IPv4 and IPv6 addresses to the same interface. 3.12.13. Optional: Configuring host network interfaces Before installation, you can set the networkConfig configuration setting in the install-config.yaml file to configure host network interfaces using NMState. The most common use case for this functionality is to specify a static IP address on the bare-metal network, but you can also configure other networks such as a storage network. This functionality supports other NMState features such as VLAN, VXLAN, bridges, bonds, routes, MTU, and DNS resolver settings. Prerequisites Configure a PTR DNS record with a valid hostname for each node with a static IP address. Install the NMState CLI ( nmstate ). Procedure Optional: Consider testing the NMState syntax with nmstatectl gc before including it in the install-config.yaml file, because the installer will not check the NMState YAML syntax. Note Errors in the YAML syntax might result in a failure to apply the network configuration. 
Additionally, maintaining the validated YAML syntax is useful when applying changes using Kubernetes NMState after deployment or when expanding the cluster. Create an NMState YAML file: interfaces: - name: <nic1_name> 1 type: ethernet state: up ipv4: address: - ip: <ip_address> 2 prefix-length: 24 enabled: true dns-resolver: config: server: - <dns_ip_address> 3 routes: config: - destination: 0.0.0.0/0 next-hop-address: <next_hop_ip_address> 4 next-hop-interface: <next_hop_nic1_name> 5 1 2 3 4 5 Replace <nic1_name> , <ip_address> , <dns_ip_address> , <next_hop_ip_address> and <next_hop_nic1_name> with appropriate values. Test the configuration file by running the following command: $ nmstatectl gc <nmstate_yaml_file> Replace <nmstate_yaml_file> with the configuration file name. Use the networkConfig configuration setting by adding the NMState configuration to hosts within the install-config.yaml file: hosts: - name: openshift-master-0 role: master bmc: address: redfish+http://<out_of_band_ip>/redfish/v1/Systems/ username: <user> password: <password> disableCertificateVerification: null bootMACAddress: <NIC1_mac_address> bootMode: UEFI rootDeviceHints: deviceName: "/dev/sda" networkConfig: 1 interfaces: - name: <nic1_name> 2 type: ethernet state: up ipv4: address: - ip: <ip_address> 3 prefix-length: 24 enabled: true dns-resolver: config: server: - <dns_ip_address> 4 routes: config: - destination: 0.0.0.0/0 next-hop-address: <next_hop_ip_address> 5 next-hop-interface: <next_hop_nic1_name> 6 1 Add the NMState YAML syntax to configure the host interfaces. 2 3 4 5 6 Replace <nic1_name> , <ip_address> , <dns_ip_address> , <next_hop_ip_address> and <next_hop_nic1_name> with appropriate values. Important After deploying the cluster, you cannot modify the networkConfig configuration setting of the install-config.yaml file to make changes to the host network interface. Use the Kubernetes NMState Operator to make changes to the host network interface after deployment. 3.12.14. Configuring host network interfaces for subnets For edge computing scenarios, it can be beneficial to locate compute nodes closer to the edge. To locate remote nodes in subnets, you might use different network segments or subnets for the remote nodes than you used for the control plane subnet and local compute nodes. You can reduce latency for the edge and allow for enhanced scalability by setting up subnets for edge computing scenarios. Important When using the default load balancer, OpenShiftManagedDefault, and adding remote nodes to your OpenShift Container Platform cluster, all control plane nodes must run in the same subnet. When using more than one subnet, you can also configure the Ingress VIP to run on the control plane nodes by using a manifest. See "Configuring network components to run on the control plane" for details. If you have established different network segments or subnets for remote nodes as described in the section on "Establishing communication between subnets", you must specify the subnets in the machineNetwork configuration setting if the workers are using static IP addresses, bonds or other advanced networking. When setting the node IP address in the networkConfig parameter for each remote node, you must also specify the gateway and the DNS server for the subnet containing the control plane nodes when using static IP addresses. This ensures that the remote nodes can reach the subnet containing the control plane and that they can receive network traffic from the control plane.
Note Deploying a cluster with multiple subnets requires using virtual media, such as redfish-virtualmedia or idrac-virtualmedia , because remote nodes cannot access the local provisioning network. Procedure Add the subnets to the machineNetwork in the install-config.yaml file when using static IP addresses: networking: machineNetwork: - cidr: 10.0.0.0/24 - cidr: 192.168.0.0/24 networkType: OVNKubernetes Add the gateway and DNS configuration to the networkConfig parameter of each edge compute node using NMState syntax when using a static IP address or advanced networking such as bonds: networkConfig: interfaces: - name: <interface_name> 1 type: ethernet state: up ipv4: enabled: true dhcp: false address: - ip: <node_ip> 2 prefix-length: 24 gateway: <gateway_ip> 3 dns-resolver: config: server: - <dns_ip> 4 1 Replace <interface_name> with the interface name. 2 Replace <node_ip> with the IP address of the node. 3 Replace <gateway_ip> with the IP address of the gateway. 4 Replace <dns_ip> with the IP address of the DNS server. 3.12.15. Optional: Configuring address generation modes for SLAAC in dual-stack networks For dual-stack clusters that use Stateless Address AutoConfiguration (SLAAC), you must specify a global value for the ipv6.addr-gen-mode network setting. You can set this value using NMState to configure the RAM disk and the cluster configuration files. If you do not configure a consistent ipv6.addr-gen-mode in these locations, IPv6 address mismatches can occur between CSR resources and BareMetalHost resources in the cluster. Prerequisites Install the NMState CLI ( nmstate ). Procedure Optional: Consider testing the NMState YAML syntax with the nmstatectl gc command before including it in the install-config.yaml file because the installation program will not check the NMState YAML syntax. Create an NMState YAML file: interfaces: - name: eth0 ipv6: addr-gen-mode: <address_mode> 1 1 Replace <address_mode> with the type of address generation mode required for IPv6 addresses in the cluster. Valid values are eui64 , stable-privacy , or random . Test the configuration file by running the following command: USD nmstatectl gc <nmstate_yaml_file> 1 1 Replace <nmstate_yaml_file> with the name of the test configuration file. Add the NMState configuration to the hosts.networkConfig section within the install-config.yaml file: hosts: - name: openshift-master-0 role: master bmc: address: redfish+http://<out_of_band_ip>/redfish/v1/Systems/ username: <user> password: <password> disableCertificateVerification: null bootMACAddress: <NIC1_mac_address> bootMode: UEFI rootDeviceHints: deviceName: "/dev/sda" networkConfig: interfaces: - name: eth0 ipv6: addr-gen-mode: <address_mode> 1 ... 1 Replace <address_mode> with the type of address generation mode required for IPv6 addresses in the cluster. Valid values are eui64 , stable-privacy , or random . 3.12.16. Optional: Configuring host network interfaces for dual port NIC Before installation, you can set the networkConfig configuration setting in the install-config.yaml file to configure host network interfaces by using NMState to support dual port NIC. Important Support for Day 1 operations associated with enabling NIC partitioning for SR-IOV devices is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. 
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenShift Virtualization only supports the following bond modes: mode=1 active-backup mode=2 balance-xor mode=4 802.3ad Prerequisites Configure a PTR DNS record with a valid hostname for each node with a static IP address. Install the NMState CLI ( nmstate ). Note Errors in the YAML syntax might result in a failure to apply the network configuration. Additionally, maintaining the validated YAML syntax is useful when applying changes by using Kubernetes NMState after deployment or when expanding the cluster. Procedure Add the NMState configuration to the networkConfig field of hosts within the install-config.yaml file: hosts: - name: worker-0 role: worker bmc: address: redfish+http://<out_of_band_ip>/redfish/v1/Systems/ username: <user> password: <password> disableCertificateVerification: false bootMACAddress: <NIC1_mac_address> bootMode: UEFI networkConfig: 1 interfaces: 2 - name: eno1 3 type: ethernet 4 state: up mac-address: 0c:42:a1:55:f3:06 ipv4: enabled: true dhcp: false 5 ethernet: sr-iov: total-vfs: 2 6 ipv6: enabled: false dhcp: false - name: sriov:eno1:0 type: ethernet state: up 7 ipv4: enabled: false 8 ipv6: enabled: false - name: sriov:eno1:1 type: ethernet state: down - name: eno2 type: ethernet state: up mac-address: 0c:42:a1:55:f3:07 ipv4: enabled: true ethernet: sr-iov: total-vfs: 2 ipv6: enabled: false - name: sriov:eno2:0 type: ethernet state: up ipv4: enabled: false ipv6: enabled: false - name: sriov:eno2:1 type: ethernet state: down - name: bond0 type: bond state: up min-tx-rate: 100 9 max-tx-rate: 200 10 link-aggregation: mode: active-backup 11 options: primary: sriov:eno1:0 12 port: - sriov:eno1:0 - sriov:eno2:0 ipv4: address: - ip: 10.19.16.57 13 prefix-length: 23 dhcp: false enabled: true ipv6: enabled: false dns-resolver: config: server: - 10.11.5.160 - 10.2.70.215 routes: config: - destination: 0.0.0.0/0 next-hop-address: 10.19.17.254 next-hop-interface: bond0 14 table-id: 254 1 The networkConfig field has information about the network configuration of the host, with subfields including interfaces , dns-resolver , and routes . 2 The interfaces field is an array of network interfaces defined for the host. 3 The name of the interface. 4 The type of interface. This example creates an ethernet interface. 5 Set this to false to disable DHCP for the physical function (PF) if it is not strictly required. 6 Set to the number of SR-IOV virtual functions (VFs) to instantiate. 7 Set this to up . 8 Set this to false to disable IPv4 addressing for the VF attached to the bond. 9 Sets a minimum transmission rate, in Mbps, for the VF. This sample value sets a rate of 100 Mbps. This value must be less than or equal to the maximum transmission rate. Intel NICs do not support the min-tx-rate parameter. For more information, see BZ#1772847 . 10 Sets a maximum transmission rate, in Mbps, for the VF. This sample value sets a rate of 200 Mbps. 11 Sets the desired bond mode. 12 Sets the preferred port of the bonding interface. The bond uses the primary device as the first device of the bonding interfaces. The bond does not abandon the primary device interface unless it fails. This setting is particularly useful when one NIC in the bonding interface is faster and, therefore, able to handle a bigger load.
This setting is only valid when the bonding interface is in active-backup mode (mode 1) and balance-tlb (mode 5). 13 Sets a static IP address for the bond interface. This is the node IP address. 14 Sets bond0 as the gateway for the default route. Important After deploying the cluster, you cannot change the networkConfig configuration setting of the install-config.yaml file to make changes to the host network interface. Use the Kubernetes NMState Operator to make changes to the host network interface after deployment. Additional resources Configuring network bonding 3.12.17. Configuring multiple cluster nodes You can simultaneously configure OpenShift Container Platform cluster nodes with identical settings. Configuring multiple cluster nodes avoids adding redundant information for each node to the install-config.yaml file. This file contains specific parameters to apply an identical configuration to multiple nodes in the cluster. Compute nodes are configured separately from the controller node. However, configurations for both node types use the highlighted parameters in the install-config.yaml file to enable multi-node configuration. Set the networkConfig parameters to BOND , as shown in the following example: hosts: - name: ostest-master-0 [...] networkConfig: &BOND interfaces: - name: bond0 type: bond state: up ipv4: dhcp: true enabled: true link-aggregation: mode: active-backup port: - enp2s0 - enp3s0 - name: ostest-master-1 [...] networkConfig: *BOND - name: ostest-master-2 [...] networkConfig: *BOND Note Configuration of multiple cluster nodes is only available for initial deployments on installer-provisioned infrastructure. 3.12.18. Optional: Configuring managed Secure Boot You can enable managed Secure Boot when deploying an installer-provisioned cluster using Redfish BMC addressing, such as redfish , redfish-virtualmedia , or idrac-virtualmedia . To enable managed Secure Boot, add the bootMode configuration setting to each node: Example hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out_of_band_ip> 1 username: <username> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: "/dev/sda" bootMode: UEFISecureBoot 2 1 Ensure the bmc.address setting uses redfish , redfish-virtualmedia , or idrac-virtualmedia as the protocol. See "BMC addressing for HPE iLO" or "BMC addressing for Dell iDRAC" for additional details. 2 The bootMode setting is UEFI by default. Change it to UEFISecureBoot to enable managed Secure Boot. Note See "Configuring nodes" in the "Prerequisites" to ensure the nodes can support managed Secure Boot. If the nodes do not support managed Secure Boot, see "Configuring nodes for Secure Boot manually" in the "Configuring nodes" section. Configuring Secure Boot manually requires Redfish virtual media. Note Red Hat does not support Secure Boot with IPMI, because IPMI does not provide Secure Boot management facilities. 3.13. Manifest configuration files 3.13.1. Creating the OpenShift Container Platform manifests Create the OpenShift Container Platform manifests. USD ./openshift-baremetal-install --dir ~/clusterconfigs create manifests INFO Consuming Install Config from target directory WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings WARNING Discarding the OpenShift Manifest that was provided in the target directory because its dependencies are dirty and it needs to be regenerated 3.13.2. 
Optional: Configuring NTP for disconnected clusters OpenShift Container Platform installs the chrony Network Time Protocol (NTP) service on the cluster nodes. OpenShift Container Platform nodes must agree on a date and time to run properly. When compute nodes retrieve the date and time from the NTP servers on the control plane nodes, it enables the installation and operation of clusters that are not connected to a routable network and thereby do not have access to a higher stratum NTP server. Procedure Install Butane on your installation host by using the following command: USD sudo dnf -y install butane Create a Butane config, 99-master-chrony-conf-override.bu , including the contents of the chrony.conf file for the control plane nodes. Note See "Creating machine configs with Butane" for information about Butane. Butane config example variant: openshift version: 4.16.0 metadata: name: 99-master-chrony-conf-override labels: machineconfiguration.openshift.io/role: master storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | # Use public servers from the pool.ntp.org project. # Please consider joining the pool (https://www.pool.ntp.org/join.html). # The Machine Config Operator manages this file server openshift-master-0.<cluster-name>.<domain> iburst 1 server openshift-master-1.<cluster-name>.<domain> iburst server openshift-master-2.<cluster-name>.<domain> iburst stratumweight 0 driftfile /var/lib/chrony/drift rtcsync makestep 10 3 bindcmdaddress 127.0.0.1 bindcmdaddress ::1 keyfile /etc/chrony.keys commandkey 1 generatecommandkey noclientlog logchange 0.5 logdir /var/log/chrony # Configure the control plane nodes to serve as local NTP servers # for all compute nodes, even if they are not in sync with an # upstream NTP server. # Allow NTP client access from the local network. allow all # Serve time even if not synchronized to a time source. local stratum 3 orphan 1 You must replace <cluster-name> with the name of the cluster and replace <domain> with the fully qualified domain name. Use Butane to generate a MachineConfig object file, 99-master-chrony-conf-override.yaml , containing the configuration to be delivered to the control plane nodes: USD butane 99-master-chrony-conf-override.bu -o 99-master-chrony-conf-override.yaml Create a Butane config, 99-worker-chrony-conf-override.bu , including the contents of the chrony.conf file for the compute nodes that references the NTP servers on the control plane nodes. Butane config example variant: openshift version: 4.16.0 metadata: name: 99-worker-chrony-conf-override labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | # The Machine Config Operator manages this file. server openshift-master-0.<cluster-name>.<domain> iburst 1 server openshift-master-1.<cluster-name>.<domain> iburst server openshift-master-2.<cluster-name>.<domain> iburst stratumweight 0 driftfile /var/lib/chrony/drift rtcsync makestep 10 3 bindcmdaddress 127.0.0.1 bindcmdaddress ::1 keyfile /etc/chrony.keys commandkey 1 generatecommandkey noclientlog logchange 0.5 logdir /var/log/chrony 1 You must replace <cluster-name> with the name of the cluster and replace <domain> with the fully qualified domain name. Use Butane to generate a MachineConfig object file, 99-worker-chrony-conf-override.yaml , containing the configuration to be delivered to the worker nodes: USD butane 99-worker-chrony-conf-override.bu -o 99-worker-chrony-conf-override.yaml 3.13.3. 
Configuring network components to run on the control plane You can configure networking components to run exclusively on the control plane nodes. By default, OpenShift Container Platform allows any node in the machine config pool to host the ingressVIP virtual IP address. However, some environments deploy compute nodes in separate subnets from the control plane nodes, which requires configuring the ingressVIP virtual IP address to run on the control plane nodes. Important When deploying remote nodes in separate subnets, you must place the ingressVIP virtual IP address exclusively with the control plane nodes. Procedure Change to the directory storing the install-config.yaml file: USD cd ~/clusterconfigs Switch to the manifests subdirectory: USD cd manifests Create a file named cluster-network-avoid-workers-99-config.yaml : USD touch cluster-network-avoid-workers-99-config.yaml Open the cluster-network-avoid-workers-99-config.yaml file in an editor and enter a custom resource (CR) that describes the Operator configuration: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: 50-worker-fix-ipi-rwn labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/kubernetes/manifests/keepalived.yaml mode: 0644 contents: source: data:, This manifest places the ingressVIP virtual IP address on the control plane nodes. Additionally, this manifest deploys the following processes on the control plane nodes only: openshift-ingress-operator keepalived Save the cluster-network-avoid-workers-99-config.yaml file. Create a manifests/cluster-ingress-default-ingresscontroller.yaml file: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/master: "" Consider backing up the manifests directory. The installer deletes the manifests/ directory when creating the cluster. Modify the cluster-scheduler-02-config.yml manifest to make the control plane nodes schedulable by setting the mastersSchedulable field to true . Control plane nodes are not schedulable by default. For example: Note If control plane nodes are not schedulable after completing this procedure, deploying the cluster will fail. 3.13.4. Optional: Deploying routers on compute nodes During installation, the installation program deploys router pods on compute nodes. By default, the installation program installs two router pods. If a deployed cluster requires additional routers to handle external traffic loads destined for services within the OpenShift Container Platform cluster, you can create a yaml file to set an appropriate number of router replicas. Important Deploying a cluster with only one compute node is not supported. While modifying the router replicas will address issues with the degraded state when deploying with one compute node, the cluster loses high availability for the ingress API, which is not suitable for production environments. Note By default, the installation program deploys two routers. If the cluster has no compute nodes, the installation program deploys the two routers on the control plane nodes by default. 
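For reference, the mastersSchedulable edit described above in "Configuring network components to run on the control plane" generally amounts to the following change in the cluster-scheduler-02-config.yml manifest. This is a sketch only; the surrounding fields are assumed from the installer-generated Scheduler manifest, so verify them against the actual file in your manifests directory rather than copying this verbatim:

apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  name: cluster
spec:
  mastersSchedulable: true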
Procedure Create a router-replicas.yaml file: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: <num-of-router-pods> endpointPublishingStrategy: type: HostNetwork nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/worker: "" Note Replace <num-of-router-pods> with an appropriate value. If working with just one compute node, set replicas: to 1 . If working with more than 3 compute nodes, you can increase replicas: from the default value 2 as appropriate. Save and copy the router-replicas.yaml file to the clusterconfigs/openshift directory: USD cp ~/router-replicas.yaml clusterconfigs/openshift/99_router-replicas.yaml 3.13.5. Optional: Configuring the BIOS The following procedure configures the BIOS during the installation process. Procedure Create the manifests. Modify the BareMetalHost resource file corresponding to the node: USD vim clusterconfigs/openshift/99_openshift-cluster-api_hosts-*.yaml Add the BIOS configuration to the spec section of the BareMetalHost resource: spec: firmware: simultaneousMultithreadingEnabled: true sriovEnabled: true virtualizationEnabled: true Note Red Hat supports three BIOS configurations. Only servers with BMC type irmc are supported. Other types of servers are currently not supported. Create the cluster. Additional resources Bare metal configuration 3.13.6. Optional: Configuring the RAID The following procedure configures a redundant array of independent disks (RAID) using baseboard management controllers (BMCs) during the installation process. Note If you want to configure a hardware RAID for the node, verify that the node has a supported RAID controller. OpenShift Container Platform 4.16 does not support software RAID. Table 3.8. Hardware RAID support by vendor Vendor BMC and protocol Firmware version RAID levels Fujitsu iRMC N/A 0, 1, 5, 6, and 10 Dell iDRAC with Redfish Version 6.10.30.20 or later 0, 1, and 5 Procedure Create the manifests. Modify the BareMetalHost resource corresponding to the node: USD vim clusterconfigs/openshift/99_openshift-cluster-api_hosts-*.yaml Note The following example uses a hardware RAID configuration because OpenShift Container Platform 4.16 does not support software RAID. If you added a specific RAID configuration to the spec section, this causes the node to delete the original RAID configuration in the preparing phase and perform a specified configuration on the RAID. For example: spec: raid: hardwareRAIDVolumes: - level: "0" 1 name: "sda" numberOfPhysicalDisks: 1 rotational: true sizeGibibytes: 0 1 level is a required field, and the others are optional fields. If you added an empty RAID configuration to the spec section, the empty configuration causes the node to delete the original RAID configuration during the preparing phase, but does not perform a new configuration. For example: spec: raid: hardwareRAIDVolumes: [] If you do not add a raid field in the spec section, the original RAID configuration is not deleted, and no new configuration will be performed. Create the cluster. 3.13.7. Optional: Configuring storage on nodes You can make changes to operating systems on OpenShift Container Platform nodes by creating MachineConfig objects that are managed by the Machine Config Operator (MCO). The MachineConfig specification includes an ignition config for configuring the machines at first boot. 
This config object can be used to modify files, systemd services, and other operating system features running on OpenShift Container Platform machines. Procedure Use the ignition config to configure storage on nodes. The following MachineSet manifest example demonstrates how to add a partition to a device on a primary node. In this example, apply the manifest before installation to have a partition named recovery with a size of 16 GiB on the primary node. Create a custom-partitions.yaml file and include a MachineConfig object that contains your partition layout: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: primary name: 10_primary_storage_config spec: config: ignition: version: 3.2.0 storage: disks: - device: </dev/xxyN> partitions: - label: recovery startMiB: 32768 sizeMiB: 16384 filesystems: - device: /dev/disk/by-partlabel/recovery label: recovery format: xfs Save and copy the custom-partitions.yaml file to the clusterconfigs/openshift directory: USD cp ~/<MachineConfig_manifest> ~/clusterconfigs/openshift Additional resources Bare metal configuration Partition naming scheme 3.14. Creating a disconnected registry In some cases, you might want to install an OpenShift Container Platform cluster using a local copy of the installation registry. This could be for enhancing network efficiency because the cluster nodes are on a network that does not have access to the internet. A local, or mirrored, copy of the registry requires the following: A certificate for the registry node. This can be a self-signed certificate. A web server that a container on a system will serve. An updated pull secret that contains the certificate and local repository information. Note Creating a disconnected registry on a registry node is optional. If you need to create a disconnected registry on a registry node, you must complete all of the following sub-sections. Prerequisites If you have already prepared a mirror registry for Mirroring images for a disconnected installation , you can skip directly to Modify the install-config.yaml file to use the disconnected registry . 3.14.1. Preparing the registry node to host the mirrored registry The following steps must be completed prior to hosting a mirrored registry on bare metal. Procedure Open the firewall port on the registry node: USD sudo firewall-cmd --add-port=5000/tcp --zone=libvirt --permanent USD sudo firewall-cmd --add-port=5000/tcp --zone=public --permanent USD sudo firewall-cmd --reload Install the required packages for the registry node: USD sudo yum -y install python3 podman httpd httpd-tools jq Create the directory structure where the repository information will be held: USD sudo mkdir -p /opt/registry/{auth,certs,data} 3.14.2. Mirroring the OpenShift Container Platform image repository for a disconnected registry Complete the following steps to mirror the OpenShift Container Platform image repository for a disconnected registry. Prerequisites Your mirror host has access to the internet. You configured a mirror registry to use in your restricted network and can access the certificate and credentials that you configured. You downloaded the pull secret from Red Hat OpenShift Cluster Manager and modified it to include authentication to your mirror repository. Procedure Review the OpenShift Container Platform downloads page to determine the version of OpenShift Container Platform that you want to install and determine the corresponding tag on the Repository Tags page. 
Set the required environment variables: Export the release version: USD OCP_RELEASE=<release_version> For <release_version> , specify the tag that corresponds to the version of OpenShift Container Platform to install, such as 4.5.4 . Export the local registry name and host port: USD LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>' For <local_registry_host_name> , specify the registry domain name for your mirror repository, and for <local_registry_host_port> , specify the port that it serves content on. Export the local repository name: USD LOCAL_REPOSITORY='<local_repository_name>' For <local_repository_name> , specify the name of the repository to create in your registry, such as ocp4/openshift4 . Export the name of the repository to mirror: USD PRODUCT_REPO='openshift-release-dev' For a production release, you must specify openshift-release-dev . Export the path to your registry pull secret: USD LOCAL_SECRET_JSON='<path_to_pull_secret>' For <path_to_pull_secret> , specify the absolute path to and file name of the pull secret for your mirror registry that you created. Export the release mirror: USD RELEASE_NAME="ocp-release" For a production release, you must specify ocp-release . Export the type of architecture for your cluster: USD ARCHITECTURE=<cluster_architecture> 1 1 Specify the architecture of the cluster, such as x86_64 , aarch64 , s390x , or ppc64le . Export the path to the directory to host the mirrored images: USD REMOVABLE_MEDIA_PATH=<path> 1 1 Specify the full path, including the initial forward slash (/) character. Mirror the version images to the mirror registry: If your mirror host does not have internet access, take the following actions: Connect the removable media to a system that is connected to the internet. Review the images and configuration manifests to mirror: USD oc adm release mirror -a USD{LOCAL_SECRET_JSON} \ --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} \ --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} \ --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --dry-run Record the entire imageContentSources section from the output of the command. The information about your mirrors is unique to your mirrored repository, and you must add the imageContentSources section to the install-config.yaml file during installation. Mirror the images to a directory on the removable media: USD oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} Take the media to the restricted network environment and upload the images to the local container registry. USD oc image mirror -a USD{LOCAL_SECRET_JSON} --from-dir=USD{REMOVABLE_MEDIA_PATH}/mirror "file://openshift/release:USD{OCP_RELEASE}*" USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} 1 1 For REMOVABLE_MEDIA_PATH , you must use the same path that you specified when you mirrored the images. 
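In both the connected and disconnected mirroring paths, the imageContentSources section that you record and later add to the install-config.yaml file generally has the following shape. The registry host, port, and repository shown here are placeholders; use the exact values printed by your own oc adm release mirror output. The two source entries match the ones appended by the echo commands later in this chapter:

imageContentSources:
- mirrors:
  - <local_registry_host_name>:<local_registry_host_port>/<local_repository_name>
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - <local_registry_host_name>:<local_registry_host_port>/<local_repository_name>
  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev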
If the local container registry is connected to the mirror host, take the following actions: Directly push the release images to the local registry by using the following command: $ oc adm release mirror -a ${LOCAL_SECRET_JSON} \ --from=quay.io/${PRODUCT_REPO}/${RELEASE_NAME}:${OCP_RELEASE}-${ARCHITECTURE} \ --to=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY} \ --to-release-image=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}:${OCP_RELEASE}-${ARCHITECTURE} This command pulls the release information as a digest, and its output includes the imageContentSources data that you require when you install your cluster. Record the entire imageContentSources section from the output of the command. The information about your mirrors is unique to your mirrored repository, and you must add the imageContentSources section to the install-config.yaml file during installation. Note The image name gets patched to Quay.io during the mirroring process, and the podman images will show Quay.io in the registry on the bootstrap virtual machine. To create the installation program that is based on the content that you mirrored, extract it and pin it to the release: If your mirror host does not have internet access, run the following command: $ oc adm release extract -a ${LOCAL_SECRET_JSON} --command=openshift-baremetal-install "${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}:${OCP_RELEASE}" If the local container registry is connected to the mirror host, run the following command: $ oc adm release extract -a ${LOCAL_SECRET_JSON} --command=openshift-baremetal-install "${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}:${OCP_RELEASE}-${ARCHITECTURE}" Important To ensure that you use the correct images for the version of OpenShift Container Platform that you selected, you must extract the installation program from the mirrored content. You must perform this step on a machine with an active internet connection. If you are in a disconnected environment, use the --image flag as part of must-gather and point to the payload image. For clusters using installer-provisioned infrastructure, run the following command: $ openshift-baremetal-install 3.14.3. Modify the install-config.yaml file to use the disconnected registry On the provisioner node, the install-config.yaml file should use the newly created pull-secret from the pull-secret-update.txt file. The install-config.yaml file must also contain the disconnected registry node's certificate and registry information. Procedure Add the disconnected registry node's certificate to the install-config.yaml file: $ echo "additionalTrustBundle: |" >> install-config.yaml The certificate should follow the "additionalTrustBundle: |" line and be properly indented, usually by two spaces. $ sed -e 's/^/ /' /opt/registry/certs/domain.crt >> install-config.yaml Add the mirror information for the registry to the install-config.yaml file: $ echo "imageContentSources:" >> install-config.yaml $ echo "- mirrors:" >> install-config.yaml $ echo " - registry.example.com:5000/ocp4/openshift4" >> install-config.yaml Replace registry.example.com with the registry's fully qualified domain name. $ echo " source: quay.io/openshift-release-dev/ocp-release" >> install-config.yaml $ echo "- mirrors:" >> install-config.yaml $ echo " - registry.example.com:5000/ocp4/openshift4" >> install-config.yaml Replace registry.example.com with the registry's fully qualified domain name. $ echo " source: quay.io/openshift-release-dev/ocp-v4.0-art-dev" >> install-config.yaml 3.15.
Validation checklist for installation ❏ OpenShift Container Platform installer has been retrieved. ❏ OpenShift Container Platform installer has been extracted. ❏ Required parameters for the install-config.yaml have been configured. ❏ The hosts parameter for the install-config.yaml has been configured. ❏ The bmc parameter for the install-config.yaml has been configured. ❏ Conventions for the values configured in the bmc address field have been applied. ❏ Created the OpenShift Container Platform manifests. ❏ (Optional) Deployed routers on compute nodes. ❏ (Optional) Created a disconnected registry. ❏ (Optional) Validate disconnected registry settings if in use. | [
"useradd kni",
"passwd kni",
"echo \"kni ALL=(root) NOPASSWD:ALL\" | tee -a /etc/sudoers.d/kni",
"chmod 0440 /etc/sudoers.d/kni",
"su - kni -c \"ssh-keygen -t ed25519 -f /home/kni/.ssh/id_rsa -N ''\"",
"su - kni",
"sudo subscription-manager register --username=<user> --password=<pass> --auto-attach",
"sudo subscription-manager repos --enable=rhel-9-for-<architecture>-appstream-rpms --enable=rhel-9-for-<architecture>-baseos-rpms",
"sudo dnf install -y libvirt qemu-kvm mkisofs python3-devel jq ipmitool",
"sudo usermod --append --groups libvirt <user>",
"sudo systemctl start firewalld",
"sudo firewall-cmd --zone=public --add-service=http --permanent",
"sudo firewall-cmd --reload",
"sudo systemctl enable libvirtd --now",
"sudo virsh pool-define-as --name default --type dir --target /var/lib/libvirt/images",
"sudo virsh pool-start default",
"sudo virsh pool-autostart default",
"vim pull-secret.txt",
"chronyc sources",
"MS Name/IP address Stratum Poll Reach LastRx Last sample =============================================================================== ^+ time.cloudflare.com 3 10 377 187 -209us[ -209us] +/- 32ms ^+ t1.time.ir2.yahoo.com 2 10 377 185 -4382us[-4382us] +/- 23ms ^+ time.cloudflare.com 3 10 377 198 -996us[-1220us] +/- 33ms ^* brenbox.westnet.ie 1 10 377 193 -9538us[-9761us] +/- 24ms",
"ping time.cloudflare.com",
"PING time.cloudflare.com (162.159.200.123) 56(84) bytes of data. 64 bytes from time.cloudflare.com (162.159.200.123): icmp_seq=1 ttl=54 time=32.3 ms 64 bytes from time.cloudflare.com (162.159.200.123): icmp_seq=2 ttl=54 time=30.9 ms 64 bytes from time.cloudflare.com (162.159.200.123): icmp_seq=3 ttl=54 time=36.7 ms",
"export PUB_CONN=<baremetal_nic_name>",
"sudo nohup bash -c \" nmcli con down \\\"USDPUB_CONN\\\" nmcli con delete \\\"USDPUB_CONN\\\" # RHEL 8.1 appends the word \\\"System\\\" in front of the connection, delete in case it exists nmcli con down \\\"System USDPUB_CONN\\\" nmcli con delete \\\"System USDPUB_CONN\\\" nmcli connection add ifname baremetal type bridge <con_name> baremetal bridge.stp no 1 nmcli con add type bridge-slave ifname \\\"USDPUB_CONN\\\" master baremetal pkill dhclient;dhclient baremetal \"",
"sudo nohup bash -c \" nmcli con down \\\"USDPUB_CONN\\\" nmcli con delete \\\"USDPUB_CONN\\\" # RHEL 8.1 appends the word \\\"System\\\" in front of the connection, delete in case it exists nmcli con down \\\"System USDPUB_CONN\\\" nmcli con delete \\\"System USDPUB_CONN\\\" nmcli connection add ifname baremetal type bridge con-name baremetal bridge.stp no ipv4.method manual ipv4.addr \"x.x.x.x/yy\" ipv4.gateway \"a.a.a.a\" ipv4.dns \"b.b.b.b\" 1 nmcli con add type bridge-slave ifname \\\"USDPUB_CONN\\\" master baremetal nmcli con up baremetal \"",
"export PROV_CONN=<prov_nic_name>",
"sudo nohup bash -c \" nmcli con down \\\"USDPROV_CONN\\\" nmcli con delete \\\"USDPROV_CONN\\\" nmcli connection add ifname provisioning type bridge con-name provisioning nmcli con add type bridge-slave ifname \\\"USDPROV_CONN\\\" master provisioning nmcli connection modify provisioning ipv6.addresses fd00:1101::1/64 ipv6.method manual nmcli con down provisioning nmcli con up provisioning \"",
"nmcli connection modify provisioning ipv4.addresses 172.22.0.254/24 ipv4.method manual",
"ssh kni@provisioner.<cluster-name>.<domain>",
"sudo nmcli con show",
"NAME UUID TYPE DEVICE baremetal 4d5133a5-8351-4bb9-bfd4-3af264801530 bridge baremetal provisioning 43942805-017f-4d7d-a2c2-7cb3324482ed bridge provisioning virbr0 d9bca40f-eee1-410b-8879-a2d4bb0465e7 bridge virbr0 bridge-slave-eno1 76a8ed50-c7e5-4999-b4f6-6d9014dd0812 ethernet eno1 bridge-slave-eno2 f31c3353-54b7-48de-893a-02d2b34c4736 ethernet eno2",
"interfaces: - name: enp2s0 1 type: ethernet 2 state: up 3 ipv4: enabled: false 4 ipv6: enabled: false - name: br-ex type: ovs-bridge state: up ipv4: enabled: false dhcp: false ipv6: enabled: false dhcp: false bridge: port: - name: enp2s0 5 - name: br-ex - name: br-ex type: ovs-interface state: up copy-mac-from: enp2s0 ipv4: enabled: true dhcp: true ipv6: enabled: false dhcp: false",
"cat <nmstate_configuration>.yaml | base64 1",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 10-br-ex-worker 2 spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,<base64_encoded_nmstate_configuration> 3 mode: 0644 overwrite: true path: /etc/nmstate/openshift/cluster.yml",
"oc edit mc <machineconfig_custom_resource_name>",
"oc apply -f ./extraworker-secret.yaml",
"apiVersion: metal3.io/v1alpha1 kind: BareMetalHost spec: preprovisioningNetworkDataName: ostest-extraworker-0-network-config-secret",
"oc project openshift-machine-api",
"oc get machinesets",
"oc scale machineset <machineset_name> --replicas=<n> 1",
"sudo su -",
"nmcli dev status",
"nmcli connection modify <interface_name> +ipv4.routes \"192.168.0.0/24 via <gateway>\"",
"nmcli connection modify eth0 +ipv4.routes \"192.168.0.0/24 via 192.168.0.1\"",
"nmcli connection up <interface_name>",
"ip route",
"sudo su -",
"nmcli dev status",
"nmcli connection modify <interface_name> +ipv4.routes \"10.0.0.0/24 via <gateway>\"",
"nmcli connection modify eth0 +ipv4.routes \"10.0.0.0/24 via 10.0.0.1\"",
"nmcli connection up <interface_name>",
"ip route",
"ping <remote_node_ip_address>",
"ping <control_plane_node_ip_address>",
"export VERSION=stable-4.16",
"export RELEASE_ARCH=<architecture>",
"export RELEASE_IMAGE=USD(curl -s https://mirror.openshift.com/pub/openshift-v4/USDRELEASE_ARCH/clients/ocp/USDVERSION/release.txt | grep 'Pull From: quay.io' | awk -F ' ' '{print USD3}')",
"export cmd=openshift-baremetal-install",
"export pullsecret_file=~/pull-secret.txt",
"export extract_dir=USD(pwd)",
"curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDVERSION/openshift-client-linux.tar.gz | tar zxvf - oc",
"sudo cp oc /usr/local/bin",
"oc adm release extract --registry-config \"USD{pullsecret_file}\" --command=USDcmd --to \"USD{extract_dir}\" USD{RELEASE_IMAGE}",
"sudo cp openshift-baremetal-install /usr/local/bin",
"sudo dnf install -y podman",
"sudo firewall-cmd --add-port=8080/tcp --zone=public --permanent",
"sudo firewall-cmd --reload",
"mkdir /home/kni/rhcos_image_cache",
"sudo semanage fcontext -a -t httpd_sys_content_t \"/home/kni/rhcos_image_cache(/.*)?\"",
"sudo restorecon -Rv /home/kni/rhcos_image_cache/",
"export RHCOS_QEMU_URI=USD(/usr/local/bin/openshift-baremetal-install coreos print-stream-json | jq -r --arg ARCH \"USD(arch)\" '.architectures[USDARCH].artifacts.qemu.formats[\"qcow2.gz\"].disk.location')",
"export RHCOS_QEMU_NAME=USD{RHCOS_QEMU_URI##*/}",
"export RHCOS_QEMU_UNCOMPRESSED_SHA256=USD(/usr/local/bin/openshift-baremetal-install coreos print-stream-json | jq -r --arg ARCH \"USD(arch)\" '.architectures[USDARCH].artifacts.qemu.formats[\"qcow2.gz\"].disk[\"uncompressed-sha256\"]')",
"curl -L USD{RHCOS_QEMU_URI} -o /home/kni/rhcos_image_cache/USD{RHCOS_QEMU_NAME}",
"ls -Z /home/kni/rhcos_image_cache",
"podman run -d --name rhcos_image_cache \\ 1 -v /home/kni/rhcos_image_cache:/var/www/html -p 8080:8080/tcp registry.access.redhat.com/ubi9/httpd-24",
"export BAREMETAL_IP=USD(ip addr show dev baremetal | awk '/inet /{print USD2}' | cut -d\"/\" -f1)",
"export BOOTSTRAP_OS_IMAGE=\"http://USD{BAREMETAL_IP}:8080/USD{RHCOS_QEMU_NAME}?sha256=USD{RHCOS_QEMU_UNCOMPRESSED_SHA256}\"",
"echo \" bootstrapOSImage=USD{BOOTSTRAP_OS_IMAGE}\"",
"platform: baremetal: bootstrapOSImage: <bootstrap_os_image> 1",
"Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10",
"Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10",
"Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10",
"listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.100:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2",
"listen api-server-6443 bind *:6443 mode tcp server master-00 192.168.83.89:6443 check inter 1s server master-01 192.168.84.90:6443 check inter 1s server master-02 192.168.85.99:6443 check inter 1s server bootstrap 192.168.80.89:6443 check inter 1s listen machine-config-server-22623 bind *:22623 mode tcp server master-00 192.168.83.89:22623 check inter 1s server master-01 192.168.84.90:22623 check inter 1s server master-02 192.168.85.99:22623 check inter 1s server bootstrap 192.168.80.89:22623 check inter 1s listen ingress-router-80 bind *:80 mode tcp balance source server worker-00 192.168.83.100:80 check inter 1s server worker-01 192.168.83.101:80 check inter 1s listen ingress-router-443 bind *:443 mode tcp balance source server worker-00 192.168.83.100:443 check inter 1s server worker-01 192.168.83.101:443 check inter 1s listen ironic-api-6385 bind *:6385 mode tcp balance source server master-00 192.168.83.89:6385 check inter 1s server master-01 192.168.84.90:6385 check inter 1s server master-02 192.168.85.99:6385 check inter 1s server bootstrap 192.168.80.89:6385 check inter 1s listen inspector-api-5050 bind *:5050 mode tcp balance source server master-00 192.168.83.89:5050 check inter 1s server master-01 192.168.84.90:5050 check inter 1s server master-02 192.168.85.99:5050 check inter 1s server bootstrap 192.168.80.89:5050 check inter 1s",
"curl https://<loadbalancer_ip_address>:6443/version --insecure",
"{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }",
"curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure",
"HTTP/1.1 200 OK Content-Length: 0",
"curl -I -L -H \"Host: console-openshift-console.apps.<cluster_name>.<base_domain>\" http://<load_balancer_front_end_IP_address>",
"HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache",
"curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain>",
"HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"<load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End",
"<load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End",
"platform: baremetal: loadBalancer: type: UserManaged 1 apiVIPs: - <api_ip> 2 ingressVIPs: - <ingress_ip> 3",
"curl https://api.<cluster_name>.<base_domain>:6443/version --insecure",
"{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }",
"curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure",
"HTTP/1.1 200 OK Content-Length: 0",
"curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure",
"HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure",
"HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"apiVersion: v1 baseDomain: <domain> metadata: name: <cluster_name> networking: machineNetwork: - cidr: <public_cidr> networkType: OVNKubernetes compute: - name: worker replicas: 2 1 controlPlane: name: master replicas: 3 platform: baremetal: {} platform: baremetal: apiVIPs: - <api_ip> ingressVIPs: - <wildcard_ip> provisioningNetworkCIDR: <CIDR> bootstrapExternalStaticIP: <bootstrap_static_ip_address> 2 bootstrapExternalStaticGateway: <bootstrap_static_gateway> 3 bootstrapExternalStaticDNS: <bootstrap_static_dns> 4 hosts: - name: openshift-master-0 role: master bmc: address: ipmi://<out_of_band_ip> 5 username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: \"<installation_disk_drive_path>\" 6 - name: <openshift_master_1> role: master bmc: address: ipmi://<out_of_band_ip> username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: \"<installation_disk_drive_path>\" - name: <openshift_master_2> role: master bmc: address: ipmi://<out_of_band_ip> username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: \"<installation_disk_drive_path>\" - name: <openshift_worker_0> role: worker bmc: address: ipmi://<out_of_band_ip> username: <user> password: <password> bootMACAddress: <NIC1_mac_address> - name: <openshift_worker_1> role: worker bmc: address: ipmi://<out_of_band_ip> username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: \"<installation_disk_drive_path>\" pullSecret: '<pull_secret>' sshKey: '<ssh_pub_key>'",
"ironic-inspector inspection failed: No disks satisfied root device hints",
"mkdir ~/clusterconfigs",
"cp install-config.yaml ~/clusterconfigs",
"ipmitool -I lanplus -U <user> -P <password> -H <management-server-ip> power off",
"for i in USD(sudo virsh list | tail -n +3 | grep bootstrap | awk {'print USD2'}); do sudo virsh destroy USDi; sudo virsh undefine USDi; sudo virsh vol-delete USDi --pool USDi; sudo virsh vol-delete USDi.ign --pool USDi; sudo virsh pool-destroy USDi; sudo virsh pool-undefine USDi; done",
"metadata: name:",
"networking: machineNetwork: - cidr:",
"compute: - name: worker",
"compute: replicas: 2",
"controlPlane: name: master",
"controlPlane: replicas: 3",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: ipmi://<out-of-band-ip> username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> disableCertificateVerification: True",
"export SERVER=<ip_address> 1",
"export SystemID=<system_id> 1",
"curl -u USDUSER:USDPASS -X POST -H'Content-Type: application/json' -H'Accept: application/json' -d '{\"ResetType\": \"On\"}' https://USDSERVER/redfish/v1/Systems/USDSystemID/Actions/ComputerSystem.Reset",
"curl -u USDUSER:USDPASS -X POST -H'Content-Type: application/json' -H'Accept: application/json' -d '{\"ResetType\": \"ForceOff\"}' https://USDSERVER/redfish/v1/Systems/USDSystemID/Actions/ComputerSystem.Reset",
"curl -u USDUSER:USDPASS -X PATCH -H \"Content-Type: application/json\" -H \"If-Match: <ETAG>\" https://USDServer/redfish/v1/Systems/USDSystemID/ -d '{\"Boot\": {\"BootSourceOverrideTarget\": \"pxe\", \"BootSourceOverrideEnabled\": \"Once\"}}",
"curl -u USDUSER:USDPASS -X PATCH -H \"Content-Type: application/json\" -H \"If-Match: <ETAG>\" https://USDServer/redfish/v1/Systems/USDSystemID/ -d '{\"Boot\": {\"BootSourceOverrideMode\":\"UEFI\"}}",
"curl -u USDUSER:USDPASS -X PATCH -H \"Content-Type: application/json\" -H \"If-Match: <ETAG>\" https://USDServer/redfish/v1/Systems/USDSystemID/ -d '{\"Boot\": {\"BootSourceOverrideTarget\": \"cd\", \"BootSourceOverrideEnabled\": \"Once\"}}'",
"curl -u USDUSER:USDPASS -X POST -H \"Content-Type: application/json\" https://USDServer/redfish/v1/Managers/USDManagerID/VirtualMedia/USDVmediaId -d '{\"Image\": \"https://example.com/test.iso\", \"TransferProtocolType\": \"HTTPS\", \"UserName\": \"\", \"Password\":\"\"}'",
"curl -u USDUSER:USDPASS -X PATCH -H \"Content-Type: application/json\" -H \"If-Match: <ETAG>\" https://USDServer/redfish/v1/Managers/USDManagerID/VirtualMedia/USDVmediaId -d '{\"Image\": \"https://example.com/test.iso\", \"TransferProtocolType\": \"HTTPS\", \"UserName\": \"\", \"Password\":\"\"}'",
"platform: baremetal: hosts: - name: <hostname> role: <master | worker> bmc: address: <address> 1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: idrac-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: idrac-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password> disableCertificateVerification: True",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password> disableCertificateVerification: True",
"platform: baremetal: hosts: - name: <hostname> role: <master | worker> bmc: address: <address> 1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> disableCertificateVerification: True",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> disableCertificateVerification: True",
"platform: baremetal: hosts: - name: <hostname> role: <master | worker> bmc: address: <address> 1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: irmc://<out-of-band-ip> username: <user> password: <password>",
"platform: baremetal: hosts: - name: <hostname> role: <master | worker> bmc: address: <address> 1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish-virtualmedia://<server_kvm_ip>/redfish/v1/Systems/<serial_number> username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish-virtualmedia://<server_kvm_ip>/redfish/v1/Systems/<serial_number> username: <user> password: <password> disableCertificateVerification: True",
"- name: master-0 role: master bmc: address: ipmi://10.10.0.3:6203 username: admin password: redhat bootMACAddress: de:ad:be:ef:00:40 rootDeviceHints: deviceName: \"/dev/sda\"",
"apiVersion: v1 baseDomain: <domain> proxy: httpProxy: http://USERNAME:[email protected]:PORT httpsProxy: https://USERNAME:[email protected]:PORT noProxy: <WILDCARD_OF_DOMAIN>,<PROVISIONING_NETWORK/CIDR>,<BMC_ADDRESS_RANGE/CIDR>",
"noProxy: .example.com,172.22.0.0/24,10.10.0.0/24",
"platform: baremetal: apiVIPs: - <api_VIP> ingressVIPs: - <ingress_VIP> provisioningNetwork: \"Disabled\" 1",
"machineNetwork: - cidr: {{ extcidrnet }} - cidr: {{ extcidrnet6 }} clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd02::/48 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd03::/112",
"networkConfig: nmstate: interfaces: - name: <interface_name> wait-ip: ipv4+ipv6",
"platform: baremetal: apiVIPs: - <api_ipv4> - <api_ipv6> ingressVIPs: - <wildcard_ipv4> - <wildcard_ipv6>",
"interfaces: - name: <nic1_name> 1 type: ethernet state: up ipv4: address: - ip: <ip_address> 2 prefix-length: 24 enabled: true dns-resolver: config: server: - <dns_ip_address> 3 routes: config: - destination: 0.0.0.0/0 next-hop-address: <next_hop_ip_address> 4 next-hop-interface: <next_hop_nic1_name> 5",
"nmstatectl gc <nmstate_yaml_file>",
"hosts: - name: openshift-master-0 role: master bmc: address: redfish+http://<out_of_band_ip>/redfish/v1/Systems/ username: <user> password: <password> disableCertificateVerification: null bootMACAddress: <NIC1_mac_address> bootMode: UEFI rootDeviceHints: deviceName: \"/dev/sda\" networkConfig: 1 interfaces: - name: <nic1_name> 2 type: ethernet state: up ipv4: address: - ip: <ip_address> 3 prefix-length: 24 enabled: true dns-resolver: config: server: - <dns_ip_address> 4 routes: config: - destination: 0.0.0.0/0 next-hop-address: <next_hop_ip_address> 5 next-hop-interface: <next_hop_nic1_name> 6",
"networking: machineNetwork: - cidr: 10.0.0.0/24 - cidr: 192.168.0.0/24 networkType: OVNKubernetes",
"networkConfig: interfaces: - name: <interface_name> 1 type: ethernet state: up ipv4: enabled: true dhcp: false address: - ip: <node_ip> 2 prefix-length: 24 gateway: <gateway_ip> 3 dns-resolver: config: server: - <dns_ip> 4",
"interfaces: - name: eth0 ipv6: addr-gen-mode: <address_mode> 1",
"nmstatectl gc <nmstate_yaml_file> 1",
"hosts: - name: openshift-master-0 role: master bmc: address: redfish+http://<out_of_band_ip>/redfish/v1/Systems/ username: <user> password: <password> disableCertificateVerification: null bootMACAddress: <NIC1_mac_address> bootMode: UEFI rootDeviceHints: deviceName: \"/dev/sda\" networkConfig: interfaces: - name: eth0 ipv6: addr-gen-mode: <address_mode> 1",
"hosts: - name: worker-0 role: worker bmc: address: redfish+http://<out_of_band_ip>/redfish/v1/Systems/ username: <user> password: <password> disableCertificateVerification: false bootMACAddress: <NIC1_mac_address> bootMode: UEFI networkConfig: 1 interfaces: 2 - name: eno1 3 type: ethernet 4 state: up mac-address: 0c:42:a1:55:f3:06 ipv4: enabled: true dhcp: false 5 ethernet: sr-iov: total-vfs: 2 6 ipv6: enabled: false dhcp: false - name: sriov:eno1:0 type: ethernet state: up 7 ipv4: enabled: false 8 ipv6: enabled: false - name: sriov:eno1:1 type: ethernet state: down - name: eno2 type: ethernet state: up mac-address: 0c:42:a1:55:f3:07 ipv4: enabled: true ethernet: sr-iov: total-vfs: 2 ipv6: enabled: false - name: sriov:eno2:0 type: ethernet state: up ipv4: enabled: false ipv6: enabled: false - name: sriov:eno2:1 type: ethernet state: down - name: bond0 type: bond state: up min-tx-rate: 100 9 max-tx-rate: 200 10 link-aggregation: mode: active-backup 11 options: primary: sriov:eno1:0 12 port: - sriov:eno1:0 - sriov:eno2:0 ipv4: address: - ip: 10.19.16.57 13 prefix-length: 23 dhcp: false enabled: true ipv6: enabled: false dns-resolver: config: server: - 10.11.5.160 - 10.2.70.215 routes: config: - destination: 0.0.0.0/0 next-hop-address: 10.19.17.254 next-hop-interface: bond0 14 table-id: 254",
"hosts: - name: ostest-master-0 [...] networkConfig: &BOND interfaces: - name: bond0 type: bond state: up ipv4: dhcp: true enabled: true link-aggregation: mode: active-backup port: - enp2s0 - enp3s0 - name: ostest-master-1 [...] networkConfig: *BOND - name: ostest-master-2 [...] networkConfig: *BOND",
"hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out_of_band_ip> 1 username: <username> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: \"/dev/sda\" bootMode: UEFISecureBoot 2",
"./openshift-baremetal-install --dir ~/clusterconfigs create manifests",
"INFO Consuming Install Config from target directory WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings WARNING Discarding the OpenShift Manifest that was provided in the target directory because its dependencies are dirty and it needs to be regenerated",
"sudo dnf -y install butane",
"variant: openshift version: 4.16.0 metadata: name: 99-master-chrony-conf-override labels: machineconfiguration.openshift.io/role: master storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | # Use public servers from the pool.ntp.org project. # Please consider joining the pool (https://www.pool.ntp.org/join.html). # The Machine Config Operator manages this file server openshift-master-0.<cluster-name>.<domain> iburst 1 server openshift-master-1.<cluster-name>.<domain> iburst server openshift-master-2.<cluster-name>.<domain> iburst stratumweight 0 driftfile /var/lib/chrony/drift rtcsync makestep 10 3 bindcmdaddress 127.0.0.1 bindcmdaddress ::1 keyfile /etc/chrony.keys commandkey 1 generatecommandkey noclientlog logchange 0.5 logdir /var/log/chrony # Configure the control plane nodes to serve as local NTP servers # for all compute nodes, even if they are not in sync with an # upstream NTP server. # Allow NTP client access from the local network. allow all # Serve time even if not synchronized to a time source. local stratum 3 orphan",
"butane 99-master-chrony-conf-override.bu -o 99-master-chrony-conf-override.yaml",
"variant: openshift version: 4.16.0 metadata: name: 99-worker-chrony-conf-override labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | # The Machine Config Operator manages this file. server openshift-master-0.<cluster-name>.<domain> iburst 1 server openshift-master-1.<cluster-name>.<domain> iburst server openshift-master-2.<cluster-name>.<domain> iburst stratumweight 0 driftfile /var/lib/chrony/drift rtcsync makestep 10 3 bindcmdaddress 127.0.0.1 bindcmdaddress ::1 keyfile /etc/chrony.keys commandkey 1 generatecommandkey noclientlog logchange 0.5 logdir /var/log/chrony",
"butane 99-worker-chrony-conf-override.bu -o 99-worker-chrony-conf-override.yaml",
"cd ~/clusterconfigs",
"cd manifests",
"touch cluster-network-avoid-workers-99-config.yaml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: 50-worker-fix-ipi-rwn labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/kubernetes/manifests/keepalived.yaml mode: 0644 contents: source: data:,",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/master: \"\"",
"sed -i \"s;mastersSchedulable: false;mastersSchedulable: true;g\" clusterconfigs/manifests/cluster-scheduler-02-config.yml",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: <num-of-router-pods> endpointPublishingStrategy: type: HostNetwork nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/worker: \"\"",
"cp ~/router-replicas.yaml clusterconfigs/openshift/99_router-replicas.yaml",
"vim clusterconfigs/openshift/99_openshift-cluster-api_hosts-*.yaml",
"spec: firmware: simultaneousMultithreadingEnabled: true sriovEnabled: true virtualizationEnabled: true",
"vim clusterconfigs/openshift/99_openshift-cluster-api_hosts-*.yaml",
"spec: raid: hardwareRAIDVolumes: - level: \"0\" 1 name: \"sda\" numberOfPhysicalDisks: 1 rotational: true sizeGibibytes: 0",
"spec: raid: hardwareRAIDVolumes: []",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: primary name: 10_primary_storage_config spec: config: ignition: version: 3.2.0 storage: disks: - device: </dev/xxyN> partitions: - label: recovery startMiB: 32768 sizeMiB: 16384 filesystems: - device: /dev/disk/by-partlabel/recovery label: recovery format: xfs",
"cp ~/<MachineConfig_manifest> ~/clusterconfigs/openshift",
"sudo firewall-cmd --add-port=5000/tcp --zone=libvirt --permanent",
"sudo firewall-cmd --add-port=5000/tcp --zone=public --permanent",
"sudo firewall-cmd --reload",
"sudo yum -y install python3 podman httpd httpd-tools jq",
"sudo mkdir -p /opt/registry/{auth,certs,data}",
"OCP_RELEASE=<release_version>",
"LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>'",
"LOCAL_REPOSITORY='<local_repository_name>'",
"PRODUCT_REPO='openshift-release-dev'",
"LOCAL_SECRET_JSON='<path_to_pull_secret>'",
"RELEASE_NAME=\"ocp-release\"",
"ARCHITECTURE=<cluster_architecture> 1",
"REMOVABLE_MEDIA_PATH=<path> 1",
"oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --dry-run",
"oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE}",
"oc image mirror -a USD{LOCAL_SECRET_JSON} --from-dir=USD{REMOVABLE_MEDIA_PATH}/mirror \"file://openshift/release:USD{OCP_RELEASE}*\" USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} 1",
"oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}",
"oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-baremetal-install \"USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}\"",
"oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-baremetal-install \"USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}\"",
"openshift-baremetal-install",
"echo \"additionalTrustBundle: |\" >> install-config.yaml",
"sed -e 's/^/ /' /opt/registry/certs/domain.crt >> install-config.yaml",
"echo \"imageContentSources:\" >> install-config.yaml",
"echo \"- mirrors:\" >> install-config.yaml",
"echo \" - registry.example.com:5000/ocp4/openshift4\" >> install-config.yaml",
"echo \" source: quay.io/openshift-release-dev/ocp-release\" >> install-config.yaml",
"echo \"- mirrors:\" >> install-config.yaml",
"echo \" - registry.example.com:5000/ocp4/openshift4\" >> install-config.yaml",
"echo \" source: quay.io/openshift-release-dev/ocp-v4.0-art-dev\" >> install-config.yaml"
] | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/deploying_installer-provisioned_clusters_on_bare_metal/ipi-install-installation-workflow |
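As an optional sanity check that is not part of the procedure above, you can confirm that the mirror registry is reachable and serving the mirrored repository before running the installer. The registry host name, port, and credentials below are placeholders; substitute the values used when the disconnected registry was created.
# Confirm the registry presents the expected TLS certificate
$ openssl s_client -connect registry.example.com:5000 -showcerts </dev/null
# List the repositories served by the registry (Docker Registry HTTP API v2)
$ curl -u <user>:<password> https://registry.example.com:5000/v2/_catalog
If the catalog lists ocp4/openshift4 and the certificate matches the one added to additionalTrustBundle, the mirror is ready for the install-config.yaml settings described above.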
10.8. Configuring VLAN switchport mode | 10.8. Configuring VLAN switchport mode Red Hat Enterprise Linux machines are often used as routers and enable an advanced VLAN configuration on their network interfaces. You need to set switchport mode when the Ethernet interface is connected to a switch and there are VLANs running over the physical interface. A Red Hat Enterprise Linux server or workstation is usually connected to only one VLAN, which makes switchport mode access suitable, and the default setting. In certain scenarios, multiple tagged VLANs use the same physical link, that is Ethernet between the switch and Red Hat Enterprise Linux machine, which requires switchport mode trunk to be configured on both ends. For example, when a Red Hat Enterprise Linux machine is used as a router, the machine needs to forward tagged packets from the various VLANs behind the router to the switch over the same physical Ethernet, and maintain separation between those VLANs. With the setup described, for example, in Section 10.3, "Configure 802.1Q VLAN Tagging Using the Command Line Tool, nmcli" , use the Cisco switchport mode trunk . If you only set an IP address on an interface, use Cisco switchport mode access . | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/sec-configure_802_1q_vlan_tagging-configuring-vlan-switchpport-mode |
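As an illustrative sketch only (the interface name and VLAN IDs below are assumptions, not values from this section): when the switch port is set to switchport mode trunk, each tagged VLAN is usually represented on the Red Hat Enterprise Linux side by its own VLAN connection, for example with nmcli:
# Create a tagged interface for VLAN 10 on top of the physical NIC enp1s0
$ nmcli connection add type vlan con-name vlan10 ifname enp1s0.10 dev enp1s0 id 10
# Repeat for each additional VLAN carried on the trunk, for example VLAN 20
$ nmcli connection add type vlan con-name vlan20 ifname enp1s0.20 dev enp1s0 id 20
With switchport mode access, no VLAN interfaces are required and the IP address is configured directly on the physical interface.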
5.8. Using Certificate Transparency | 5.8. Using Certificate Transparency Certificate System provides a basic version of Certificate Transparency (CT) V1 support (rfc 6962). It has the capability of issuing certificates with embedded Signed Certificate Timestamps (SCTs) from any trusted log where each deployment site chooses to have its root CA cert included. You can also configure the system to support multiple CT logs. A minimum of one trusted CT log is required for this feature to work. Important It is the responsibility of the deployment site to establish its trust relationship with a trusted CT log server. For more information on how to configure Certificate Transparency, see the Configuring Certificate Transparency section in the Red Hat Certificate System Planning, Installation, and Deployment Guide . 5.8.1. Testing Certificate Transparency As an example of how to test a CT setup, the following procedure describes an actual test against Google CT test logs. A more comprehensive test procedure would involve setting up a TLS server and testing for the inclusion of its certs from its specified CT logs. However, the following serves as a quick test that checks for inclusion of the SCT extension once a certificate has been issued. The test procedure consists of generating and submitting a Certificate Signing Request (CSR), in order to verify its SCT extension using openssl . The test configuration in the CS.cfg file is as follows: First, generate a CSR, e.g. with PKCS10Client . Next, submit the CSR to an enrollment profile depending on the CT mode defined by the ca.certTransparency.mode parameter in CS.cfg : if the parameter is set to enabled , use any enrollment profile if the parameter is set to perProfile , use one of the CT profiles: e.g. caServerCertWithSCT Copy the issued b64 cert into a file, e.g. ct1.pem . Convert the PEM to binary: Display the DER certificate content: Observe that the SCT extension is present, e.g.: Alternatively, verify the SCT by running an asn1 dump: and observe the hex dump, e.g.: | [
"ca.certTransparency.mode=enabled ca.certTransparency.log.1.enable=true ca.certTransparency.log.1.pubKey=MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEw8i8S7qiGEs9NXv0ZJFh6uuOm<snip> ca.certTransparency.log.1.url=http://ct.googleapis.com:80/testtube/ ca.certTransparency.log.1.version=1 ca.certTransparency.log.2.enable=true ca.certTransparency.log.2.pubKey=MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEKATl2B3SAbxyzGOfNRB+AytNTG<snip> ca.certTransparency.log.2.url=http://ct.googleapis.com:80/logs/crucible/ ca.certTransparency.log.2.version=1 ca.certTransparency.log.3.enable=false ca.certTransparency.log.3.pubKey=MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEiKfWtuoWCPMEzSKySjMjXpo38W<snip> ca.certTransparency.log.3.url=http://ct.googleapis.com:80/logs/solera2020/ ca.certTransparency.log.3.version=1 ca.certTransparency.log.num=3",
"PKCS10Client -d . -p passwd -l 2048 -n \"cn= user.test.domain.com ,OU= user-TEST , O=TestDomain \" -o pkcs10-TLS.req",
"AtoB ct1.pem ct1.bin",
"openssl x509 -noout -text -inform der -in ct1.bin",
"CT Precertificate SCTs: Signed Certificate Timestamp: Version : v1 (0x0) Log ID : B0:CC:83:E5:A5:F9:7D:6B:AF:7C:09:CC:28:49:04:87: 2A:C7:E8:8B:13:2C:63:50:B7:C6:FD:26:E1:6C:6C:77 Timestamp : Jun 11 23:07:14.146 2020 GMT Extensions: none Signature : ecdsa-with-SHA256 30:44:02:20:6E:E7:DC:D6:6B:A6:43:E3:BB:8E:1D:28: 63:C6:6B:03:43:4E:7A:90:0F:D6:2B:E8:ED:55:1D:5F: 86:0C:5A:CE:02:20:53:EB:75:FA:75:54:9C:9F:D3:7A: D4:E7:C6:6C:9B:33:2A:75:D8:AB:DE:7D:B9:FA:2B:19: 56:22:BB:EF:19:AD Signed Certificate Timestamp: Version : v1 (0x0) Log ID : C3:BF:03:A7:E1:CA:88:41:C6:07:BA:E3:FF:42:70:FC: A5:EC:45:B1:86:EB:BE:4E:2C:F3:FC:77:86:30:F5:F6 Timestamp : Jun 11 23:07:14.516 2020 GMT Extensions: none Signature : ecdsa-with-SHA256 30:44:02:20:4A:C9:4D:EF:64:02:A7:69:FF:34:4E:41: F4:87:E1:6D:67:B9:07:14:E6:01:47:C2:0A:72:88:7A: A9:C3:9C:90:02:20:31:26:15:75:60:1E:E2:C0:A3:C2: ED:CF:22:A0:3B:A4:10:86:D1:C1:A3:7F:68:CC:1A:DD: 6A:5E:10:B2:F1:8F",
"openssl asn1parse -i -inform der -in ct1.bin",
"740:d=4 hl=4 l= 258 cons: SEQUENCE 744:d=5 hl=2 l= 10 prim: OBJECT :CT Precertificate SCTs 756:d=5 hl=3 l= 243 prim: OCTET STRING [HEX DUMP]:0481F000EE007500B0CC83E5A5F97D6B<snip>"
] | https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/certificate_transparency |
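As an optional reachability check that is not part of the procedure above, you can query a configured CT log before issuing test certificates. The URL below corresponds to the first log in the example CS.cfg, using the standard RFC 6962 get-sth endpoint:
# Fetch the signed tree head from the Google testtube log referenced by ca.certTransparency.log.1.url
$ curl http://ct.googleapis.com/testtube/ct/v1/get-sth
A JSON response containing tree_size and sha256_root_hash indicates that the log is reachable; an unreachable log typically causes CT-enabled enrollment to fail.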
Chapter 3. Internal storage services | Chapter 3. Internal storage services Red Hat OpenShift Data Foundation service is available for consumption internally to the Red Hat OpenShift Container Platform that runs on the following infrastructure: Amazon Web Services (AWS) Bare metal VMware vSphere Microsoft Azure Google Cloud [Technology Preview] Red Hat Virtualization 4.4.x or higher (installer-provisioned infrastructure) Red Hat OpenStack 13 or higher (installer-provisioned infrastructure) [Technology Preview] IBM Power IBM Z and IBM(R) LinuxONE Creation of an internal cluster resource results in the internal provisioning of the OpenShift Data Foundation base services, and makes additional storage classes available to the applications. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/planning_your_deployment/internal-storage-services_rhodf |
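As a brief illustration, assuming the oc client and an existing deployment (neither is covered in this chapter): the additional storage classes mentioned above can be listed after the internal cluster resource is created.
# List storage classes; the OpenShift Data Foundation classes appear alongside the platform defaults
$ oc get storageclass
The exact class names depend on the chosen infrastructure and deployment options.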
25.5. Storing a Service Secret in a Vault | 25.5. Storing a Service Secret in a Vault This section shows how an administrator can use vaults to securely store a service secret in a centralized location. The service secret is encrypted with the service public key. The service then retrieves the secret using its private key on any machine in the domain. Only the service and the administrator are allowed to access the secret. This section includes these procedures: Section 25.5.1, "Creating a User Vault to Store a Service Password" Section 25.5.2, "Provisioning a Service Password from a User Vault to Service Instances" Section 25.5.3, "Retrieving a Service Password for a Service Instance" Section 25.5.4, "Changing Service Vault Password" In the procedures: admin is the administrator who manages the service password http_password is the name of the private user vault created by the administrator password.txt is the file containing the service password password_vault is the vault created for the service HTTP/server.example.com is the service whose password is being archived service-public.pem is the service public key used to encrypt the password stored in password_vault 25.5.1. Creating a User Vault to Store a Service Password Create an administrator-owned user vault, and use it to store the service password. The vault type is standard, which ensures the administrator is not required to authenticate when accessing the contents of the vault. Log in as the administrator: Create a standard user vault: Archive the service password into the vault: Warning After archiving the password into the vault, delete password.txt from your system. 25.5.2. Provisioning a Service Password from a User Vault to Service Instances Using an asymmetric vault created for the service, provision the service password to a service instance. Log in as the administrator: Obtain the public key of the service instance. For example, using the openssl utility: Generate the service-private.pem private key. Generate the service-public.pem public key based on the private key. Create an asymmetric vault as the service instance vault, and provide the public key: The password archived into the vault will be protected with the key. Retrieve the service password from the administrator's private vault, and then archive it into the new service vault: This encrypts the password with the service instance public key. Warning After archiving the password into the vault, delete password.txt from your system. Repeat these steps for every service instance that requires the password. Create a new asymmetric vault for each service instance. 25.5.3. Retrieving a Service Password for a Service Instance A service instance can retrieve the service vault password using the locally-stored service private key. Log in as the administrator: Obtain a Kerberos ticket for the service: Retrieve the service vault password: 25.5.4. Changing Service Vault Password If a service instance is compromised, isolate it by changing the service vault password and then re-provisioning the new password to non-compromised service instances only. Archive the new password in the administrator's user vault: This overwrites the current password stored in the vault. Re-provision the new password to each service instance excluding the compromised instance. Retrieve the new password from the administrator's vault: Archive the new password into the service instance vault: Warning After archiving the password into the vault, delete password.txt from your system. | [
"kinit admin",
"ipa vault-add http_password --type standard --------------------------- Added vault \"http_password\" --------------------------- Vault name: http_password Type: standard Owner users: admin Vault user: admin",
"ipa vault-archive http_password --in password.txt ---------------------------------------- Archived data into vault \"http_password\" ----------------------------------------",
"kinit admin",
"openssl genrsa -out service-private.pem 2048 Generating RSA private key, 2048 bit long modulus .+++ ...........................................+++ e is 65537 (0x10001)",
"openssl rsa -in service-private.pem -out service-public.pem -pubout writing RSA key",
"ipa vault-add password_vault --service HTTP/server.example.com --type asymmetric --public-key-file service-public.pem ---------------------------- Added vault \"password_vault\" ---------------------------- Vault name: password_vault Type: asymmetric Public key: LS0tLS1C...S0tLS0tCg== Owner users: admin Vault service: HTTP/[email protected]",
"ipa vault-retrieve http_password --out password.txt ----------------------------------------- Retrieved data from vault \"http_password\" -----------------------------------------",
"ipa vault-archive password_vault --service HTTP/server.example.com --in password.txt ----------------------------------- Archived data into vault \"password_vault\" -----------------------------------",
"kinit admin",
"kinit HTTP/server.example.com -k -t /etc/httpd/conf/ipa.keytab",
"ipa vault-retrieve password_vault --service HTTP/server.example.com --private-key-file service-private.pem --out password.txt ------------------------------------ Retrieved data from vault \"password_vault\" ------------------------------------",
"ipa vault-archive http_password --in new_password.txt ---------------------------------------- Archived data into vault \"http_password\" ----------------------------------------",
"ipa vault-retrieve http_password --out password.txt ----------------------------------------- Retrieved data from vault \"http_password\" -----------------------------------------",
"ipa vault-archive password_vault --service HTTP/server.example.com --in password.txt ----------------------------------- Archived data into vault \"password_vault\" -----------------------------------"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/vault-service |
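A small optional check, not part of the original procedure: before provisioning the password, you can display the service vault to confirm it was created with the expected type and service principal. The names below are the ones used in this section:
# Display the asymmetric service vault and confirm its type and vault service principal
$ ipa vault-show password_vault --service HTTP/server.example.com
Seeing Type: asymmetric and the expected Vault service value confirms that the archive and retrieve steps will use the intended key pair.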
Chapter 1. Overview | Chapter 1. Overview Red Hat OpenShift AI is an artificial intelligence (AI) platform that provides tools to rapidly train, serve, and monitor machine learning (ML) models onsite, in the public cloud, or at the edge. OpenShift AI provides a powerful AI/ML platform for building AI-enabled applications. Data scientists and MLOps engineers can collaborate to move from experiment to production in a consistent environment quickly. You can deploy OpenShift AI on any supported version of OpenShift, whether on-premise, in the cloud, or in disconnected environments. For details on supported versions, see Red Hat OpenShift AI: Supported Configurations . 1.1. Data science workflow For the purpose of getting you started with OpenShift AI, Figure 1 illustrates a simplified data science workflow. The real world process of developing ML models is an iterative one. The simplified data science workflow for predictive AI use cases includes the following tasks: Defining your business problem and setting goals to solve it. Gathering, cleaning, and preparing data. Data often has to be federated from a range of sources, and exploring and understanding data plays a key role in the success of a data science project. Evaluating and selecting ML models for your business use case. Train models for your business use case by tuning model parameters based on your set of training data. In practice, data scientists train a range of models, and compare performance while considering tradeoffs such as time and memory constraints. Integrate models into an application, including deployment and testing. After model training, the step of the workflow is production. Data scientists are often responsible for putting the model in production and making it accessible so that a developer can integrate the model into an application. Monitor and manage deployed models. Depending on the organization, data scientists, data engineers, or ML engineers must monitor the performance of models in production, tracking prediction and performance metrics. Refine and retrain models. Data scientists can evaluate model performance results and refine models to improve outcome by excluding or including features, changing the training data, and modifying other configuration parameters. 1.2. About this guide This guide assumes you are familiar with data science and ML Ops concepts. It describes the following tasks to get you started with using OpenShift AI: Log in to the OpenShift AI dashboard Create a data science project If you have data stored in Object Storage, configure a connection to more easily access it Create a workbench and choose an IDE, such as JupyterLab or code-server, for your data scientist development work Learn where to get information about the steps: Developing and training a model Automating the workflow with pipelines Implementing distributed workloads Testing your model Deploying your model Monitoring and managing your model See also OpenShift AI tutorial: Fraud detection example . It provides step-by-step guidance for using OpenShift AI to develop and train an example model in JupyterLab, deploy the model, and refine the model by using automated pipelines. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/getting_started_with_red_hat_openshift_ai_cloud_service/overview-for-getting-started_get-started |
4.11. Fence Virt (Serial/VMChannel Mode) | 4.11. Fence Virt (Serial/VMChannel Mode) Table 4.12, "Fence virt (Serial/VMChannel Mode)" lists the fence device parameters used by fence_virt , the fence agent for virtual machines using VM channel or serial mode . Table 4.12. Fence virt (Serial/VMChannel Mode) luci Field cluster.conf Attribute Description Name name A name for the Fence virt fence device. Serial Device serial_device On the host, the serial device must be mapped in each domain's configuration file. For more information, see the fence_virt man page. If this field is specified, it causes the fence_virt fencing agent to operate in serial mode. Not specifying a value causes the fence_virt fencing agent to operate in VM channel mode. Serial Parameters serial_params The serial parameters. The default is 115200, 8N1. VM Channel IP Address channel_address The channel IP. The default value is 10.0.2.179. Timeout (optional) timeout Fencing timeout, in seconds. The default value is 30. Domain port (formerly domain ) Virtual machine (domain UUID or name) to fence. ipport The channel port. The default value is 1229, which is the value used when configuring this fence device with luci . Delay (optional) delay Fencing delay, in seconds. The fence agent will wait the specified number of seconds before attempting a fencing operation. The default value is 0. The following command creates a fence device instance for virtual machines using serial mode. The following is the cluster.conf entry for the fence_virt device: | [
"ccs -f cluster.conf --addfencedev fencevirt1 agent=fence_virt serial_device=/dev/ttyS1 serial_params=19200, 8N1",
"<fencedevices> <fencedevice agent=\"fence_virt\" name=\"fencevirt1\" serial_device=\"/dev/ttyS1\" serial_params=\"19200, 8N1\"/> </fencedevices>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/fence_configuration_guide/s1-software-fence-virt-CA |
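As an optional verification sketch, not part of the original section: before relying on the device for fencing, you can check that the fencing agent can see the guest domains over the configured serial device. The device path below matches the example above, but confirm the exact options against the fence_virt man page for your release:
# List the virtual machines visible to the fencing agent over the serial device
$ fence_virt -D /dev/ttyS1 -o list
If the domain to be fenced does not appear, revisit the serial device mapping in each domain's configuration file.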
Chapter 4. Failover, load-balancing, and high-availability in IdM | Chapter 4. Failover, load-balancing, and high-availability in IdM Identity Management (IdM) has built-in failover mechanisms for IdM clients, and load-balancing and high-availability features for IdM servers. Client-side failover capability By default, the SSSD service on an IdM client is configured to use DNS service (SRV) resource records so that the client can automatically determine the best IdM server to connect to. Primary and backup server configuration The server resolution behavior is controlled by the _srv_ option in the ipa_server parameter of the /etc/sssd/sssd.conf file: Example /etc/sssd/sssd.conf With the _srv_ option specified, SSSD retrieves a list of IdM servers ordered by preference. If a primary server goes offline, the SSSD service on the IdM client automatically connects to another available IdM server. Primary servers are specified in the ipa_server parameter. SSSD attempts to connect to primary servers first and switches to backup servers only if no primary servers are available. The _srv_ option is not supported for backup servers. Note SSSD queries SRV records from the DNS server. By default, SSSD waits for 6 seconds for a reply from the DNS resolver before attempting to query another DNS server. If all DNS servers are unreachable, the domain will continue to operate in offline mode. You can use the dns_resolver_timeout option to increase the time the client waits for a reply from the DNS resolver. If you prefer to bypass DNS lookups for performance reasons, remove the _srv_ entry from the ipa_server parameter and specify which IdM servers the client should connect to, in order of preference: Example /etc/sssd/sssd.conf Failover behavior for IdM servers and services SSSD failover mechanism treats an IdM server and its services independently. If the hostname resolution for a server succeeds, SSSD considers the machine is online and tries to connect to the required service on that machine. If the connection to the service fails, SSSD considers only that specific service as offline, not the entire machine or other services on it. If hostname resolution fails, SSSD considers the entire machine as offline, and does not attempt to connect to any services on that machine. When all primary servers are unavailable, SSSD attempts to connect to a configured backup server. While connected to a backup server, SSSD periodically attempts to reconnect to one of the primary servers and connects immediately once a primary server becomes available. The interval between these attempts is controlled by the failover_primary_timeout option , which defaults to 31 seconds. If all IdM servers become unreachable, SSSD switches to offline mode. In this state, SSSD retries connections every 30 seconds until a server becomes available. Server-side load-balancing and service availability You can achieve load-balancing and high-availability in IdM by installing multiple IdM replicas: If you have a geographically dispersed network, you can shorten the path between IdM clients and the nearest accessible server by configuring multiple IdM replicas per data center. Red Hat supports environments with up to 60 replicas. The IdM replication mechanism provides active/active service availability: services at all IdM replicas are readily available at the same time. Note Red Hat recommends against combining IdM and other load-balancing or high-availability (HA) software. 
Many third-party high-availability solutions assume active/passive scenarios and cause unnecessary interruptions to IdM service availability. Other solutions use virtual IPs or a single hostname per clustered service. None of these methods typically works well with the type of service availability provided by the IdM solution. They also integrate very poorly with Kerberos, decreasing the overall security and stability of the deployment. Additional resources sssd.conf(5) man page on your system | [
"[domain/example.com] id_provider = ipa ipa_server = _srv_ , server1.example.com, server2.example.com ipa_backup_server = backup1.example.com, backup2.example.com",
"[domain/example.com] id_provider = ipa ipa_server = server1.example.com, server2.example.com ipa_backup_server = backup1.example.com, backup2.example.com"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/tuning_performance_in_identity_management/failover-load-balancing-high-availability_tuning-performance-in-idm |
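The following is a minimal sketch of how the two timeout options named in this chapter can be tuned in the [domain] section of /etc/sssd/sssd.conf. The dns_resolver_timeout and failover_primary_timeout options come from the chapter text; the specific values shown are illustrative assumptions, not recommended settings:

[domain/example.com]
id_provider = ipa
ipa_server = _srv_, server1.example.com, server2.example.com
ipa_backup_server = backup1.example.com, backup2.example.com
# assumed value for illustration: wait up to 10 seconds for a DNS resolver reply (default is 6)
dns_resolver_timeout = 10
# assumed value for illustration: retry the primary servers every 60 seconds while on a backup (default is 31)
failover_primary_timeout = 60

After editing sssd.conf, restart the SSSD service, for example with systemctl restart sssd, for the changes to take effect.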
GitOps | GitOps OpenShift Container Platform 4.14 A declarative way to implement continuous deployment for cloud native applications. Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html-single/gitops/index |
Chapter 11. Custom credential types | Chapter 11. Custom credential types As a system administrator, you can define a custom credential type in a standard format by using a YAML or JSON-like definition. You can define a custom credential type that works in ways similar to existing credential types. For example, a custom credential type can inject an API token for a third-party web service into an environment variable, for your playbook or custom inventory script to consume. Custom credentials support the following ways of injecting their authentication information: Environment variables Ansible extra variables File-based templating, which means generating .ini or .conf files that contain credential values You can attach one SSH and multiple cloud credentials to a job template. Each cloud credential must be of a different type. Only one of each type of credential is permitted. Vault credentials and machine credentials are separate entities. Note When creating a new credential type, you must avoid collisions in the extra_vars , env , and file namespaces. Environment variable or extra variable names must not start with ANSIBLE_ because they are reserved. You must have System administrator (superuser) permissions to be able to create and edit a credential type ( CredentialType ) and to be able to view the CredentialType.injection field. 11.1. Content sourcing from collections A "managed" credential type of kind=galaxy represents a content source for fetching collections defined in requirements.yml when project updates are run. Examples of content sources are galaxy.ansible.com, console.redhat.com, or on-premise automation hub. This new credential type represents a URL and (optional) authentication details necessary to construct the environment variables when a project update runs ansible-galaxy collection install as described in the Ansible documentation, Configuring the ansible-galaxy client . It has fields that map directly to the configuration options exposed to the Ansible Galaxy CLI, for example, per-server. An endpoint in the API reflects an ordered list of these credentials at the Organization level: /api/v2/organizations/N/galaxy_credentials/ When installations of automation controller migrate existing Galaxy-oriented setting values, post-upgrade proper credentials are created and attached to every Organization. After upgrading to the latest version, every organization that existed before upgrade now has a list of one or more "Galaxy" credentials associated with it. Additionally, post-upgrade, these settings are not visible (or editable) from the /api/v2/settings/jobs/ endpoint. Automation controller continues to fetch roles directly from public Galaxy even if galaxy.ansible.com is not the first credential in the list for the organization. The global Galaxy settings are no longer configured at the jobs level, but at the organization level in the user interface. The organization's Add and Edit windows have an optional Credential lookup field for credentials of kind=galaxy . It is important to specify the order of these credentials as order sets precedence for the sync and lookup of the content. For more information, see Creating an organization . For more information about how to set up a project by using collections, see Using Collections with automation hub . 11.2. Backwards-Compatible API considerations Support for version 2 of the API ( api/v2/ ) means a one-to-many relationship for job templates to credentials (including multicloud support). 
You can filter credentials the v2 API: curl "https://controller.example.org/api/v2/credentials/?credential_type__namespace=aws" In the V2 Credential Type model, the relationships are defined as follows: Machine SSH Vault Vault Network Sets environment variables, for example ANSIBLE_NET_AUTHORIZE SCM Source Control Cloud EC2, AWS Cloud Lots of others Insights Insights Galaxy galaxy.ansible.com, console.redhat.com Galaxy on-premise automation hub 11.3. Content verification Automation controller uses GNU Privacy Guard (GPG) to verify content. For more information, see The GNU Privacy Handbook . 11.4. Getting started with credential types From the navigation panel, select Administration Credential Types . If no custom credential types have been created, the Credential Types prompts you to add one. If credential types have been created, this page displays a list of existing and available Credential Types. To view more information about a credential type, click the name of a credential or the Edit icon. Each credential type displays its own unique configurations in the Input Configuration field and the Injector Configuration field, if applicable. Both YAML and JSON formats are supported in the configuration fields. 11.5. Creating a new credential type To create a new credential type: Procedure In the Credential Types view, click Add . Enter the appropriate details in the Name and Description field. Note When creating a new credential type, do not use reserved variable names that start with ANSIBLE_ for the INPUT and INJECTOR names and IDs, as they are invalid for custom credential types. In the Input Configuration field, specify an input schema that defines a set of ordered fields for that type. The format can be in YAML or JSON: YAML fields: - type: string id: username label: Username - type: string id: password label: Password secret: true required: - username - password View more YAML examples at the YAML page . JSON { "fields": [ { "type": "string", "id": "username", "label": "Username" }, { "secret": true, "type": "string", "id": "password", "label": "Password" } ], "required": ["username", "password"] } View more JSON examples at The JSON website . The following configuration in JSON format shows each field and how they are used: { "fields": [{ "id": "api_token", # required - a unique name used to reference the field value "label": "API Token", # required - a unique label for the field "help_text": "User-facing short text describing the field.", "type": ("string" | "boolean") # defaults to 'string' "choices": ["A", "B", "C"] # (only applicable to `type=string`) "format": "ssh_private_key" # optional, can be used to enforce data format validity for SSH private key data (only applicable to `type=string`) "secret": true, # if true, the field value will be encrypted "multiline": false # if true, the field should be rendered as multi-line for input entry # (only applicable to `type=string`) },{ # field 2... },{ # field 3... }], "required": ["api_token"] # optional; one or more fields can be marked as required }, When type=string , fields can optionally specify multiple choice options: { "fields": [{ "id": "api_token", # required - a unique name used to reference the field value "label": "API Token", # required - a unique label for the field "type": "string", "choices": ["A", "B", "C"] }] }, In the Injector Configuration field, enter environment variables or extra variables that specify the values a credential type can inject. The format can be in YAML or JSON (see examples in the step). 
The following configuration in JSON format shows each field and how they are used: { "file": { "template": "[mycloud]\ntoken={{ api_token }}" }, "env": { "THIRD_PARTY_CLOUD_API_TOKEN": "{{ api_token }}" }, "extra_vars": { "some_extra_var": "{{ username }}:{{ password }}" } } Credential Types can also generate temporary files to support .ini files or certificate or key data: { "file": { "template": "[mycloud]\ntoken={{ api_token }}" }, "env": { "MY_CLOUD_INI_FILE": "{{ tower.filename }}" } } In this example, automation controller writes a temporary file that has: [mycloud]\ntoken=SOME_TOKEN_VALUE The absolute file path to the generated file is stored in an environment variable named MY_CLOUD_INI_FILE . The following is an example of referencing many files in a custom credential template: Inputs { "fields": [{ "id": "cert", "label": "Certificate", "type": "string" },{ "id": "key", "label": "Key", "type": "string" }] } Injectors { "file": { "template.cert_file": "[mycert]\n{{ cert }}", "template.key_file": "[mykey]\n{{ key }}" }, "env": { "MY_CERT_INI_FILE": "{{ tower.filename.cert_file }}", "MY_KEY_INI_FILE": "{{ tower.filename.key_file }}" } } Click Save . Your newly created credential type is displayed on the list of credential types: Click the Edit icon to modify the credential type options. Note In the Edit screen, you can modify the details or delete the credential. If the Delete option is disabled, this means that the credential type is being used by a credential, and you must delete the credential type from all the credentials that use it before you can delete it. Verification Verify that the newly created credential type can be selected from the Credential Type selection window when creating a new credential: Additional resources For information about how to create a new credential, see Creating a credential . | [
"/api/v2/organizations/N/galaxy_credentials/",
"curl \"https://controller.example.org/api/v2/credentials/?credential_type__namespace=aws\"",
"fields: - type: string id: username label: Username - type: string id: password label: Password secret: true required: - username - password",
"{ \"fields\": [ { \"type\": \"string\", \"id\": \"username\", \"label\": \"Username\" }, { \"secret\": true, \"type\": \"string\", \"id\": \"password\", \"label\": \"Password\" } ], \"required\": [\"username\", \"password\"] }",
"{ \"fields\": [{ \"id\": \"api_token\", # required - a unique name used to reference the field value \"label\": \"API Token\", # required - a unique label for the field \"help_text\": \"User-facing short text describing the field.\", \"type\": (\"string\" | \"boolean\") # defaults to 'string' \"choices\": [\"A\", \"B\", \"C\"] # (only applicable to `type=string`) \"format\": \"ssh_private_key\" # optional, can be used to enforce data format validity for SSH private key data (only applicable to `type=string`) \"secret\": true, # if true, the field value will be encrypted \"multiline\": false # if true, the field should be rendered as multi-line for input entry # (only applicable to `type=string`) },{ # field 2 },{ # field 3 }], \"required\": [\"api_token\"] # optional; one or more fields can be marked as required },",
"{ \"fields\": [{ \"id\": \"api_token\", # required - a unique name used to reference the field value \"label\": \"API Token\", # required - a unique label for the field \"type\": \"string\", \"choices\": [\"A\", \"B\", \"C\"] }] },",
"{ \"file\": { \"template\": \"[mycloud]\\ntoken={{ api_token }}\" }, \"env\": { \"THIRD_PARTY_CLOUD_API_TOKEN\": \"{{ api_token }}\" }, \"extra_vars\": { \"some_extra_var\": \"{{ username }}:{{ password }}\" } }",
"{ \"file\": { \"template\": \"[mycloud]\\ntoken={{ api_token }}\" }, \"env\": { \"MY_CLOUD_INI_FILE\": \"{{ tower.filename }}\" } }",
"[mycloud]\\ntoken=SOME_TOKEN_VALUE",
"{ \"fields\": [{ \"id\": \"cert\", \"label\": \"Certificate\", \"type\": \"string\" },{ \"id\": \"key\", \"label\": \"Key\", \"type\": \"string\" }] }",
"{ \"file\": { \"template.cert_file\": \"[mycert]\\n{{ cert }}\", \"template.key_file\": \"[mykey]\\n{{ key }}\" }, \"env\": { \"MY_CERT_INI_FILE\": \"{{ tower.filename.cert_file }}\", \"MY_KEY_INI_FILE\": \"{{ tower.filename.key_file }}\" } }"
] | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/automation_controller_user_guide/assembly-controller-custom-credentials |
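To show how the Input Configuration and Injector Configuration described above fit together, here is a minimal sketch of a complete custom credential type for a hypothetical third-party web service. The field ID and variable names (api_token, MY_SERVICE_TOKEN, my_service_token) are illustrative assumptions rather than names from an existing credential type:

Input Configuration (YAML):
fields:
  - type: string
    id: api_token
    label: API Token
    secret: true
required:
  - api_token

Injector Configuration (YAML):
env:
  MY_SERVICE_TOKEN: '{{ api_token }}'
extra_vars:
  my_service_token: '{{ api_token }}'

A job that uses a credential of this type can then read the token from the MY_SERVICE_TOKEN environment variable or the my_service_token extra variable. Note that the environment variable deliberately does not start with ANSIBLE_, which is reserved.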
Using the Data Grid Command Line Interface | Using the Data Grid Command Line Interface Red Hat Data Grid 8.5 Access and manage remote caches with the Data Grid CLI Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/using_the_data_grid_command_line_interface/index |
Chapter 76. KafkaClientAuthenticationOAuth schema reference | Chapter 76. KafkaClientAuthenticationOAuth schema reference Used in: KafkaBridgeSpec , KafkaConnectSpec , KafkaMirrorMaker2ClusterSpec , KafkaMirrorMakerConsumerSpec , KafkaMirrorMakerProducerSpec Full list of KafkaClientAuthenticationOAuth schema properties To configure OAuth client authentication, set the type property to oauth . OAuth authentication can be configured using one of the following options: Client ID and secret Client ID and refresh token Access token Username and password TLS Client ID and secret You can configure the address of your authorization server in the tokenEndpointUri property together with the client ID and client secret used in authentication. The OAuth client will connect to the OAuth server, authenticate using the client ID and secret and get an access token which it will use to authenticate with the Kafka broker. In the clientSecret property, specify a link to a Secret containing the client secret. An example of OAuth client authentication using client ID and client secret authentication: type: oauth tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token clientId: my-client-id clientSecret: secretName: my-client-oauth-secret key: client-secret Optionally, scope and audience can be specified if needed. Client ID and refresh token You can configure the address of your OAuth server in the tokenEndpointUri property together with the OAuth client ID and refresh token. The OAuth client will connect to the OAuth server, authenticate using the client ID and refresh token and get an access token which it will use to authenticate with the Kafka broker. In the refreshToken property, specify a link to a Secret containing the refresh token. An example of OAuth client authentication using client ID and refresh token authentication: type: oauth tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token clientId: my-client-id refreshToken: secretName: my-refresh-token-secret key: refresh-token Access token You can configure the access token used for authentication with the Kafka broker directly. In this case, you do not specify the tokenEndpointUri . In the accessToken property, specify a link to a Secret containing the access token. An example of OAuth client authentication using only an access token authentication: type: oauth accessToken: secretName: my-access-token-secret key: access-token Username and password OAuth username and password configuration uses the OAuth Resource Owner Password Grant mechanism. The mechanism is deprecated, and is only supported to enable integration in environments where client credentials (ID and secret) cannot be used. You might need to use user accounts if your access management system does not support another approach or user accounts are required for authentication. A typical approach is to create a special user account in your authorization server that represents your client application. You then give the account a long randomly generated password and a very limited set of permissions. For example, the account can only connect to your Kafka cluster, but is not allowed to use any other services or login to the user interface. Consider using a refresh token mechanism first. You can configure the address of your authorization server in the tokenEndpointUri property together with the client ID, username and the password used in authentication. 
The OAuth client will connect to the OAuth server, authenticate using the username, the password, the client ID, and optionally even the client secret to obtain an access token which it will use to authenticate with the Kafka broker. In the passwordSecret property, specify a link to a Secret containing the password. Normally, you also have to configure a clientId using a public OAuth client. If you are using a confidential OAuth client, you also have to configure a clientSecret . An example of OAuth client authentication using username and a password with a public client authentication: type: oauth tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token username: my-username passwordSecret: secretName: my-password-secret-name password: my-password-field-name clientId: my-public-client-id An example of OAuth client authentication using a username and a password with a confidential client authentication: type: oauth tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token username: my-username passwordSecret: secretName: my-password-secret-name password: my-password-field-name clientId: my-confidential-client-id clientSecret: secretName: my-confidential-client-oauth-secret key: client-secret Optionally, scope and audience can be specified if needed. TLS Accessing the OAuth server using the HTTPS protocol does not require any additional configuration as long as the TLS certificates used by it are signed by a trusted certification authority and its hostname is listed in the certificate. If your OAuth server is using certificates which are self-signed or are signed by a certification authority which is not trusted, you can configure a list of trusted certificates in the custom resource. The tlsTrustedCertificates property contains a list of secrets with key names under which the certificates are stored. The certificates must be stored in X509 format. An example of TLS certificates provided authentication: type: oauth tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token clientId: my-client-id refreshToken: secretName: my-refresh-token-secret key: refresh-token tlsTrustedCertificates: - secretName: oauth-server-ca certificate: tls.crt The OAuth client will by default verify that the hostname of your OAuth server matches either the certificate subject or one of the alternative DNS names. If it is not required, you can disable the hostname verification. An example of disabled TLS hostname verification authentication: type: oauth tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token clientId: my-client-id refreshToken: secretName: my-refresh-token-secret key: refresh-token disableTlsHostnameVerification: true 76.1. KafkaClientAuthenticationOAuth schema properties The type property is a discriminator that distinguishes use of the KafkaClientAuthenticationOAuth type from KafkaClientAuthenticationTls , KafkaClientAuthenticationScramSha256 , KafkaClientAuthenticationScramSha512 , KafkaClientAuthenticationPlain . It must have the value oauth for the type KafkaClientAuthenticationOAuth . Property Description accessToken Link to OpenShift Secret containing the access token which was obtained from the authorization server. GenericSecretSource accessTokenIsJwt Configure whether access token should be treated as JWT. This should be set to false if the authorization server returns opaque tokens. Defaults to true . 
Type: boolean.
audience: OAuth audience to use when authenticating against the authorization server. Some authorization servers require the audience to be explicitly set. The possible values depend on how the authorization server is configured. By default, audience is not specified when performing the token endpoint request. Type: string.
clientId: OAuth Client ID which the Kafka client can use to authenticate against the OAuth server and use the token endpoint URI. Type: string.
clientSecret: Link to OpenShift Secret containing the OAuth client secret which the Kafka client can use to authenticate against the OAuth server and use the token endpoint URI. Type: GenericSecretSource.
connectTimeoutSeconds: The connect timeout in seconds when connecting to the authorization server. If not set, the effective connect timeout is 60 seconds. Type: integer.
disableTlsHostnameVerification: Enable or disable TLS hostname verification. Default value is false. Type: boolean.
enableMetrics: Enable or disable OAuth metrics. Default value is false. Type: boolean.
httpRetries: The maximum number of retries to attempt if an initial HTTP request fails. If not set, the default is to not attempt any retries. Type: integer.
httpRetryPauseMs: The pause to take before retrying a failed HTTP request. If not set, the default is to not pause at all but to immediately repeat a request. Type: integer.
maxTokenExpirySeconds: Set or limit the time-to-live of the access tokens to the specified number of seconds. This should be set if the authorization server returns opaque tokens. Type: integer.
passwordSecret: Reference to the Secret which holds the password. Type: PasswordSecretSource.
readTimeoutSeconds: The read timeout in seconds when connecting to the authorization server. If not set, the effective read timeout is 60 seconds. Type: integer.
refreshToken: Link to OpenShift Secret containing the refresh token which can be used to obtain the access token from the authorization server. Type: GenericSecretSource.
scope: OAuth scope to use when authenticating against the authorization server. Some authorization servers require this to be set. The possible values depend on how the authorization server is configured. By default, scope is not specified when doing the token endpoint request. Type: string.
tlsTrustedCertificates: Trusted certificates for the TLS connection to the OAuth server. Type: CertSecretSource array.
tokenEndpointUri: Authorization server token endpoint URI. Type: string.
type: Must be oauth. Type: string.
username: Username used for the authentication. Type: string. | [
"authentication: type: oauth tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token clientId: my-client-id clientSecret: secretName: my-client-oauth-secret key: client-secret",
"authentication: type: oauth tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token clientId: my-client-id refreshToken: secretName: my-refresh-token-secret key: refresh-token",
"authentication: type: oauth accessToken: secretName: my-access-token-secret key: access-token",
"authentication: type: oauth tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token username: my-username passwordSecret: secretName: my-password-secret-name password: my-password-field-name clientId: my-public-client-id",
"authentication: type: oauth tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token username: my-username passwordSecret: secretName: my-password-secret-name password: my-password-field-name clientId: my-confidential-client-id clientSecret: secretName: my-confidential-client-oauth-secret key: client-secret",
"authentication: type: oauth tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token clientId: my-client-id refreshToken: secretName: my-refresh-token-secret key: refresh-token tlsTrustedCertificates: - secretName: oauth-server-ca certificate: tls.crt",
"authentication: type: oauth tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token clientId: my-client-id refreshToken: secretName: my-refresh-token-secret key: refresh-token disableTlsHostnameVerification: true"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-kafkaclientauthenticationoauth-reference |
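The schema properties listed above can be combined in a single oauth authentication block. The following sketch extends the client ID and client secret example with connection tuning properties from the schema reference; the timeout and retry values are illustrative assumptions, not recommended settings:

authentication:
  type: oauth
  tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token
  clientId: my-client-id
  clientSecret:
    secretName: my-client-oauth-secret
    key: client-secret
  # assumed tuning values for illustration only
  connectTimeoutSeconds: 30
  readTimeoutSeconds: 30
  httpRetries: 2
  httpRetryPauseMs: 300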
Chapter 21. console | Chapter 21. console This chapter describes the commands under the console command.
21.1. console log show Show server's console output Usage:
Table 21.1. Positional arguments:
<server>: Server to show console log (name or ID)
Table 21.2. Command arguments:
-h, --help: Show this help message and exit
--lines <num-lines>: Number of lines to display from the end of the log (default=all)
21.2. console url show Show server's remote console URL Usage:
Table 21.3. Positional arguments:
<server>: Server to show URL (name or ID)
Table 21.4. Command arguments:
-h, --help: Show this help message and exit
--novnc: Show noVNC console URL (default)
--xvpvnc: Show XVP VNC console URL
--spice: Show SPICE console URL
--rdp: Show RDP console URL
--serial: Show serial console URL
--mks: Show WebMKS console URL
Table 21.5. Output formatter options:
-f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml}: The output format, defaults to table
-c COLUMN, --column COLUMN: Specify the column(s) to include, can be repeated to show multiple columns
Table 21.6. JSON formatter options:
--noindent: Whether to disable indenting the JSON
Table 21.7. Shell formatter options:
--prefix PREFIX: Add a prefix to all variable names
Table 21.8. Table formatter options:
--max-width <integer>: Maximum display width, <1 to disable. You can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence.
--fit-width: Fit the table to the display width. Implied if --max-width is greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable.
--print-empty: Print an empty table if there is no data to show. | [
"openstack console log show [-h] [--lines <num-lines>] <server>",
"openstack console url show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--novnc | --xvpvnc | --spice | --rdp | --serial | --mks] <server>"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/command_line_interface_reference/console |
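As a usage illustration of the console commands described above, the server name and line count below are placeholder assumptions, not values taken from this reference:

# Show only the last 50 lines of a server's console log
openstack console log show --lines 50 my-server

# Retrieve the SPICE console URL for the same server in JSON format
openstack console url show --spice -f json my-server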
About | About Red Hat Advanced Cluster Management for Kubernetes 2.11 About 2.11 | null | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.11/html/about/index |
Chapter 7. Deploying SR-IOV technologies | Chapter 7. Deploying SR-IOV technologies In your Red Hat OpenStack Platform NFV deployment, you can achieve higher performance with single root I/O virtualization (SR-IOV), when you configure direct access from your instances to a shared PCIe resource through virtual resources. 7.1. Configuring SR-IOV To deploy Red Hat OpenStack Platform (RHOSP) with single root I/O virtualization (SR-IOV), configure the shared PCIe resources that have SR-IOV capabilities that instances can request direct access to. Note The following CPU assignments, memory allocation, and NIC configurations are examples, and might be different from your use case. Prerequisites For details on how to install and configure the undercloud before deploying the overcloud, see the Director Installation and Usage guide. Note Do not manually edit any values in /etc/tuned/cpu-partitioning-variables.conf that director heat templates modify. Access to the undercloud host and credentials for the stack user. Procedure Log in to the undercloud as the stack user. Source the stackrc file: Generate a new roles data file named roles_data_compute_sriov.yaml that includes the Controller and ComputeSriov roles: ComputeSriov is a custom role provided with your RHOSP installation that includes the NeutronSriovAgent and NeutronSriovHostConfig services, in addition to the default compute services. To prepare the SR-IOV containers, include the neutron-sriov.yaml and roles_data_compute_sriov.yaml files when you generate the overcloud_images.yaml file. For more information on container image preparation, see Preparing container images in the Director Installation and Usage guide. Create a copy of the /usr/share/openstack-tripleo-heat-templates/environments/network-environment.yaml file in your environment file directory: Add the following parameters under parameter_defaults in your network-environment-sriov.yaml file to configure the SR-IOV nodes for your cluster and your hardware configuration: To determine the vendor_id and product_id for each PCI device type, use one of the following commands on the physical server that has the PCI cards: To return the vendor_id and product_id from a deployed overcloud, use the following command: To return the vendor_id and product_id of a physical function (PF) if you have not yet deployed the overcloud, use the following command: Configure role specific parameters for SR-IOV compute nodes in your network-environment-sriov.yaml file: Note The NovaVcpuPinSet parameter is now deprecated, and is replaced by NovaComputeCpuDedicatedSet for dedicated, pinned workloads. Configure the PCI passthrough devices for the SR-IOV compute nodes in your network-environment-sriov.yaml file: Replace <vendor_id> with the vendor ID of the PCI device. Replace <product_id> with the product ID of the PCI device. Replace <NIC_address> with the address of the PCI device. For information about how to configure the address parameter, see Guidelines for configuring NovaPCIPassthrough in the Configuring the Compute Service for Instance Creation guide. Replace <physical_network> with the name of the physical network the PCI device is located on. Note Do not use the devname parameter when you configure PCI passthrough because the device name of a NIC can change. To create a Networking service (neutron) port on a PF, specify the vendor_id , the product_id , and the PCI device address in NovaPCIPassthrough , and create the port with the --vnic-type direct-physical option. 
To create a Networking service port on a virtual function (VF), specify the vendor_id and product_id in NovaPCIPassthrough , and create the port with the --vnic-type direct option. The values of the vendor_id and product_id parameters might be different between physical function (PF) and VF contexts. For more information about how to configure NovaPCIPassthrough , see Guidelines for configuring NovaPCIPassthrough in the Configuring the Compute Service for Instance Creation guide. Configure the SR-IOV enabled interfaces in the compute.yaml network configuration template. To create SR-IOV VFs, configure the interfaces as standalone NICs: Note The numvfs parameter replaces the NeutronSriovNumVFs parameter in the network configuration templates. Red Hat does not support modification of the NeutronSriovNumVFs parameter or the numvfs parameter after deployment. If you modify either parameter after deployment, it might cause a disruption for the running instances that have an SR-IOV port on that PF. In this case, you must hard reboot these instances to make the SR-IOV PCI device available again. Ensure that the list of default filters includes the value AggregateInstanceExtraSpecsFilter : Run the overcloud_deploy.sh script. 7.2. Configuring NIC partitioning You can reduce the number of NICs that you need for each host by configuring single root I/O virtualization (SR-IOV) virtual functions (VFs) for Red Hat OpenStack Platform (RHOSP) management networks and provider networks. When you partition a single, high-speed NIC into multiple VFs, you can use the NIC for both control and data plane traffic. This feature has been validated on Intel Fortville NICs, and Mellanox CX-5 NICs. Procedure Open the NIC config file for your chosen role. Add an entry for the interface type sriov_pf to configure a physical function that the host can use: Replace <interface_name> with the name of the interface. Replace <number_of_vfs> with the number of VFs. Optional: Replace <true/false> with true to set promiscuous mode, or false to disable promiscuous mode. The default value is true . Note The numvfs parameter replaces the NeutronSriovNumVFs parameter in the network configuration templates. Red Hat does not support modification of the NeutronSriovNumVFs parameter or the numvfs parameter after deployment. If you modify either parameter after deployment, it might cause a disruption for the running instances that have an SR-IOV port on that physical function (PF). In this case, you must hard reboot these instances to make the SR-IOV PCI device available again. Add an entry for the interface type sriov_vf to configure virtual functions that the host can use: Replace <bond_type> with the required bond type, for example, linux_bond . You can apply VLAN tags on the bond for other bonds, such as ovs_bond . Replace <bonding_option> with one of the following supported bond modes: active-backup Balance-slb Note LACP bonds are not supported. Specify the sriov_vf as the interface type to bond in the members section. Note If you are using an OVS bridge as the interface type, you can configure only one OVS bridge on the sriov_vf of a sriov_pf device. More than one OVS bridge on a single sriov_pf device can result in packet duplication across VFs, and decreased performance. Replace <pf_device_name> with the name of the PF device. If you use a linux_bond , you must assign VLAN tags. If you set a VLAN tag, ensure that you set a unique tag for each VF associated with a single sriov_pf device. 
You cannot have two VFs from the same PF on the same VLAN. Replace <vf_id> with the ID of the VF. The applicable VF ID range starts at zero, and ends at the maximum number of VFs minus one. Disable spoof checking. Apply VLAN tags on the sriov_vf for linux_bond over VFs. To reserve VFs for instances, include the NovaPCIPassthrough parameter in an environment file, for example: Director identifies the host VFs, and derives the PCI addresses of the VFs that are available to the instance. Enable IOMMU on all nodes that require NIC partitioning. For example, if you want NIC Partitioning for Compute nodes, enable IOMMU using the KernelArgs parameter for that role: Note When you first add the KernelArgs parameter to the configuration of a role, the overcloud nodes are automatically rebooted. If required, you can disable the automatic rebooting of nodes and instead perform node reboots manually after each overcloud deployment. For more information, see Configuring manual node reboot to define KernelArgs in the Configuring the Compute Service for Instance Creation guide. Add your role file and environment files to the stack with your other environment files and deploy the overcloud: Validation Log in to the overcloud Compute node as heat-admin and check the number of VFs: Show OVS connections: Log in to your OVS-DPDK SR-IOV Compute node as heat-admin and check Linux bonds: List OVS bonds: If you used NovaPCIPassthrough to pass VFs to instances, test by Deploying an instance for SR-IOV . 7.3. Example configurations for NIC partitions Linux bond over VFs The following example configures a Linux bond over VFs, disables spoofcheck , and applies VLAN tags to sriov_vf : OVS bridge on VFs The following example configures an OVS bridge on VFs: OVS user bridge on VFs The following example configures an OVS user bridge on VFs and applies VLAN tags to ovs_user_bridge : 7.4. Configuring OVS hardware offload The procedure for OVS hardware offload configuration shares many of the same steps as configuring SR-IOV. Note Since Red Hat OpenStack Platform 16.2.3, to offload traffic from Compute nodes with OVS hardware offload and ML2/OVS, you must set the disable_packet_marking parameter to true in the openvswitch_agent.ini configuration file, and then restart the neutron_ovs_agent container. + Procedure Generate an overcloud role for OVS hardware offload that is based on the Compute role: Optional: Change the HostnameFormatDefault: '%stackname%-compute-%index%' name for the ComputeOvsHwOffload role. Add the OvsHwOffload parameter under role-specific parameters with a value of true . To configure neutron to use the iptables/hybrid firewall driver implementation, include the line: NeutronOVSFirewallDriver: iptables_hybrid . For more information about NeutronOVSFirewallDriver , see Using the Open vSwitch Firewall in the Advanced Overcloud Customization Guide. Configure the physical_network parameter to match your environment. For VLAN, set the physical_network parameter to the name of the network you create in neutron after deployment. This value should also be in NeutronBridgeMappings . For VXLAN, set the physical_network parameter to null . Example: Replace <vendor-id> with the vendor ID of the physical NIC. Replace <product-id> with the product ID of the NIC VF. Replace <address> with the address of the physical NIC. For more information about how to configure NovaPCIPassthrough , see Guidelines for configuring NovaPCIPassthrough in the Configuring the Compute Service for Instance Creation guide. 
Ensure that the list of default filters includes NUMATopologyFilter : Note Optional: For details on how to troubleshoot and configure OVS Hardware Offload issues in RHOSP 16.2 with Mellanox ConnectX5 NICs, see Troubleshooting Hardware Offload . Configure one or more network interfaces intended for hardware offload in the compute-sriov.yaml configuration file: Note Do not use the NeutronSriovNumVFs parameter when configuring Open vSwitch hardware offload. The number of virtual functions is specified using the numvfs parameter in a network configuration file used by os-net-config . Red Hat does not support modifying the numvfs setting during update or redeployment. Do not configure Mellanox network interfaces as a nic-config interface type ovs-vlan because this prevents tunnel endpoints such as VXLAN from passing traffic due to driver limitations. Include the ovs-hw-offload.yaml file in the overcloud deploy command: Verification Confirm that a PCI device is in switchdev mode: Verify if offload is enabled in OVS: 7.5. Tuning examples for OVS hardware offload For optimal performance you must complete additional configuration steps. Adjusting the number of channels for each network interface to improve performance A channel includes an interrupt request (IRQ) and the set of queues that trigger the IRQ. When you set the mlx5_core driver to switchdev mode, the mlx5_core driver defaults to one combined channel, which might not deliver optimal performance. Procedure On the PF representors, enter the following command to adjust the number of CPUs available to the host. Replace USD(nproc) with the number of CPUs you want to make available: CPU pinning To prevent performance degradation from cross-NUMA operations, locate NICs, their applications, the VF guest, and OVS in the same NUMA node. For more information, see Configuring CPU pinning on Compute nodes in the Configuring the Compute Service for Instance Creation guide. 7.6. Configuring components of OVS hardware offload A reference for configuring and troubleshooting the components of OVS HW Offload with Mellanox smart NICs. Nova Configure the Nova scheduler to use the NovaPCIPassthrough filter with the NUMATopologyFilter and DerivePciWhitelistEnabled parameters. When you enable OVS HW Offload, the Nova scheduler operates similarly to SR-IOV passthrough for instance spawning. Neutron When you enable OVS HW Offload, use the devlink cli tool to set the NIC e-switch mode to switchdev . Switchdev mode establishes representor ports on the NIC that are mapped to the VFs. Procedure To allocate a port from a switchdev -enabled NIC, log in as an admin user, create a neutron port with a binding-profile value of capabilities , and disable port security: Pass this port information when you create the instance. You associate the representor port with the instance VF interface and connect the representor port to OVS bridge br-int for one-time OVS data path processing. A VF port representor functions like a software version of a physical "patch panel" front-end. For more information about new instance creation, see Deploying an instance for SR-IOV . OVS In an environment with hardware offload configured, the first packet transmitted traverses the OVS kernel path, and this packet journey establishes the ml2 OVS rules for incoming and outgoing traffic for the instance traffic. When the flows of the traffic stream are established, OVS uses the traffic control (TC) Flower utility to push these flows on the NIC hardware. 
Procedure Use director to apply the following configuration on OVS: Restart to enable HW Offload. Traffic Control (TC) subsystems When you enable the hw-offload flag, OVS uses the TC data path. TC Flower is an iproute2 utility that writes data path flows on hardware. This ensures that the flow is programmed on both the hardware and software data paths, for redundancy. Procedure Apply the following configuration. This is the default option if you do not explicitly configure tc-policy : Restart OVS. NIC PF and VF drivers Mlx5_core is the PF and VF driver for the Mellanox ConnectX-5 NIC. The mlx5_core driver performs the following tasks: Creates routing tables on hardware. Manages network flow management. Configures the Ethernet switch device driver model, switchdev . Creates block devices. Procedure Use the following devlink commands to query the mode of the PCI device. NIC firmware The NIC firmware performs the following tasks: Maintains routing tables and rules. Fixes the pipelines of the tables. Manages hardware resources. Creates VFs. The firmware works with the driver for optimal performance. Although the NIC firmware is non-volatile and persists after you reboot, you can modify the configuration during run time. Procedure Apply the following configuration on the interfaces, and the representor ports, to ensure that TC Flower pushes the flow programming at the port level: Note Ensure that you keep the firmware updated. Yum or dnf updates might not complete the firmware update. For more information, see your vendor documentation. 7.7. Troubleshooting OVS hardware offload Prerequisites Linux Kernel 4.13 or newer OVS 2.8 or newer RHOSP 12 or newer Iproute 4.12 or newer Mellanox NIC firmware, for example FW ConnectX-5 16.21.0338 or newer For more information about supported prerequisites, see see the Red Hat Knowledgebase solution Network Adapter Fast Datapath Feature Support Matrix . Configuring the network in an OVS HW offload deployment In a HW offload deployment, you can choose one of the following scenarios for your network configuration according to your requirements: You can base guest VMs on VXLAN and VLAN by using either the same set of interfaces attached to a bond, or a different set of NICs for each type. You can bond two ports of a Mellanox NIC by using Linux bond. You can host tenant VXLAN networks on VLAN interfaces on top of a Mellanox Linux bond. Ensure that individual NICs and bonds are members of an ovs-bridge. Refer to the below example network configuration: The following bonding configurations are supported: active-backup - mode=1 active-active or balance-xor - mode=2 802.3ad (LACP) - mode=4 The following bonding configuration is not supported: xmit_hash_policy=layer3+4 Verifying the interface configuration Verify the interface configuration with the following procedure. Procedure During deployment, use the host network configuration tool os-net-config to enable hw-tc-offload . Enable hw-tc-offload on the sriov_config service any time you reboot the Compute node. Set the hw-tc-offload parameter to on for the NICs that are attached to the bond:. Verifying the interface mode Verify the interface mode with the following procedure. Procedure Set the eswitch mode to switchdev for the interfaces you use for HW offload. Use the host network configuration tool os-net-config to enable eswitch during deployment. Enable eswitch on the sriov_config service any time you reboot the Compute node. 
Note The driver of the PF interface is set to "mlx5e_rep" , to show that it is a representor of the e-switch uplink port. This does not affect the functionality. Verifying the offload state in OVS Verify the offload state in OVS with the following procedure. Enable hardware offload in OVS in the Compute node. Verifying the name of the VF representor port To ensure consistent naming of VF representor ports, os-net-config uses udev rules to rename the ports in the <PF-name>_<VF_id> format. Procedure After deployment, verify that the VF representor ports are named correctly. Examining network traffic flow HW offloaded network flow functions in a similar way to physical switches or routers with application-specific integrated circuit (ASIC) chips. You can access the ASIC shell of a switch or router to examine the routing table and for other debugging. The following procedure uses a Broadcom chipset from a Cumulus Linux switch as an example. Replace the values that are appropriate to your environment. Procedure To get Broadcom chip table content, use the bcmcmd command. Inspect the Traffic Control (TC) Layer. Examine the in_hw flags and the statistics in this output. The word hardware indicates that the hardware processes the network traffic. If you use tc-policy=none , you can check this output or a tcpdump to investigate when hardware or software handles the packets. You can see a corresponding log message in dmesg or in ovs-vswitch.log when the driver is unable to offload packets. For Mellanox, as an example, the log entries resemble syndrome messages in dmesg . In this example, the error code (0x6b1266) represents the following behavior: Validating systems Validate your system with the following procedure. Procedure Ensure SR-IOV and VT-d are enabled on the system. Enable IOMMU in Linux by adding intel_iommu=on to kernel parameters, for example, using GRUB. Limitations You cannot use the OVS firewall driver with HW offload because the connection tracking properties of the flows are unsupported in the offload path in OVS 2.11. 7.8. Debugging hardware offload flow You can use the following procedure if you encounter the following message in the ovs-vswitch.log file: Procedure To enable logging on the offload modules and to get additional log information for this failure, use the following commands on the Compute node: Inspect the ovs-vswitchd logs again to see additional details about the issue. In the following example logs, the offload failed because of an unsupported attribute mark. Debugging Mellanox NICs Mellanox has provided a system information script, similar to a Red Hat SOS report. https://github.com/Mellanox/linux-sysinfo-snapshot/blob/master/sysinfo-snapshot.py When you run this command, you create a zip file of the relevant log information, which is useful for support cases. Procedure You can run this system information script with the following command: You can also install Mellanox Firmware Tools (MFT), mlxconfig, mlxlink and the OpenFabrics Enterprise Distribution (OFED) drivers. Useful CLI commands Use the ethtool utility with the following options to gather diagnostic information: ethtool -l <uplink representor> : View the number of channels ethtool -I <uplink/VFs> : Check statistics ethtool -i <uplink rep> : View driver information ethtool -g <uplink rep> : Check ring sizes ethtool -k <uplink/VFs> : View enabled features Use the tcpdump utility at the representor and PF ports to similarly check traffic flow. 
Any changes you make to the link state of the representor port, affect the VF link state also. Representor port statistics present VF statistics also. Use the below commands to get useful diagnostic information: 7.9. Deploying an instance for SR-IOV Use host aggregates to separate high performance compute hosts. For information on creating host aggregates and associated flavors for scheduling see Creating host aggregates . Note Pinned CPU instances can be located on the same Compute node as unpinned instances. For more information, see Configuring CPU pinning on Compute nodes in the Configuring the Compute Service for Instance Creation guide. Deploy an instance for single root I/O virtualization (SR-IOV) by performing the following steps: Procedure Create a flavor. Tip You can specify the NUMA affinity policy for PCI passthrough devices and SR-IOV interfaces by adding the extra spec hw:pci_numa_affinity_policy to your flavor. For more information, see Flavor metadata in the Configuring the Compute Service for Instance Creation guide. Create the network. Create the port. Use vnic-type direct to create an SR-IOV virtual function (VF) port. Use the following command to create a virtual function with hardware offload. You must be an admin user to set --binding-profile . Use vnic-type direct-physical to create an SR-IOV physical function (PF) port that is dedicated to a single instance. This PF port is a Networking service (neutron) port but is not controlled by the Networking service, and is not visible as a network adapter because it is a PCI device that is passed through to the instance. Deploy an instance. 7.10. Creating host aggregates For better performance, deploy guests that have CPU pinning and huge pages. You can schedule high performance instances on a subset of hosts by matching aggregate metadata with flavor metadata. Procedure You can configure the AggregateInstanceExtraSpecsFilter value, and other necessary filters, through the heat parameter NovaSchedulerEnabledFilters under parameter_defaults in your deployment templates. Note To add this parameter to the configuration of an existing cluster, you can add it to the heat templates, and run the original deployment script again. Create an aggregate group for SR-IOV, and add relevant hosts. Define metadata, for example, sriov=true , that matches defined flavor metadata. Create a flavor. Set additional flavor properties. Note that the defined metadata, sriov=true , matches the defined metadata on the SR-IOV aggregate. | [
"[stack@director ~]USD source ~/stackrc",
"(undercloud)USD openstack overcloud roles generate -o /home/stack/templates/roles_data_compute_sriov.yaml Controller ComputeSriov",
"sudo openstack tripleo container image prepare --roles-file ~/templates/roles_data_compute_sriov.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-sriov.yaml -e ~/containers-prepare-parameter.yaml --output-env-file=/home/stack/templates/overcloud_images.yaml",
"cp /usr/share/openstack-tripleo-heat-templates/environments/network-environment.yaml /home/stack/templates/network-environment-sriov.yaml",
"NeutronNetworkType: 'vlan' NeutronNetworkVLANRanges: - tenant:22:22 - tenant:25:25 NeutronTunnelTypes: ''",
"lspci -nn -s <pci_device_address> 3b:00.0 Ethernet controller [0200]: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ [<vendor_id>: <product_id>] (rev 02)",
"(undercloud) [stack@undercloud-0 ~]USD openstack baremetal introspection data save <baremetal_node_name> | jq '.inventory.interfaces[] | .name, .vendor, .product'",
"ComputeSriovParameters: IsolCpusList: \"1-19,21-39\" KernelArgs: \"default_hugepagesz=1GB hugepagesz=1G hugepages=32 iommu=pt intel_iommu=on isolcpus=1-19,21-39\" TunedProfileName: \"cpu-partitioning\" NeutronBridgeMappings: - tenant:br-link0 NeutronPhysicalDevMappings: - tenant:p7p1 NovaComputeCpuDedicatedSet: '1-19,21-39' NovaReservedHostMemory: 4096",
"ComputeSriovParameters: NovaPCIPassthrough: - vendor_id: \"<vendor_id>\" product_id: \"<product_id>\" address: <NIC_address> physical_network: \"<physical_network>\"",
"- type: sriov_pf name: p7p3 mtu: 9000 numvfs: 10 use_dhcp: false defroute: false nm_controlled: true hotplug: true promisc: false - type: sriov_pf name: p7p4 mtu: 9000 numvfs: 10 use_dhcp: false defroute: false nm_controlled: true hotplug: true promisc: false",
"NovaSchedulerDefaultFilters: ['AvailabilityZoneFilter','ComputeFilter','ComputeCapabilitiesFilter','ImagePropertiesFilter','Serve rGroupAntiAffinityFilter','ServerGroupAffinityFilter','PciPassthroughFilter','AggregateInstanceExt raSpecsFilter']",
"- type: sriov_pf name: <interface_name> use_dhcp: false numvfs: <number_of_vfs> promisc: <true/false>",
"- type: <bond_type> name: internal_bond bonding_options: mode=<bonding_option> use_dhcp: false members: - type: sriov_vf device: <pf_device_name> vfid: <vf_id> - type: sriov_vf device: <pf_device_name> vfid: <vf_id> - type: vlan vlan_id: get_param: InternalApiNetworkVlanID spoofcheck: false device: internal_bond addresses: - ip_netmask: get_param: InternalApiIpSubnet routes: list_concat_unique: - get_param: InternalApiInterfaceRoutes",
"NovaPCIPassthrough: - address: \"0000:19:0e.3\" trusted: \"true\" physical_network: \"sriov1\" - address: \"0000:19:0e.0\" trusted: \"true\" physical_network: \"sriov2\"",
"parameter_defaults: ComputeParameters: KernelArgs: \"intel_iommu=on iommu=pt\"",
"(undercloud)USD openstack overcloud deploy --templates -r os-net-config.yaml -e [your environment files] -e /home/stack/templates/<compute_environment_file>.yaml",
"[heat-admin@overcloud-compute-0 heat-admin]USD sudo cat /sys/class/net/p4p1/device/sriov_numvfs 10 [heat-admin@overcloud-compute-0 heat-admin]USD sudo cat /sys/class/net/p4p2/device/sriov_numvfs 10",
"[heat-admin@overcloud-compute-0]USD sudo ovs-vsctl show b6567fa8-c9ec-4247-9a08-cbf34f04c85f Manager \"ptcp:6640:127.0.0.1\" is_connected: true Bridge br-sriov2 Controller \"tcp:127.0.0.1:6633\" is_connected: true fail_mode: secure datapath_type: netdev Port phy-br-sriov2 Interface phy-br-sriov2 type: patch options: {peer=int-br-sriov2} Port br-sriov2 Interface br-sriov2 type: internal Bridge br-sriov1 Controller \"tcp:127.0.0.1:6633\" is_connected: true fail_mode: secure datapath_type: netdev Port phy-br-sriov1 Interface phy-br-sriov1 type: patch options: {peer=int-br-sriov1} Port br-sriov1 Interface br-sriov1 type: internal Bridge br-ex Controller \"tcp:127.0.0.1:6633\" is_connected: true fail_mode: secure datapath_type: netdev Port br-ex Interface br-ex type: internal Port phy-br-ex Interface phy-br-ex type: patch options: {peer=int-br-ex} Bridge br-tenant Controller \"tcp:127.0.0.1:6633\" is_connected: true fail_mode: secure datapath_type: netdev Port br-tenant tag: 305 Interface br-tenant type: internal Port phy-br-tenant Interface phy-br-tenant type: patch options: {peer=int-br-tenant} Port dpdkbond0 Interface dpdk0 type: dpdk options: {dpdk-devargs=\"0000:18:0e.0\"} Interface dpdk1 type: dpdk options: {dpdk-devargs=\"0000:18:0a.0\"} Bridge br-tun Controller \"tcp:127.0.0.1:6633\" is_connected: true fail_mode: secure datapath_type: netdev Port vxlan-98140025 Interface vxlan-98140025 type: vxlan options: {df_default=\"true\", egress_pkt_mark=\"0\", in_key=flow, local_ip=\"152.20.0.229\", out_key=flow, remote_ip=\"152.20.0.37\"} Port br-tun Interface br-tun type: internal Port patch-int Interface patch-int type: patch options: {peer=patch-tun} Port vxlan-98140015 Interface vxlan-98140015 type: vxlan options: {df_default=\"true\", egress_pkt_mark=\"0\", in_key=flow, local_ip=\"152.20.0.229\", out_key=flow, remote_ip=\"152.20.0.21\"} Port vxlan-9814009f Interface vxlan-9814009f type: vxlan options: {df_default=\"true\", egress_pkt_mark=\"0\", in_key=flow, local_ip=\"152.20.0.229\", out_key=flow, remote_ip=\"152.20.0.159\"} Port vxlan-981400cc Interface vxlan-981400cc type: vxlan options: {df_default=\"true\", egress_pkt_mark=\"0\", in_key=flow, local_ip=\"152.20.0.229\", out_key=flow, remote_ip=\"152.20.0.204\"} Bridge br-int Controller \"tcp:127.0.0.1:6633\" is_connected: true fail_mode: secure datapath_type: netdev Port int-br-tenant Interface int-br-tenant type: patch options: {peer=phy-br-tenant} Port int-br-ex Interface int-br-ex type: patch options: {peer=phy-br-ex} Port int-br-sriov1 Interface int-br-sriov1 type: patch options: {peer=phy-br-sriov1} Port patch-tun Interface patch-tun type: patch options: {peer=patch-int} Port br-int Interface br-int type: internal Port int-br-sriov2 Interface int-br-sriov2 type: patch options: {peer=phy-br-sriov2} Port vhu4142a221-93 tag: 1 Interface vhu4142a221-93 type: dpdkvhostuserclient options: {vhost-server-path=\"/var/lib/vhost_sockets/vhu4142a221-93\"} ovs_version: \"2.13.2\"",
"[heat-admin@overcloud-computeovsdpdksriov-1 ~]USD cat /proc/net/bonding/<bond_name> Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011) Bonding Mode: fault-tolerance (active-backup) Primary Slave: None Currently Active Slave: eno3v1 MII Status: up MII Polling Interval (ms): 0 Up Delay (ms): 0 Down Delay (ms): 0 Peer Notification Delay (ms): 0 Slave Interface: eno3v1 MII Status: up Speed: 10000 Mbps Duplex: full Link Failure Count: 0 Permanent HW addr: 4e:77:94:bd:38:d2 Slave queue ID: 0 Slave Interface: eno4v1 MII Status: up Speed: 10000 Mbps Duplex: full Link Failure Count: 0 Permanent HW addr: 4a:74:52:a7:aa:7c Slave queue ID: 0",
"[heat-admin@overcloud-computeovsdpdksriov-1 ~]USD sudo ovs-appctl bond/show ---- dpdkbond0 ---- bond_mode: balance-slb bond may use recirculation: no, Recirc-ID : -1 bond-hash-basis: 0 updelay: 0 ms downdelay: 0 ms next rebalance: 9491 ms lacp_status: off lacp_fallback_ab: false active slave mac: ce:ee:c7:58:8e:b2(dpdk1) slave dpdk0: enabled may_enable: true slave dpdk1: enabled active slave may_enable: true",
"- type: linux_bond name: bond_api bonding_options: \"mode=active-backup\" members: - type: sriov_vf device: eno2 vfid: 1 vlan_id: get_param: InternalApiNetworkVlanID spoofcheck: false - type: sriov_vf device: eno3 vfid: 1 vlan_id: get_param: InternalApiNetworkVlanID spoofcheck: false addresses: - ip_netmask: get_param: InternalApiIpSubnet routes: list_concat_unique: - get_param: InternalApiInterfaceRoutes",
"- type: ovs_bridge name: br-bond use_dhcp: true members: - type: vlan vlan_id: get_param: TenantNetworkVlanID addresses: - ip_netmask: get_param: TenantIpSubnet routes: list_concat_unique: - get_param: ControlPlaneStaticRoutes - type: ovs_bond name: bond_vf ovs_options: \"bond_mode=active-backup\" members: - type: sriov_vf device: p2p1 vfid: 2 - type: sriov_vf device: p2p2 vfid: 2",
"- type: ovs_user_bridge name: br-link0 use_dhcp: false mtu: 9000 ovs_extra: - str_replace: template: set port br-link0 tag=_VLAN_TAG_ params: _VLAN_TAG_: get_param: TenantNetworkVlanID addresses: - ip_netmask: list_concat_unique: - get_param: TenantInterfaceRoutes members: - type: ovs_dpdk_bond name: dpdkbond0 mtu: 9000 ovs_extra: - set port dpdkbond0 bond_mode=balance-slb members: - type: ovs_dpdk_port name: dpdk0 members: - type: sriov_vf device: eno2 vfid: 3 - type: ovs_dpdk_port name: dpdk1 members: - type: sriov_vf device: eno3 vfid: 3",
"cat /var/lib/config-data/puppet-generated/neutron/ etc/neutron/plugins/ml2/openvswitch_agent.ini [ovs] disable_packet_marking=True",
"openstack overcloud roles generate -o roles_data.yaml Controller Compute:ComputeOvsHwOffload",
"parameter_defaults: NeutronOVSFirewallDriver: iptables_hybrid ComputeSriovParameters: IsolCpusList: 2-9,21-29,11-19,31-39 KernelArgs: \"default_hugepagesz=1GB hugepagesz=1G hugepages=128 intel_iommu=on iommu=pt\" OvsHwOffload: true TunedProfileName: \"cpu-partitioning\" NeutronBridgeMappings: - tenant:br-tenant NovaPCIPassthrough: - vendor_id: <vendor-id> product_id: <product-id> address: <address> physical_network: \"tenant\" - vendor_id: <vendor-id> product_id: <product-id> address: <address> physical_network: \"null\" NovaReservedHostMemory: 4096 NovaComputeCpuDedicatedSet: 1-9,21-29,11-19,31-39",
"parameter_defaults: NovaSchedulerEnabledFilters: - AvailabilityZoneFilter - ComputeFilter - ComputeCapabilitiesFilter - ImagePropertiesFilter - ServerGroupAntiAffinityFilter - ServerGroupAffinityFilter - PciPassthroughFilter - NUMATopologyFilter",
"- type: ovs_bridge name: br-tenant mtu: 9000 members: - type: sriov_pf name: p7p1 numvfs: 5 mtu: 9000 primary: true promisc: true use_dhcp: false link_mode: switchdev",
"TEMPLATES_HOME=\"/usr/share/openstack-tripleo-heat-templates\" CUSTOM_TEMPLATES=\"/home/stack/templates\" openstack overcloud deploy --templates -r USD{CUSTOM_TEMPLATES}/roles_data.yaml -e USD{TEMPLATES_HOME}/environments/ovs-hw-offload.yaml -e USD{CUSTOM_TEMPLATES}/network-environment.yaml -e USD{CUSTOM_TEMPLATES}/neutron-ovs.yaml",
"devlink dev eswitch show pci/0000:03:00.0 pci/0000:03:00.0: mode switchdev inline-mode none encap enable",
"ovs-vsctl get Open_vSwitch . other_config:hw-offload \"true\"",
"sudo ethtool -L enp3s0f0 combined USD(nproc)",
"openstack port create --network private --vnic-type=direct --binding-profile '{\"capabilities\": [\"switchdev\"]}' direct_port1 --disable-port-security",
"sudo ovs-vsctl set Open_vSwitch . other_config:hw-offload=true",
"sudo ovs-vsctl set Open_vSwitch . other_config:tc-policy=none",
"sudo devlink dev eswitch set pci/0000:03:00.0 mode switchdev sudo devlink dev eswitch show pci/0000:03:00.0 pci/0000:03:00.0: mode switchdev inline-mode none encap enable",
"sudo ethtool -K enp3s0f0 hw-tc-offload on",
"- type: ovs_bridge name: br-offload mtu: 9000 use_dhcp: false members: - type: linux_bond name: bond-pf bonding_options: \"mode=active-backup miimon=100\" members: - type: sriov_pf name: p5p1 numvfs: 3 primary: true promisc: true use_dhcp: false defroute: false link_mode: switchdev - type: sriov_pf name: p5p2 numvfs: 3 promisc: true use_dhcp: false defroute: false link_mode: switchdev - type: vlan vlan_id: get_param: TenantNetworkVlanID device: bond-pf addresses: - ip_netmask: get_param: TenantIpSubnet",
"ethtool -k ens1f0 | grep tc-offload hw-tc-offload: on",
"devlink dev eswitch show pci/USD(ethtool -i ens1f0 | grep bus-info | cut -d ':' -f 2,3,4 | awk '{USD1=USD1};1')",
"ovs-vsctl get Open_vSwitch . other_config:hw-offload \"true\"",
"root@overcloud-computesriov-0 ~]# cat /etc/udev/rules.d/80-persistent-os-net-config.rules This file is autogenerated by os-net-config SUBSYSTEM==\"net\", ACTION==\"add\", ATTR{phys_switch_id}!=\"\", ATTR{phys_port_name}==\"pf*vf*\", ENV{NM_UNMANAGED}=\"1\" SUBSYSTEM==\"net\", ACTION==\"add\", DRIVERS==\"?*\", KERNELS==\"0000:65:00.0\", NAME=\"ens1f0\" SUBSYSTEM==\"net\", ACTION==\"add\", ATTR{phys_switch_id}==\"98039b7f9e48\", ATTR{phys_port_name}==\"pf0vf*\", IMPORT{program}=\"/etc/udev/rep-link-name.sh USDattr{phys_port_name}\", NAME=\"ens1f0_USDenv{NUMBER}\" SUBSYSTEM==\"net\", ACTION==\"add\", DRIVERS==\"?*\", KERNELS==\"0000:65:00.1\", NAME=\"ens1f1\" SUBSYSTEM==\"net\", ACTION==\"add\", ATTR{phys_switch_id}==\"98039b7f9e49\", ATTR{phys_port_name}==\"pf1vf*\", IMPORT{program}=\"/etc/udev/rep-link-name.sh USDattr{phys_port_name}\", NAME=\"ens1f1_USDenv{NUMBER}\"",
"root@dni-7448-26:~# cl-bcmcmd l2 show mac=00:02:00:00:00:08 vlan=2000 GPORT=0x2 modid=0 port=2/xe1 mac=00:02:00:00:00:09 vlan=2000 GPORT=0x2 modid=0 port=2/xe1 Hit",
"tc -s filter show dev p5p1_1 ingress ... filter block 94 protocol ip pref 3 flower chain 5 filter block 94 protocol ip pref 3 flower chain 5 handle 0x2 eth_type ipv4 src_ip 172.0.0.1 ip_flags nofrag in_hw in_hw_count 1 action order 1: mirred (Egress Redirect to device eth4) stolen index 3 ref 1 bind 1 installed 364 sec used 0 sec Action statistics: Sent 253991716224 bytes 169534118 pkt (dropped 0, overlimits 0 requeues 0) Sent software 43711874200 bytes 30161170 pkt Sent hardware 210279842024 bytes 139372948 pkt backlog 0b 0p requeues 0 cookie 8beddad9a0430f0457e7e78db6e0af48 no_percpu",
"[13232.860484] mlx5_core 0000:3b:00.0: mlx5_cmd_check:756:(pid 131368): SET_FLOW_TABLE_ENTRY(0x936) op_mod(0x0) failed, status bad parameter(0x3), syndrome (0x6b1266)",
"0x6B1266 | set_flow_table_entry: pop vlan and forward to uplink is not allowed",
"2020-01-31T06:22:11.257Z|00473|dpif_netlink(handler402)|ERR|failed to offload flow: Operation not supported: p6p1_5",
"ovs-appctl vlog/set dpif_netlink:file:dbg Module name changed recently (check based on the version used ovs-appctl vlog/set netdev_tc_offloads:file:dbg [OR] ovs-appctl vlog/set netdev_offload_tc:file:dbg ovs-appctl vlog/set tc:file:dbg",
"2020-01-31T06:22:11.218Z|00471|dpif_netlink(handler402)|DBG|system@ovs-system: put[create] ufid:61bd016e-eb89-44fc-a17e-958bc8e45fda recirc_id(0),dp_hash(0/0),skb_priority(0/0),in_port(7),skb_mark(0),ct_state(0/0),ct_zone(0/0),ct_mark(0/0),ct_label(0/0),eth(src=fa:16:3e:d2:f5:f3,dst=fa:16:3e:c4:a3:eb),eth_type(0x0800),ipv4(src=10.1.1.8/0.0.0.0,dst=10.1.1.31/0.0.0.0,proto=1/0,tos=0/0x3,ttl=64/0,frag=no),icmp(type=0/0,code=0/0), actions:set(tunnel(tun_id=0x3d,src=10.10.141.107,dst=10.10.141.124,ttl=64,tp_dst=4789,flags(df|key))),6 2020-01-31T06:22:11.253Z|00472|netdev_tc_offloads(handler402)|DBG|offloading attribute pkt_mark isn't supported 2020-01-31T06:22:11.257Z|00473|dpif_netlink(handler402)|ERR|failed to offload flow: Operation not supported: p6p1_5",
"./sysinfo-snapshot.py --asap --asap_tc --ibdiagnet --openstack",
"ovs-appctl dpctl/dump-flows -m type=offloaded ovs-appctl dpctl/dump-flows -m tc filter show dev ens1_0 ingress tc -s filter show dev ens1_0 ingress tc monitor",
"openstack flavor create <flavor> --ram <MB> --disk <GB> --vcpus <#>",
"openstack network create net1 --provider-physical-network tenant --provider-network-type vlan --provider-segment <VLAN-ID> openstack subnet create subnet1 --network net1 --subnet-range 192.0.2.0/24 --dhcp",
"openstack port create --network net1 --vnic-type direct sriov_port",
"openstack port create --network net1 --vnic-type direct --binding-profile '{\"capabilities\": [\"switchdev\"]} sriov_hwoffload_port",
"openstack port create --network net1 --vnic-type direct-physical sriov_port",
"openstack server create --flavor <flavor> --image <image> --nic port-id=<id> <instance name>",
"parameter_defaults: NovaSchedulerEnabledFilters: - AggregateInstanceExtraSpecsFilter - AvailabilityZoneFilter - ComputeFilter - ComputeCapabilitiesFilter - ImagePropertiesFilter - ServerGroupAntiAffinityFilter - ServerGroupAffinityFilter - PciPassthroughFilter - NUMATopologyFilter",
"openstack aggregate create sriov_group openstack aggregate add host sriov_group compute-sriov-0.localdomain openstack aggregate set --property sriov=true sriov_group",
"openstack flavor create <flavor> --ram <MB> --disk <GB> --vcpus <#>",
"openstack flavor set --property sriov=true --property hw:cpu_policy=dedicated --property hw:mem_page_size=1GB <flavor>"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/network_functions_virtualization_planning_and_configuration_guide/deploy-sriov-tech_rhosp-nfv |
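As an illustration of how the individual commands above fit together, the following is a minimal, hypothetical sequence for launching an instance on a hardware-offload (switchdev) port. The flavor, image, VLAN segment, and resource names are placeholders and should be replaced with values from your environment:

openstack flavor create m1.offload --ram 4096 --disk 20 --vcpus 4
openstack network create net1 --provider-physical-network tenant --provider-network-type vlan --provider-segment 100
openstack subnet create subnet1 --network net1 --subnet-range 192.0.2.0/24 --dhcp
openstack port create --network net1 --vnic-type direct --binding-profile '{"capabilities": ["switchdev"]}' sriov_hwoffload_port
openstack server create --flavor m1.offload --image rhel-guest --nic port-id=$(openstack port show sriov_hwoffload_port -f value -c id) offload-instance

The port ID lookup with openstack port show -f value -c id is only one convenient way to pass the port to the server create command; you can also supply the ID directly.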
Chapter 22. Ceph Source | Chapter 22. Ceph Source Receive data from an Ceph Bucket, managed by a Object Storage Gateway. 22.1. Configuration Options The following table summarizes the configuration options available for the ceph-source Kamelet: Property Name Description Type Default Example accessKey * Access Key The access key. string bucketName * Bucket Name The Ceph Bucket name. string cephUrl * Ceph Url Address Set the Ceph Object Storage Address Url. string "http://ceph-storage-address.com" secretKey * Secret Key The secret key. string zoneGroup * Bucket Zone Group The bucket zone group. string autoCreateBucket Autocreate Bucket Specifies to automatically create the bucket. boolean false delay Delay The number of milliseconds before the poll of the selected bucket. integer 500 deleteAfterRead Auto-delete Objects Specifies to delete objects after consuming them. boolean true ignoreBody Ignore Body If true, the Object body is ignored. Setting this to true overrides any behavior defined by the includeBody option. If false, the object is put in the body. boolean false includeBody Include Body If true, the exchange is consumed and put into the body and closed. If false, the Object stream is put raw into the body and the headers are set with the object metadata. boolean true prefix Prefix The bucket prefix to consider while searching. string "folder/" Note Fields marked with an asterisk (*) are mandatory. 22.2. Dependencies At runtime, the ceph-source Kamelet relies upon the presence of the following dependencies: camel:aws2-s3 camel:kamelet 22.3. Usage This section describes how you can use the ceph-source . 22.3.1. Knative Source You can use the ceph-source Kamelet as a Knative source by binding it to a Knative object. ceph-source-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: ceph-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: ceph-source properties: accessKey: "The Access Key" bucketName: "The Bucket Name" cephUrl: "http://ceph-storage-address.com" secretKey: "The Secret Key" zoneGroup: "The Bucket Zone Group" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel 22.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 22.3.1.2. Procedure for using the cluster CLI Save the ceph-source-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the source by using the following command: oc apply -f ceph-source-binding.yaml 22.3.1.3. Procedure for using the Kamel CLI Configure and run the source by using the following command: kamel bind ceph-source -p "source.accessKey=The Access Key" -p "source.bucketName=The Bucket Name" -p "source.cephUrl=http://ceph-storage-address.com" -p "source.secretKey=The Secret Key" -p "source.zoneGroup=The Bucket Zone Group" channel:mychannel This command creates the KameletBinding in the current namespace on the cluster. 22.3.2. Kafka Source You can use the ceph-source Kamelet as a Kafka source by binding it to a Kafka topic. 
ceph-source-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: ceph-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: ceph-source properties: accessKey: "The Access Key" bucketName: "The Bucket Name" cephUrl: "http://ceph-storage-address.com" secretKey: "The Secret Key" zoneGroup: "The Bucket Zone Group" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic 22.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Make also sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 22.3.2.2. Procedure for using the cluster CLI Save the ceph-source-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the source by using the following command: oc apply -f ceph-source-binding.yaml 22.3.2.3. Procedure for using the Kamel CLI Configure and run the source by using the following command: kamel bind ceph-source -p "source.accessKey=The Access Key" -p "source.bucketName=The Bucket Name" -p "source.cephUrl=http://ceph-storage-address.com" -p "source.secretKey=The Secret Key" -p "source.zoneGroup=The Bucket Zone Group" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic This command creates the KameletBinding in the current namespace on the cluster. 22.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/ceph-source.kamelet.yaml | [
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: ceph-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: ceph-source properties: accessKey: \"The Access Key\" bucketName: \"The Bucket Name\" cephUrl: \"http://ceph-storage-address.com\" secretKey: \"The Secret Key\" zoneGroup: \"The Bucket Zone Group\" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel",
"apply -f ceph-source-binding.yaml",
"kamel bind ceph-source -p \"source.accessKey=The Access Key\" -p \"source.bucketName=The Bucket Name\" -p \"source.cephUrl=http://ceph-storage-address.com\" -p \"source.secretKey=The Secret Key\" -p \"source.zoneGroup=The Bucket Zone Group\" channel:mychannel",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: ceph-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: ceph-source properties: accessKey: \"The Access Key\" bucketName: \"The Bucket Name\" cephUrl: \"http://ceph-storage-address.com\" secretKey: \"The Secret Key\" zoneGroup: \"The Bucket Zone Group\" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic",
"apply -f ceph-source-binding.yaml",
"kamel bind ceph-source -p \"source.accessKey=The Access Key\" -p \"source.bucketName=The Bucket Name\" -p \"source.cephUrl=http://ceph-storage-address.com\" -p \"source.secretKey=The Secret Key\" -p \"source.zoneGroup=The Bucket Zone Group\" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.9/html/kamelets_reference/ceph-source |
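The optional properties listed in the configuration table for ceph-source can be passed in the same way as the required ones. As a hedged sketch (the property values shown are illustrative, and the credential strings are the same placeholders used above), a binding that polls only a bucket prefix every 10 seconds without deleting the objects could look like this:

kamel bind ceph-source -p "source.accessKey=The Access Key" -p "source.bucketName=The Bucket Name" -p "source.cephUrl=http://ceph-storage-address.com" -p "source.secretKey=The Secret Key" -p "source.zoneGroup=The Bucket Zone Group" -p "source.prefix=folder/" -p "source.delay=10000" -p "source.deleteAfterRead=false" channel:mychannel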
5.2. Adding Dependent Modules | 5.2. Adding Dependent Modules Add a MANIFEST.MF file in the META-INF directory, and the core API dependencies for resource adapter with the following line. If your translator depends upon any other third party jar files, ensure a module exists and add the module name to the above MANIFEST.MF file. | [
"Dependencies: org.jboss.teiid.common-core,org.jboss.teiid.api,javax.api"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_4_server_development/adding_dependent_modules |
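For example, if the translator also depended on a third-party driver packaged as a module named com.example.mydriver (an illustrative module name, not one shipped with the product), the Dependencies line in MANIFEST.MF would become:

Dependencies: org.jboss.teiid.common-core,org.jboss.teiid.api,javax.api,com.example.mydriver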
9.9. Configuring Resources to Remain Stopped on Clean Node Shutdown (Red Hat Enterprise Linux 7.8 and later) | 9.9. Configuring Resources to Remain Stopped on Clean Node Shutdown (Red Hat Enterprise Linux 7.8 and later) When a cluster node shuts down, Pacemaker's default response is to stop all resources running on that node and recover them elsewhere, even if the shutdown is a clean shutdown. As of Red Hat Enterprise Linux 7.8, you can configure Pacemaker so that when a node shuts down cleanly, the resources attached to the node will be locked to the node and unable to start elsewhere until they start again when the node that has shut down rejoins the cluster. This allows you to power down nodes during maintenance windows when service outages are acceptable without causing that node's resources to fail over to other nodes in the cluster. 9.9.1. Cluster Properties to Configure Resources to Remain Stopped on Clean Node Shutdown The ability to prevent resources from failing over on a clean node shutdown is implemented by means of the following cluster properties. shutdown-lock When this cluster property is set to the default value of false , the cluster will recover resources that are active on nodes being cleanly shut down. When this property is set to true , resources that are active on the nodes being cleanly shut down are unable to start elsewhere until they start on the node again after it rejoins the cluster. The shutdown-lock property will work for either cluster nodes or remote nodes, but not guest nodes. If shutdown-lock is set to true , you can remove the lock on one cluster resource when a node is down so that the resource can start elsewhere by performing a manual refresh on the node with the following command. Note that once the resources are unlocked, the cluster is free to move the resources elsewhere. You can control the likelihood of this occurring by using stickiness values or location preferences for the resource. Note A manual refresh will work with remote nodes only if you first run the following commands: Run the systemctl stop pacemaker_remote command on the remote node to stop the node. Run the pcs resource disable remote-connection-resource command. You can then perform a manual refresh on the remote node. shutdown-lock-limit When this cluster property is set to a time other than the default value of 0, resources will be available for recovery on other nodes if the node does not rejoin within the specified time since the shutdown was initiated. Note, however, that the time interval will not be checked any more often than the value of the cluster-recheck-interval cluster property. Note The shutdown-lock-limit property will work with remote nodes only if you first run the following commands: Run the systemctl stop pacemaker_remote command on the remote node to stop the node. Run the pcs resource disable remote-connection-resource command. After you run these commands, the resources that had been running on the remote node will be available for recovery on other nodes when the amount of time specified as the shutdown-lock-limit has passed. 9.9.2. Setting the shutdown-lock Cluster Property The following example sets the shutdown-lock cluster property to true in an example cluster and shows the effect this has when the node is shut down and started again. This example cluster consists of three nodes: z1.example.com , z2.example.com , and z3.example.com . Set the shutdown-lock property to to true and verify its value. 
In this example the shutdown-lock-limit property maintains its default value of 0. Check the status of the cluster. In this example, resources third and fifth are running on z1.example.com . Shut down z1.example.com , which will stop the resources that are running on that node. Running the pcs status command shows that node z1.example.com is offline and that the resources that had been running on z1.example.com are LOCKED while the node is down. Start cluster services again on z1.example.com so that it rejoins the cluster. Locked resources should get started on that node, although once they start they will not necessarily remain on the same node. In this example, resources third and fifth are recovered on node z1.example.com. | [
"pcs resource refresh resource --node node",
"pcs property set shutdown-lock=true pcs property list --all | grep shutdown-lock shutdown-lock: true shutdown-lock-limit: 0",
"pcs status Full List of Resources: * first (ocf::pacemaker:Dummy): Started z3.example.com * second (ocf::pacemaker:Dummy): Started z2.example.com * third (ocf::pacemaker:Dummy): Started z1.example.com * fourth (ocf::pacemaker:Dummy): Started z2.example.com * fifth (ocf::pacemaker:Dummy): Started z1.example.com",
"pcs cluster stop z1.example.com Stopping Cluster (pacemaker) Stopping Cluster (corosync)",
"pcs status Node List: * Online: [ z2.example.com z3.example.com ] * OFFLINE: [ z1.example.com ] Full List of Resources: * first (ocf::pacemaker:Dummy): Started z3.example.com * second (ocf::pacemaker:Dummy): Started z2.example.com * third (ocf::pacemaker:Dummy): Stopped z1.example.com (LOCKED) * fourth (ocf::pacemaker:Dummy): Started z3.example.com * fifth (ocf::pacemaker:Dummy): Stopped z1.example.com (LOCKED)",
"pcs cluster start z1.example.com Starting Cluster",
"pcs status Node List: * Online: [ z1.example.com z2.example.com z3.example.com ] Full List of Resources: .. * first (ocf::pacemaker:Dummy): Started z3.example.com * second (ocf::pacemaker:Dummy): Started z2.example.com * third (ocf::pacemaker:Dummy): Started z1.example.com * fourth (ocf::pacemaker:Dummy): Started z3.example.com * fifth (ocf::pacemaker:Dummy): Started z1.example.com"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/s1-shutdown-lock-haar |
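As a hedged sketch of the related commands described in this section (the lock-limit value, resource name, and node name are placeholders based on the example cluster above), setting a lock limit and then releasing the lock for a resource on a stopped remote node might look like this:

pcs property set shutdown-lock-limit=30min   # allow recovery elsewhere after 30 minutes
# on the remote node, before releasing the lock
systemctl stop pacemaker_remote
# on a cluster node
pcs resource disable remote-connection-resource
pcs resource refresh third --node z1.example.com

As noted above, once a resource is unlocked the cluster is free to move it, so stickiness or location constraints control where it ultimately runs.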
Chapter 6. Managing user sessions | Chapter 6. Managing user sessions When users log into realms, Red Hat build of Keycloak maintains a user session for each user and remembers each client visited by the user within the session. Realm administrators can perform multiple actions on each user session: View login statistics for the realm. View active users and where they logged in. Log a user out of their session. Revoke tokens. Set up token timeouts. Set up session timeouts. 6.1. Administering sessions To see a top-level view of the active clients and sessions in Red Hat build of Keycloak, click Sessions from the menu. Sessions 6.1.1. Signing out all active sessions You can sign out all users in the realm. From the Action list, select Sign out all active sessions . All SSO cookies become invalid. Red Hat build of Keycloak notifies clients by using the Red Hat build of Keycloak OIDC client adapter of the logout event. Clients requesting authentication within active browser sessions must log in again. Client types such as SAML do not receive a back-channel logout request. Note Clicking Sign out all active sessions does not revoke outstanding access tokens. Outstanding tokens must expire naturally. For clients using the Red Hat build of Keycloak OIDC client adapter, you can push a revocation policy to revoke the token, but this does not work for other adapters. 6.1.2. Viewing client sessions Procedure Click Clients in the menu. Click the Sessions tab. Click a client to see that client's sessions. Client sessions 6.1.3. Viewing user sessions Procedure Click Users in the menu. Click the Sessions tab. Click a user to see that user's sessions. User sessions 6.2. Revoking active sessions If your system is compromised, you can revoke all active sessions and access tokens. Procedure Click Sessions in the menu. From the Actions list, select Revocation . Revocation Specify a time and date where sessions or tokens issued before that time and date are invalid using this console. Click Set to now to set the policy to the current time and date. Click Push to push this revocation policy to any registered OIDC client with the Red Hat build of Keycloak OIDC client adapter. 6.3. Session and token timeouts Red Hat build of Keycloak includes control of the session, cookie, and token timeouts through the Sessions and Tokens tabs in the Realm settings menu. Sessions tab Configuration Description SSO Session Idle This setting is for OIDC clients only. If a user is inactive for longer than this timeout, the user session is invalidated. This timeout value resets when clients request authentication or send a refresh token request. Red Hat build of Keycloak adds a window of time to the idle timeout before the session invalidation takes effect. See the note later in this section. SSO Session Max The maximum time before a user session expires. SSO Session Idle Remember Me This setting is similar to the standard SSO Session Idle configuration but specific to logins with Remember Me enabled. Users can specify longer session idle timeouts when they click Remember Me when logging in. This setting is an optional configuration and, if its value is not greater than zero, it uses the same idle timeout as the SSO Session Idle configuration. SSO Session Max Remember Me This setting is similar to the standard SSO Session Max but specific to Remember Me logins. Users can specify longer sessions when they click Remember Me when logging in. 
This setting is an optional configuration and, if its value is not greater than zero, it uses the same session lifespan as the SSO Session Max configuration. Client Session Idle Idle timeout for the client session. If the user is inactive for longer than this timeout, the client session is invalidated and the refresh token requests bump the idle timeout. This setting never affects the general SSO user session, which is unique. Note the SSO user session is the parent of zero or more client sessions, one client session is created for every different client app the user logs in. This value should specify a shorter idle timeout than the SSO Session Idle . Users can override it for individual clients in the Advanced Settings client tab. This setting is an optional configuration and, when set to zero, uses the same idle timeout in the SSO Session Idle configuration. Client Session Max The maximum time for a client session and before a refresh token expires and invalidates. As in the option, this setting never affects the SSO user session and should specify a shorter value than the SSO Session Max . Users can override it for individual clients in the Advanced Settings client tab. This setting is an optional configuration and, when set to zero, uses the same max timeout in the SSO Session Max configuration. Offline Session Idle This setting is for offline access . The amount of time the session remains idle before Red Hat build of Keycloak revokes its offline token. Red Hat build of Keycloak adds a window of time to the idle timeout before the session invalidation takes effect. See the note later in this section. Offline Session Max Limited This setting is for offline access . If this flag is Enabled , Offline Session Max can control the maximum time the offline token remains active, regardless of user activity. If the flag is Disabled , offline sessions never expire by lifespan, only by idle. Once this option is activated, the Offline Session Max (global option at realm level) and Client Offline Session Max (specific client level option in the Advanced Settings tab) can be configured. Offline Session Max This setting is for offline access , and it is the maximum time before Red Hat build of Keycloak revokes the corresponding offline token. This option controls the maximum amount of time the offline token remains active, regardless of user activity. Login timeout The total time a logging in must take. If authentication takes longer than this time, the user must start the authentication process again. Login action timeout The Maximum time users can spend on any one page during the authentication process. Tokens tab Configuration Description Default Signature Algorithm The default algorithm used to assign tokens for the realm. Revoke Refresh Token When Enabled , Red Hat build of Keycloak revokes refresh tokens and issues another token that the client must use. This action applies to OIDC clients performing the refresh token flow. Access Token Lifespan When Red Hat build of Keycloak creates an OIDC access token, this value controls the lifetime of the token. Access Token Lifespan For Implicit Flow With the Implicit Flow, Red Hat build of Keycloak does not provide a refresh token. A separate timeout exists for access tokens created by the Implicit Flow. Client login timeout The maximum time before clients must finish the Authorization Code Flow in OIDC. User-Initiated Action Lifespan The maximum time before a user's action permission expires. 
Keep this value short because users generally react to self-created actions quickly. Default Admin-Initiated Action Lifespan The maximum time before an action permission sent to a user by an administrator expires. Keep this value long to allow administrators to send e-mails to offline users. An administrator can override the default timeout before issuing the token. Email Verification Specifies independent timeout for email verification. IdP account email verification Specifies independent timeout for IdP account email verification. Forgot password Specifies independent timeout for forgot password. Execute actions Specifies independent timeout for execute actions. Note For idle timeouts, a two-minute window of time exists that the session is active. For example, when you have the timeout set to 30 minutes, it will be 32 minutes before the session expires. This action is necessary for some scenarios in cluster and cross-data center environments where the token refreshes on one cluster node a short time before the expiration and the other cluster nodes incorrectly consider the session as expired because they have not yet received the message about a successful refresh from the refreshing node. 6.4. Offline access During offline access logins, the client application requests an offline token instead of a refresh token. The client application saves this offline token and can use it for future logins if the user logs out. This action is useful if your application needs to perform offline actions on behalf of the user even when the user is not online. For example, a regular data backup. The client application is responsible for persisting the offline token in storage and then using it to retrieve new access tokens from the Red Hat build of Keycloak server. The difference between a refresh token and an offline token is that an offline token never expires and is not subject to the SSO Session Idle timeout and SSO Session Max lifespan. The offline token is valid after a user logout or server restart. You must use the offline token for a refresh token action at least once per thirty days or for the value of the Offline Session Idle . If you enable Offline Session Max Limited , offline tokens expire after 60 days even if you use the offline token for a refresh token action. You can change this value, Offline Session Max , in the Admin Console. When using offline access, client idle and max timeouts can be overridden at the client level . The options Client Offline Session Idle and Client Offline Session Max , in the client Advanced Settings tab, allow you to have a shorter offline timeouts for a specific application. Note that client session values also control the refresh token expiration but they never affect the global offline user SSO session. The option Client Offline Session Max is only evaluated in the client if Offline Session Max Limited is Enabled at the realm level. If you enable the Revoke Refresh Token option, you can use each offline token once only. After refresh, you must store the new offline token from the refresh response instead of the one. Users can view and revoke offline tokens that Red Hat build of Keycloak grants them in the User Account Console . Administrators can revoke offline tokens for individual users in the Admin Console in the Consents tab. Administrators can view all offline tokens issued in the Offline Access tab of each client. Administrators can revoke offline tokens by setting a revocation policy . 
To issue an offline token, users must have the role mapping for the realm-level offline_access role. Clients must also have that role in their scope. Clients must add an offline_access client scope as an Optional client scope to the role, which is done by default. Clients can request an offline token by adding the parameter scope=offline_access when sending their authorization request to Red Hat build of Keycloak. The Red Hat build of Keycloak OIDC client adapter automatically adds this parameter when you use it to access your application's secured URL (such as, http://localhost:8080/customer-portal/secured?scope=offline_access). The Direct Access Grant and Service Accounts support offline tokens if you include scope=offline_access in the authentication request body. Offline sessions are besides the Infinispan caches stored also in the database. Whenever the Red Hat build of Keycloak server is restarted or an offline session is evicted from the Infinispan cache, it is still available in the database. Any following attempt to access the offline session will load the session from the database, and also import it to the Infinispan cache. To reduce memory requirements, we introduced a configuration option to shorten lifespan for imported offline sessions. Such sessions will be evicted from the Infinispan caches after the specified lifespan, but still available in the database. This will lower memory consumption, especially for deployments with a large number of offline sessions. Currently, the offline session lifespan override is disabled by default. To specify the lifespan override for offline user sessions, start Red Hat build of Keycloak server with the following parameter: --spi-user-sessions-infinispan-offline-session-cache-entry-lifespan-override=<lifespan-in-seconds> Similarly for offline client sessions: --spi-user-sessions-infinispan-offline-client-session-cache-entry-lifespan-override=<lifespan-in-seconds> 6.5. Offline sessions preloading In addition to Infinispan caches, offline sessions are stored in a database which means they will be available even after server restart. By default, the offline sessions are not preloaded from the database into the Infinispan caches during the server startup, because this approach has a drawback if there are many offline sessions to be preloaded. It can significantly slow down the server startup time. Therefore, the offline sessions are lazily fetched from the database by default. However, Red Hat build of Keycloak can be configured to preload the offline sessions from the database into the Infinispan caches during the server startup. It can be achieved by setting preloadOfflineSessionsFromDatabase property in the userSessions SPI to true . This functionality is currently deprecated and will be removed in a future release. The following example shows how to configure offline sessions preloading. bin/kc.[sh|bat] start --features-enabled offline-session-preloading --spi-user-sessions-infinispan-preload-offline-sessions-from-database=true 6.6. Transient sessions You can conduct transient sessions in Red Hat build of Keycloak. When using transient sessions, Red Hat build of Keycloak does not create a user session after successful authentication. Red Hat build of Keycloak creates a temporary, transient session for the scope of the current request that successfully authenticates the user. Red Hat build of Keycloak can run protocol mappers using transient sessions after authentication. 
The sid and session_state of the tokens are usually empty when the token is issued with transient sessions. So during transient sessions, the client application cannot refresh tokens or validate a specific session. Sometimes these actions are unnecessary, so you can avoid the additional resource use of persisting user sessions. This session saves performance, memory, and network communication (in cluster and cross-data center environments) resources. At this moment, transient sessions are automatically used just during service account authentication with disabled token refresh. Note that token refresh is automatically disabled during service account authentication unless explicitly enabled by client switch Use refresh tokens for client credentials grant . | [
"--spi-user-sessions-infinispan-offline-session-cache-entry-lifespan-override=<lifespan-in-seconds>",
"--spi-user-sessions-infinispan-offline-client-session-cache-entry-lifespan-override=<lifespan-in-seconds>",
"bin/kc.[sh|bat] start --features-enabled offline-session-preloading --spi-user-sessions-infinispan-preload-offline-sessions-from-database=true"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html/server_administration_guide/managing_user_sessions |
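As noted in the offline access section above, the Direct Access Grant can return an offline token when scope=offline_access is included in the request body. A hedged curl sketch follows; the host, realm, client, and user credentials are placeholders, and it assumes the client has Direct Access Grants enabled:

curl -d "grant_type=password" -d "client_id=my-client" -d "username=myuser" -d "password=mypassword" -d "scope=offline_access" https://keycloak.example.com/realms/myrealm/protocol/openid-connect/token

The refresh_token field of the JSON response is the offline token that the client application should persist for later token refreshes.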
Chapter 236. MyBatis Component | Chapter 236. MyBatis Component Available as of Camel version 2.7 The mybatis: component allows you to query, poll, insert, update and delete data in a relational database using MyBatis . Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-mybatis</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 236.1. URI format mybatis:statementName[?options] Where statementName is the statement name in the MyBatis XML mapping file which maps to the query, insert, update or delete operation you wish to evaluate. You can append query options to the URI in the following format, ?option=value&option=value&... This component will by default load the MyBatis SqlMapConfig file from the root of the classpath with the expected name of SqlMapConfig.xml . If the file is located in another location, you will need to configure the configurationUri option on the MyBatisComponent component. 236.2. Options The MyBatis component supports 3 options, which are listed below. Name Description Default Type sqlSessionFactory (advanced) To use the SqlSessionFactory SqlSessionFactory configurationUri (common) Location of MyBatis xml configuration file. The default value is: SqlMapConfig.xml loaded from the classpath SqlMapConfig.xml String resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The MyBatis endpoint is configured using URI syntax: with the following path and query parameters: 236.2.1. Path Parameters (1 parameters): Name Description Default Type statement Required The statement name in the MyBatis XML mapping file which maps to the query, insert, update or delete operation you wish to evaluate. String 236.2.2. Query Parameters (29 parameters): Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean maxMessagesPerPoll (consumer) This option is intended to split results returned by the database pool into the batches and deliver them in multiple exchanges. This integer defines the maximum messages to deliver in single exchange. By default, no maximum is set. Can be used to set a limit of e.g. 1000 to avoid when starting up the server that there are thousands of files. Set a value of 0 or negative to disable it. 0 int onConsume (consumer) Statement to run after data has been processed in the route String routeEmptyResultSet (consumer) Whether allow empty resultset to be routed to the hop false boolean sendEmptyMessageWhenIdle (consumer) If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead. false boolean transacted (consumer) Enables or disables transaction. 
If enabled then if processing an exchange failed then the consumer break out processing any further exchanges to cause a rollback eager false boolean useIterator (consumer) Process resultset individually or as a list true boolean exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern pollStrategy (consumer) A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel. PollingConsumerPoll Strategy processingStrategy (consumer) To use a custom MyBatisProcessingStrategy MyBatisProcessing Strategy executorType (producer) The executor type to be used while executing statements. simple - executor does nothing special. reuse - executor reuses prepared statements. batch - executor reuses statements and batches updates. SIMPLE ExecutorType inputHeader (producer) User the header value for input parameters instead of the message body. By default, inputHeader == null and the input parameters are taken from the message body. If outputHeader is set, the value is used and query parameters will be taken from the header instead of the body. String outputHeader (producer) Store the query result in a header instead of the message body. By default, outputHeader == null and the query result is stored in the message body, any existing content in the message body is discarded. If outputHeader is set, the value is used as the name of the header to store the query result and the original message body is preserved. Setting outputHeader will also omit populating the default CamelMyBatisResult header since it would be the same as outputHeader all the time. String statementType (producer) Mandatory to specify for the producer to control which kind of operation to invoke. StatementType synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean backoffErrorThreshold (scheduler) The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in. int backoffIdleThreshold (scheduler) The number of subsequent idle polls that should happen before the backoffMultipler should kick-in. int backoffMultiplier (scheduler) To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured. int delay (scheduler) Milliseconds before the poll. You can also specify time values using units, such as 60s (60 seconds), 5m30s (5 minutes and 30 seconds), and 1h (1 hour). 500 long greedy (scheduler) If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the run polled 1 or more messages. false boolean initialDelay (scheduler) Milliseconds before the first poll starts. You can also specify time values using units, such as 60s (60 seconds), 5m30s (5 minutes and 30 seconds), and 1h (1 hour). 
1000 long runLoggingLevel (scheduler) The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that. TRACE LoggingLevel scheduledExecutorService (scheduler) Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool. ScheduledExecutor Service scheduler (scheduler) To use a cron scheduler from either camel-spring or camel-quartz2 component none ScheduledPollConsumer Scheduler schedulerProperties (scheduler) To configure additional properties when using a custom scheduler or any of the Quartz2, Spring based scheduler. Map startScheduler (scheduler) Whether the scheduler should be auto started. true boolean timeUnit (scheduler) Time unit for initialDelay and delay options. MILLISECONDS TimeUnit useFixedDelay (scheduler) Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details. true boolean 236.3. Spring Boot Auto-Configuration The component supports 4 options, which are listed below. Name Description Default Type camel.component.mybatis.configuration-uri Location of MyBatis xml configuration file. The default value is: SqlMapConfig.xml loaded from the classpath SqlMapConfig.xml String camel.component.mybatis.enabled Enable mybatis component true Boolean camel.component.mybatis.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean camel.component.mybatis.sql-session-factory To use the SqlSessionFactory. The option is a org.apache.ibatis.session.SqlSessionFactory type. String 236.4. Message Headers Camel will populate the result message, either IN or OUT with a header with the statement used: Header Type Description CamelMyBatisStatementName String The statementName used (for example: insertAccount). CamelMyBatisResult Object The response returned from MtBatis in any of the operations. For instance an INSERT could return the auto-generated key, or number of rows etc. 236.5. Message Body The response from MyBatis will only be set as the body if it's a SELECT statement. That means, for example, for INSERT statements Camel will not replace the body. This allows you to continue routing and keep the original body. The response from MyBatis is always stored in the header with the key CamelMyBatisResult . 236.6. Samples For example if you wish to consume beans from a JMS queue and insert them into a database you could do the following: from("activemq:queue:newAccount") .to("mybatis:insertAccount?statementType=Insert"); Notice we have to specify the statementType , as we need to instruct Camel which kind of operation to invoke. Where insertAccount is the MyBatis ID in the SQL mapping file: <!-- Insert example, using the Account parameter class --> <insert id="insertAccount" parameterType="Account"> insert into ACCOUNT ( ACC_ID, ACC_FIRST_NAME, ACC_LAST_NAME, ACC_EMAIL ) values ( #{id}, #{firstName}, #{lastName}, #{emailAddress} ) </insert> 236.7. Using StatementType for better control of MyBatis When routing to an MyBatis endpoint you will want more fine grained control so you can control whether the SQL statement to be executed is a SELECT , UPDATE , DELETE or INSERT etc. 
So for instance if we want to route to an MyBatis endpoint in which the IN body contains parameters to a SELECT statement we can do: In the code above we can invoke the MyBatis statement selectAccountById and the IN body should contain the account id we want to retrieve, such as an Integer type. We can do the same for some of the other operations, such as SelectList : And the same for UPDATE , where we can send an Account object as the IN body to MyBatis: 236.7.1. Using InsertList StatementType Available as of Camel 2.10 MyBatis allows you to insert multiple rows using its for-each batch driver. To use this, you need to use the <foreach> in the mapper XML file. For example as shown below: Then you can insert multiple rows, by sending a Camel message to the mybatis endpoint which uses the InsertList statement type, as shown below: 236.7.2. Using UpdateList StatementType Available as of Camel 2.11 MyBatis allows you to update multiple rows using its for-each batch driver. To use this, you need to use the <foreach> in the mapper XML file. For example as shown below: <update id="batchUpdateAccount" parameterType="java.util.Map"> update ACCOUNT set ACC_EMAIL = #{emailAddress} where ACC_ID in <foreach item="Account" collection="list" open="(" close=")" separator=","> #{Account.id} </foreach> </update> Then you can update multiple rows, by sending a Camel message to the mybatis endpoint which uses the UpdateList statement type, as shown below: from("direct:start") .to("mybatis:batchUpdateAccount?statementType=UpdateList") .to("mock:result"); 236.7.3. Using DeleteList StatementType Available as of Camel 2.11 MyBatis allows you to delete multiple rows using its for-each batch driver. To use this, you need to use the <foreach> in the mapper XML file. For example as shown below: <delete id="batchDeleteAccountById" parameterType="java.util.List"> delete from ACCOUNT where ACC_ID in <foreach item="AccountID" collection="list" open="(" close=")" separator=","> #{AccountID} </foreach> </delete> Then you can delete multiple rows, by sending a Camel message to the mybatis endpoint which uses the DeleteList statement type, as shown below: from("direct:start") .to("mybatis:batchDeleteAccount?statementType=DeleteList") .to("mock:result"); 236.7.4. Notice on InsertList, UpdateList and DeleteList StatementTypes Parameter of any type (List, Map, etc.) can be passed to mybatis and an end user is responsible for handling it as required with the help of mybatis dynamic queries capabilities. 236.7.5. Scheduled polling example This component supports scheduled polling and can therefore be used as a Polling Consumer. For example to poll the database every minute: from("mybatis:selectAllAccounts?delay=60000") .to("activemq:queue:allAccounts"); See "ScheduledPollConsumer Options" on Polling Consumer for more options. Alternatively you can use another mechanism for triggering the scheduled polls, such as the Timer or Quartz components. In the sample below we poll the database, every 30 seconds using the Timer component and send the data to the JMS queue: from("timer://pollTheDatabase?delay=30000") .to("mybatis:selectAllAccounts") .to("activemq:queue:allAccounts"); And the MyBatis SQL mapping file used: <!-- Select with no parameters using the result map for Account class. --> <select id="selectAllAccounts" resultMap="AccountResult"> select * from ACCOUNT </select> 236.7.6. Using onConsume This component supports executing statements after data have been consumed and processed by Camel. 
This allows you to do post updates in the database. Notice all statements must be UPDATE statements. Camel supports executing multiple statements whose names should be separated by commas. The route below illustrates we execute the consumeAccount statement data is processed. This allows us to change the status of the row in the database to processed, so we avoid consuming it twice or more. And the statements in the sqlmap file: 236.7.7. Participating in transactions Setting up a transaction manager under camel-mybatis can be a little bit fiddly, as it involves externalising the database configuration outside the standard MyBatis SqlMapConfig.xml file. The first part requires the setup of a DataSource . This is typically a pool (either DBCP, or c3p0), which needs to be wrapped in a Spring proxy. This proxy enables non-Spring use of the DataSource to participate in Spring transactions (the MyBatis SqlSessionFactory does just this). <bean id="dataSource" class="org.springframework.jdbc.datasource.TransactionAwareDataSourceProxy"> <constructor-arg> <bean class="com.mchange.v2.c3p0.ComboPooledDataSource"> <property name="driverClass" value="org.postgresql.Driver"/> <property name="jdbcUrl" value="jdbc:postgresql://localhost:5432/myDatabase"/> <property name="user" value="myUser"/> <property name="password" value="myPassword"/> </bean> </constructor-arg> </bean> This has the additional benefit of enabling the database configuration to be externalised using property placeholders. A transaction manager is then configured to manage the outermost DataSource : <bean id="txManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager"> <property name="dataSource" ref="dataSource"/> </bean> A mybatis-spring SqlSessionFactoryBean then wraps that same DataSource : <bean id="sqlSessionFactory" class="org.mybatis.spring.SqlSessionFactoryBean"> <property name="dataSource" ref="dataSource"/> <!-- standard mybatis config file --> <property name="configLocation" value="/META-INF/SqlMapConfig.xml"/> <!-- externalised mappers --> <property name="mapperLocations" value="classpath*:META-INF/mappers/**/*.xml"/> </bean> The camel-mybatis component is then configured with that factory: <bean id="mybatis" class="org.apache.camel.component.mybatis.MyBatisComponent"> <property name="sqlSessionFactory" ref="sqlSessionFactory"/> </bean> Finally, a transaction policy is defined over the top of the transaction manager, which can then be used as usual: <bean id="PROPAGATION_REQUIRED" class="org.apache.camel.spring.spi.SpringTransactionPolicy"> <property name="transactionManager" ref="txManager"/> <property name="propagationBehaviorName" value="PROPAGATION_REQUIRED"/> </bean> <camelContext id="my-model-context" xmlns="http://camel.apache.org/schema/spring"> <route id="insertModel"> <from uri="direct:insert"/> <transacted ref="PROPAGATION_REQUIRED"/> <to uri="mybatis:myModel.insert?statementType=Insert"/> </route> </camelContext> | [
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-mybatis</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>",
"mybatis:statementName[?options]",
"mybatis:statement",
"from(\"activemq:queue:newAccount\") .to(\"mybatis:insertAccount?statementType=Insert\");",
"<!-- Insert example, using the Account parameter class --> <insert id=\"insertAccount\" parameterType=\"Account\"> insert into ACCOUNT ( ACC_ID, ACC_FIRST_NAME, ACC_LAST_NAME, ACC_EMAIL ) values ( #{id}, #{firstName}, #{lastName}, #{emailAddress} ) </insert>",
"<update id=\"batchUpdateAccount\" parameterType=\"java.util.Map\"> update ACCOUNT set ACC_EMAIL = #{emailAddress} where ACC_ID in <foreach item=\"Account\" collection=\"list\" open=\"(\" close=\")\" separator=\",\"> #{Account.id} </foreach> </update>",
"from(\"direct:start\") .to(\"mybatis:batchUpdateAccount?statementType=UpdateList\") .to(\"mock:result\");",
"<delete id=\"batchDeleteAccountById\" parameterType=\"java.util.List\"> delete from ACCOUNT where ACC_ID in <foreach item=\"AccountID\" collection=\"list\" open=\"(\" close=\")\" separator=\",\"> #{AccountID} </foreach> </delete>",
"from(\"direct:start\") .to(\"mybatis:batchDeleteAccount?statementType=DeleteList\") .to(\"mock:result\");",
"from(\"mybatis:selectAllAccounts?delay=60000\") .to(\"activemq:queue:allAccounts\");",
"from(\"timer://pollTheDatabase?delay=30000\") .to(\"mybatis:selectAllAccounts\") .to(\"activemq:queue:allAccounts\");",
"<!-- Select with no parameters using the result map for Account class. --> <select id=\"selectAllAccounts\" resultMap=\"AccountResult\"> select * from ACCOUNT </select>",
"<bean id=\"dataSource\" class=\"org.springframework.jdbc.datasource.TransactionAwareDataSourceProxy\"> <constructor-arg> <bean class=\"com.mchange.v2.c3p0.ComboPooledDataSource\"> <property name=\"driverClass\" value=\"org.postgresql.Driver\"/> <property name=\"jdbcUrl\" value=\"jdbc:postgresql://localhost:5432/myDatabase\"/> <property name=\"user\" value=\"myUser\"/> <property name=\"password\" value=\"myPassword\"/> </bean> </constructor-arg> </bean>",
"<bean id=\"txManager\" class=\"org.springframework.jdbc.datasource.DataSourceTransactionManager\"> <property name=\"dataSource\" ref=\"dataSource\"/> </bean>",
"<bean id=\"sqlSessionFactory\" class=\"org.mybatis.spring.SqlSessionFactoryBean\"> <property name=\"dataSource\" ref=\"dataSource\"/> <!-- standard mybatis config file --> <property name=\"configLocation\" value=\"/META-INF/SqlMapConfig.xml\"/> <!-- externalised mappers --> <property name=\"mapperLocations\" value=\"classpath*:META-INF/mappers/**/*.xml\"/> </bean>",
"<bean id=\"mybatis\" class=\"org.apache.camel.component.mybatis.MyBatisComponent\"> <property name=\"sqlSessionFactory\" ref=\"sqlSessionFactory\"/> </bean>",
"<bean id=\"PROPAGATION_REQUIRED\" class=\"org.apache.camel.spring.spi.SpringTransactionPolicy\"> <property name=\"transactionManager\" ref=\"txManager\"/> <property name=\"propagationBehaviorName\" value=\"PROPAGATION_REQUIRED\"/> </bean> <camelContext id=\"my-model-context\" xmlns=\"http://camel.apache.org/schema/spring\"> <route id=\"insertModel\"> <from uri=\"direct:insert\"/> <transacted ref=\"PROPAGATION_REQUIRED\"/> <to uri=\"mybatis:myModel.insert?statementType=Insert\"/> </route> </camelContext>"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/mybatis-component |
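A consumer route of the kind described above is not reproduced in the snippets, so the following sketch is illustrative only: the selectUnprocessedAccounts query, the consumeAccount update statement, and the target queue name are assumed names, not taken from the source. It relies on the camel-mybatis onConsume consumer option, which runs the named statement after each exchange has been routed:

// Poll for unprocessed rows, then mark each row as processed via the onConsume statement
from("mybatis:selectUnprocessedAccounts?onConsume=consumeAccount")
    .to("activemq:queue:processedAccounts");

This matches the pattern described above: the row's status is updated only after it has been routed, so it is not consumed again on the next poll.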
Chapter 143. KafkaRebalanceStatus schema reference | Chapter 143. KafkaRebalanceStatus schema reference Used in: KafkaRebalance The KafkaRebalanceStatus schema has the following properties: conditions (Condition array): List of status conditions. observedGeneration (integer): The generation of the CRD that was last reconciled by the operator. sessionId (string): The session identifier for requests to Cruise Control pertaining to this KafkaRebalance resource. This is used by the Kafka Rebalance operator to track the status of ongoing rebalancing operations. optimizationResult (map): A JSON object describing the optimization result. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-KafkaRebalanceStatus-reference
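As a rough illustration of how these fields can be inspected on a running cluster, the following commands read the sessionId and the condition types with JSONPath; the resource name my-rebalance and the namespace kafka are placeholders, not values from this reference:

oc get kafkarebalance my-rebalance -n kafka -o jsonpath='{.status.sessionId}{"\n"}'
oc get kafkarebalance my-rebalance -n kafka -o jsonpath='{.status.conditions[*].type}{"\n"}'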
Chapter 4. Uninstalling OpenShift Data Foundation from external storage system | Chapter 4. Uninstalling OpenShift Data Foundation from external storage system Use the steps in this section to uninstall OpenShift Data Foundation. Uninstalling OpenShift Data Foundation does not remove the RBD pool from the external cluster, or uninstall the external Red Hat Ceph Storage cluster. Uninstall Annotations Annotations on the Storage Cluster are used to change the behavior of the uninstall process. To define the uninstall behavior, the following two annotations have been introduced in the storage cluster: uninstall.ocs.openshift.io/cleanup-policy: delete uninstall.ocs.openshift.io/mode: graceful Note The uninstall.ocs.openshift.io/cleanup-policy is not applicable for external mode. The following table describes the values that can be used with these annotations: Table 4.1. uninstall.ocs.openshift.io uninstall annotation descriptions. cleanup-policy: delete (default): Rook cleans up the physical drives and the DataDirHostPath. cleanup-policy: retain: Rook does not clean up the physical drives or the DataDirHostPath. mode: graceful (default): Rook and NooBaa pause the uninstall process until the administrator or user removes the PVCs and the OBCs. mode: forced: Rook and NooBaa proceed with the uninstall even if PVCs or OBCs provisioned using Rook and NooBaa, respectively, still exist. You can change the uninstall mode by editing the value of the annotation by using the following commands: Prerequisites Ensure that the OpenShift Data Foundation cluster is in a healthy state. The uninstall process can fail when some of the pods are not terminated successfully due to insufficient resources or nodes. If the cluster is in an unhealthy state, contact Red Hat Customer Support before uninstalling OpenShift Data Foundation. Ensure that applications are not consuming persistent volume claims (PVCs) or object bucket claims (OBCs) that use the storage classes provided by OpenShift Data Foundation. Procedure Delete the volume snapshots that are using OpenShift Data Foundation. List the volume snapshots from all the namespaces. From the output of the command, identify and delete the volume snapshots that are using OpenShift Data Foundation. Delete the PVCs and OBCs that are using OpenShift Data Foundation. In the default uninstall mode (graceful), the uninstaller waits until all the PVCs and OBCs that use OpenShift Data Foundation are deleted. If you want to delete the Storage Cluster without deleting the PVCs beforehand, you can set the uninstall mode annotation to "forced" and skip this step. Doing so results in orphan PVCs and OBCs in the system. Delete the OpenShift Container Platform monitoring stack PVCs that use OpenShift Data Foundation. See Removing monitoring stack from OpenShift Data Foundation. Delete the OpenShift Container Platform registry PVCs that use OpenShift Data Foundation. See Removing OpenShift Container Platform registry from OpenShift Data Foundation. Delete the OpenShift Container Platform logging PVCs that use OpenShift Data Foundation. See Removing the cluster logging operator from OpenShift Data Foundation. Delete any other PVCs and OBCs provisioned using OpenShift Data Foundation. Given below is a sample script to identify the PVCs and OBCs provisioned using OpenShift Data Foundation. The script ignores the PVCs and OBCs that are used internally by OpenShift Data Foundation. Delete the OBCs. Delete the PVCs. 
Ensure that you have removed any custom backing stores, bucket classes, and so on that are created in the cluster. Delete the Storage Cluster object and wait for the removal of the associated resources. Delete the namespace and wait until the deletion is complete. You will need to switch to another project if openshift-storage is the active project. For example: The project is deleted if the following command returns a NotFound error. Note While uninstalling OpenShift Data Foundation, if the namespace is not deleted completely and remains in Terminating state, perform the steps in Troubleshooting and deleting remaining resources during Uninstall to identify objects that are blocking the namespace from being terminated. Confirm all PVs provisioned using OpenShift Data Foundation are deleted. If there is any PV left in the Released state, delete it. Remove CustomResourceDefinitions . To ensure that OpenShift Data Foundation is uninstalled completely: In the OpenShift Container Platform Web Console, click Storage . Verify that OpenShift Data Foundation no longer appears under Storage. 4.1. Removing monitoring stack from OpenShift Data Foundation Use this section to clean up the monitoring stack from OpenShift Data Foundation. The PVCs that are created as a part of configuring the monitoring stack are in the openshift-monitoring namespace. Prerequisites PVCs are configured to use the OpenShift Container Platform monitoring stack. For information, see configuring monitoring stack . Procedure List the pods and PVCs that are currently running in the openshift-monitoring namespace. Edit the monitoring configmap . Remove any config sections that reference the OpenShift Data Foundation storage classes as shown in the following example and save it. Before editing After editing In this example, alertmanagerMain and prometheusK8s monitoring components are using the OpenShift Data Foundation PVCs. List the pods consuming the PVC. In this example, the alertmanagerMain and prometheusK8s pods that were consuming the PVCs are in the Terminating state. You can delete the PVCs once these pods are no longer using OpenShift Data Foundation PVC. Delete relevant PVCs. Make sure you delete all the PVCs that are consuming the storage classes. 4.2. Removing OpenShift Container Platform registry from OpenShift Data Foundation Use this section to clean up the OpenShift Container Platform registry from OpenShift Data Foundation. If you want to configure an alternative storage, see image registry The PVCs that are created as a part of configuring OpenShift Container Platform registry are in the openshift-image-registry namespace. Prerequisites The image registry should have been configured to use an OpenShift Data Foundation PVC. Procedure Edit the configs.imageregistry.operator.openshift.io object and remove the content in the storage section. Before editing After editing In this example, the PVC is called registry-cephfs-rwx-pvc , which is now safe to delete. Delete the PVC. 4.3. Removing the cluster logging operator from OpenShift Data Foundation Use this section to clean up the cluster logging operator from OpenShift Data Foundation. The Persistent Volume Claims (PVCs) that are created as a part of configuring the cluster logging operator are in the openshift-logging namespace. Prerequisites The cluster logging instance should have been configured to use the OpenShift Data Foundation PVCs. Procedure Remove the ClusterLogging instance in the namespace. 
The PVCs in the openshift-logging namespace are now safe to delete. Delete the PVCs, where <pvc-name> is the name of the PVC. 4.4. Removing external IBM FlashSystem secret You need to clean up the FlashSystem secret from OpenShift Data Foundation while uninstalling. This secret is created when you configure the external IBM FlashSystem Storage. For more information, see Creating an OpenShift Data Foundation Cluster for external IBM FlashSystem storage. Procedure Remove the IBM FlashSystem secret by using the following command: | [
"oc annotate storagecluster ocs-external-storagecluster -n openshift-storage uninstall.ocs.openshift.io/mode=\"forced\" --overwrite storagecluster.ocs.openshift.io/ocs-external-storagecluster annotated",
"oc get volumesnapshot --all-namespaces",
"oc delete volumesnapshot <VOLUME-SNAPSHOT-NAME> -n <NAMESPACE>",
"#!/bin/bash RBD_PROVISIONER=\"openshift-storage.rbd.csi.ceph.com\" CEPHFS_PROVISIONER=\"openshift-storage.cephfs.csi.ceph.com\" NOOBAA_PROVISIONER=\"openshift-storage.noobaa.io/obc\" RGW_PROVISIONER=\"openshift-storage.ceph.rook.io/bucket\" NOOBAA_DB_PVC=\"noobaa-db\" NOOBAA_BACKINGSTORE_PVC=\"noobaa-default-backing-store-noobaa-pvc\" Find all the OCS StorageClasses OCS_STORAGECLASSES=USD(oc get storageclasses | grep -e \"USDRBD_PROVISIONER\" -e \"USDCEPHFS_PROVISIONER\" -e \"USDNOOBAA_PROVISIONER\" -e \"USDRGW_PROVISIONER\" | awk '{print USD1}') List PVCs in each of the StorageClasses for SC in USDOCS_STORAGECLASSES do echo \"======================================================================\" echo \"USDSC StorageClass PVCs and OBCs\" echo \"======================================================================\" oc get pvc --all-namespaces --no-headers 2>/dev/null | grep USDSC | grep -v -e \"USDNOOBAA_DB_PVC\" -e \"USDNOOBAA_BACKINGSTORE_PVC\" oc get obc --all-namespaces --no-headers 2>/dev/null | grep USDSC echo done",
"oc delete obc <obc name> -n <project name>",
"oc delete pvc <pvc name> -n <project-name>",
"oc delete -n openshift-storage storagesystem --all --wait=true",
"oc project default oc delete project openshift-storage --wait=true --timeout=5m",
"oc get project openshift-storage",
"oc get pv oc delete pv <pv name>",
"oc delete crd backingstores.noobaa.io bucketclasses.noobaa.io cephblockpools.ceph.rook.io cephclusters.ceph.rook.io cephfilesystems.ceph.rook.io cephnfses.ceph.rook.io cephobjectstores.ceph.rook.io cephobjectstoreusers.ceph.rook.io noobaas.noobaa.io ocsinitializations.ocs.openshift.io storageclusters.ocs.openshift.io cephclients.ceph.rook.io cephobjectrealms.ceph.rook.io cephobjectzonegroups.ceph.rook.io cephobjectzones.ceph.rook.io cephrbdmirrors.ceph.rook.io storagesystems.odf.openshift.io --wait=true --timeout=5m",
"oc get pod,pvc -n openshift-monitoring NAME READY STATUS RESTARTS AGE pod/alertmanager-main-0 3/3 Running 0 8d pod/alertmanager-main-1 3/3 Running 0 8d pod/alertmanager-main-2 3/3 Running 0 8d pod/cluster-monitoring- operator-84457656d-pkrxm 1/1 Running 0 8d pod/grafana-79ccf6689f-2ll28 2/2 Running 0 8d pod/kube-state-metrics- 7d86fb966-rvd9w 3/3 Running 0 8d pod/node-exporter-25894 2/2 Running 0 8d pod/node-exporter-4dsd7 2/2 Running 0 8d pod/node-exporter-6p4zc 2/2 Running 0 8d pod/node-exporter-jbjvg 2/2 Running 0 8d pod/node-exporter-jj4t5 2/2 Running 0 6d18h pod/node-exporter-k856s 2/2 Running 0 6d18h pod/node-exporter-rf8gn 2/2 Running 0 8d pod/node-exporter-rmb5m 2/2 Running 0 6d18h pod/node-exporter-zj7kx 2/2 Running 0 8d pod/openshift-state-metrics- 59dbd4f654-4clng 3/3 Running 0 8d pod/prometheus-adapter- 5df5865596-k8dzn 1/1 Running 0 7d23h pod/prometheus-adapter- 5df5865596-n2gj9 1/1 Running 0 7d23h pod/prometheus-k8s-0 6/6 Running 1 8d pod/prometheus-k8s-1 6/6 Running 1 8d pod/prometheus-operator- 55cfb858c9-c4zd9 1/1 Running 0 6d21h pod/telemeter-client- 78fc8fc97d-2rgfp 3/3 Running 0 8d NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE persistentvolumeclaim/my-alertmanager-claim-alertmanager-main-0 Bound pvc-0d519c4f-15a5-11ea-baa0-026d231574aa 40Gi RWO ocs-external-storagecluster-ceph-rbd 8d persistentvolumeclaim/my-alertmanager-claim-alertmanager-main-1 Bound pvc-0d5a9825-15a5-11ea-baa0-026d231574aa 40Gi RWO ocs-external-storagecluster-ceph-rbd 8d persistentvolumeclaim/my-alertmanager-claim-alertmanager-main-2 Bound pvc-0d6413dc-15a5-11ea-baa0-026d231574aa 40Gi RWO ocs-external-storagecluster-ceph-rbd 8d persistentvolumeclaim/my-prometheus-claim-prometheus-k8s-0 Bound pvc-0b7c19b0-15a5-11ea-baa0-026d231574aa 40Gi RWO ocs-external-storagecluster-ceph-rbd 8d persistentvolumeclaim/my-prometheus-claim-prometheus-k8s-1 Bound pvc-0b8aed3f-15a5-11ea-baa0-026d231574aa 40Gi RWO ocs-external-storagecluster-ceph-rbd 8d",
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
". . . apiVersion: v1 data: config.yaml: | alertmanagerMain: volumeClaimTemplate: metadata: name: my-alertmanager-claim spec: resources: requests: storage: 40Gi storageClassName: ocs-external-storagecluster-ceph-rbd prometheusK8s: volumeClaimTemplate: metadata: name: my-prometheus-claim spec: resources: requests: storage: 40Gi storageClassName: ocs-external-storagecluster-ceph-rbd kind: ConfigMap metadata: creationTimestamp: \"2019-12-02T07:47:29Z\" name: cluster-monitoring-config namespace: openshift-monitoring resourceVersion: \"22110\" selfLink: /api/v1/namespaces/openshift-monitoring/configmaps/cluster-monitoring-config uid: fd6d988b-14d7-11ea-84ff-066035b9efa8 . . .",
". . . apiVersion: v1 data: config.yaml: | kind: ConfigMap metadata: creationTimestamp: \"2019-11-21T13:07:05Z\" name: cluster-monitoring-config namespace: openshift-monitoring resourceVersion: \"404352\" selfLink: /api/v1/namespaces/openshift-monitoring/configmaps/cluster-monitoring-config uid: d12c796a-0c5f-11ea-9832-063cd735b81c . . .",
"oc get pod,pvc -n openshift-monitoring NAME READY STATUS RESTARTS AGE pod/alertmanager-main-0 3/3 Terminating 0 10h pod/alertmanager-main-1 3/3 Terminating 0 10h pod/alertmanager-main-2 3/3 Terminating 0 10h pod/cluster-monitoring-operator-84cd9df668-zhjfn 1/1 Running 0 18h pod/grafana-5db6fd97f8-pmtbf 2/2 Running 0 10h pod/kube-state-metrics-895899678-z2r9q 3/3 Running 0 10h pod/node-exporter-4njxv 2/2 Running 0 18h pod/node-exporter-b8ckz 2/2 Running 0 11h pod/node-exporter-c2vp5 2/2 Running 0 18h pod/node-exporter-cq65n 2/2 Running 0 18h pod/node-exporter-f5sm7 2/2 Running 0 11h pod/node-exporter-f852c 2/2 Running 0 18h pod/node-exporter-l9zn7 2/2 Running 0 11h pod/node-exporter-ngbs8 2/2 Running 0 18h pod/node-exporter-rv4v9 2/2 Running 0 18h pod/openshift-state-metrics-77d5f699d8-69q5x 3/3 Running 0 10h pod/prometheus-adapter-765465b56-4tbxx 1/1 Running 0 10h pod/prometheus-adapter-765465b56-s2qg2 1/1 Running 0 10h pod/prometheus-k8s-0 6/6 Terminating 1 9m47s pod/prometheus-k8s-1 6/6 Terminating 1 9m47s pod/prometheus-operator-cbfd89f9-ldnwc 1/1 Running 0 43m pod/telemeter-client-7b5ddb4489-2xfpz 3/3 Running 0 10h NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE persistentvolumeclaim/ocs-alertmanager-claim-alertmanager-main-0 Bound pvc-2eb79797-1fed-11ea-93e1-0a88476a6a64 40Gi RWO ocs-external-storagecluster-ceph-rbd 19h persistentvolumeclaim/ocs-alertmanager-claim-alertmanager-main-1 Bound pvc-2ebeee54-1fed-11ea-93e1-0a88476a6a64 40Gi RWO ocs-external-storagecluster-ceph-rbd 19h persistentvolumeclaim/ocs-alertmanager-claim-alertmanager-main-2 Bound pvc-2ec6a9cf-1fed-11ea-93e1-0a88476a6a64 40Gi RWO ocs-external-storagecluster-ceph-rbd 19h persistentvolumeclaim/ocs-prometheus-claim-prometheus-k8s-0 Bound pvc-3162a80c-1fed-11ea-93e1-0a88476a6a64 40Gi RWO ocs-external-storagecluster-ceph-rbd 19h persistentvolumeclaim/ocs-prometheus-claim-prometheus-k8s-1 Bound pvc-316e99e2-1fed-11ea-93e1-0a88476a6a64 40Gi RWO ocs-external-storagecluster-ceph-rbd 19h",
"oc delete -n openshift-monitoring pvc <pvc-name> --wait=true --timeout=5m",
"oc edit configs.imageregistry.operator.openshift.io",
". . . storage: pvc: claim: registry-cephfs-rwx-pvc . . .",
". . . storage: emptyDir: {} . . .",
"oc delete pvc <pvc-name> -n openshift-image-registry --wait=true --timeout=5m",
"oc delete clusterlogging instance -n openshift-logging --wait=true --timeout=5m",
"oc delete pvc <pvc-name> -n openshift-logging --wait=true --timeout=5m",
"oc delete secret -n openshift-storage ibm-flashsystem-storage"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/deploying_openshift_data_foundation_in_external_mode/uninstalling-openshift-data-foundation-external-in-external-mode_rhodf |
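Before starting the uninstall procedure, it can be useful to confirm which mode and cleanup policy are currently set. The following command is a sketch, not part of the documented procedure; it simply reads the annotations back from the StorageCluster resource:

oc get storagecluster ocs-external-storagecluster -n openshift-storage -o yaml | grep 'uninstall.ocs.openshift.io'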
15.7. Password Management of GVFS Mounts | 15.7. Password Management of GVFS Mounts A typical GVFS mount asks for credentials when it is activated, unless the resource allows anonymous authentication or does not require any at all. The prompt is presented in a standard GTK+ dialog, and the user can choose whether or not to save the password. Procedure 15.5. Example: Authenticated Mount Process Open Files and activate the address bar by pressing Ctrl + L. Enter a well-formed URI string of a service that needs authentication (for example, sftp://localhost/ ). The credentials dialog is displayed, asking for a user name, a password, and password store options. Fill in the credentials and confirm. If persistent storage is selected, the password is saved in the user keyring. GNOME Keyring is a central place for secrets storage. By default, it is encrypted and automatically unlocked when the desktop session starts, using the password provided at login. If it is protected by a different password, that password is set on first use. To manage the stored passwords and GNOME Keyring itself, the Seahorse application is provided. It allows individual records to be removed or passwords changed. For more information on Seahorse, consult the help manual for Seahorse embedded directly in the desktop. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/desktop_migration_and_administration_guide/pswd-management
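The same authenticated mount can also be exercised from a terminal. The following sketch uses the gvfs-mount client shipped with GVFS and the same sftp://localhost/ example URI; here the credentials are prompted for on the command line instead of in the GTK+ dialog:

gvfs-mount sftp://localhost/              # mount, prompting for credentials
gvfs-mount --list                         # list the active GVFS mounts
gvfs-mount --unmount sftp://localhost/    # unmount when finished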
8.2.2. IPsec Interfaces | 8.2.2. IPsec Interfaces With Red Hat Enterprise Linux, it is possible to connect to other hosts or networks using a secure IP connection, known as IPsec. For instructions on setting up IPsec using the Network Administration Tool ( system-config-network ), refer to the chapter titled Network Configuration in the System Administrators Guide. For instructions on setting up IPsec manually, refer to the chapter titled Virtual Private Networks in the Security Guide. The following example shows the ifcfg file for a network-to-network IPsec connection for LAN A. The unique name to identify the connection in this example is ipsec1, so the resulting file is named /etc/sysconfig/network-scripts/ifcfg-ipsec1. In the example above, X.X.X.X is the publicly routable IP address of the destination IPsec router. Below is a listing of the configurable parameters for an IPsec interface: DST= <address> , where <address> is the IP address of the IPsec destination host or router. This is used for both host-to-host and network-to-network IPsec configurations. DSTNET= <network> , where <network> is the network address of the IPsec destination network. This is only used for network-to-network IPsec configurations. SRC= <address> , where <address> is the IP address of the IPsec source host or router. This setting is optional and is only used for host-to-host IPsec configurations. SRCNET= <network> , where <network> is the network address of the IPsec source network. This is only used for network-to-network IPsec configurations. TYPE= <interface-type> , where <interface-type> is IPSEC. Both the racoon daemon and the setkey utility referenced below are part of the ipsec-tools package. Refer to /usr/share/doc/initscripts- <version-number> /sysconfig.txt (replace <version-number> with the version of the initscripts package installed) for configuration parameters if using manual key encryption with IPsec. The racoon IKEv1 key management daemon negotiates and configures a set of parameters for IPsec. It can use preshared keys, RSA signatures, or GSS-API. If racoon is used to automatically manage key encryption, the following options are required: IKE_METHOD= <encryption-method> , where <encryption-method> is either PSK , X509 , or GSSAPI. If PSK is specified, the IKE_PSK parameter must also be set. If X509 is specified, the IKE_CERTFILE parameter must also be set. IKE_PSK= <shared-key> , where <shared-key> is the shared, secret value for the PSK (preshared keys) method. IKE_CERTFILE= <cert-file> , where <cert-file> is a valid X.509 certificate file for the host. IKE_PEER_CERTFILE= <cert-file> , where <cert-file> is a valid X.509 certificate file for the remote host. IKE_DNSSEC= <answer> , where <answer> is yes. The racoon daemon retrieves the remote host's X.509 certificate via DNS. If an IKE_PEER_CERTFILE is specified, do not include this parameter. For more information about the encryption algorithms available for IPsec, refer to the setkey man page. For more information about racoon, refer to the racoon and racoon.conf man pages. | [
"TYPE=IPsec ONBOOT=yes IKE_METHOD=PSK SRCNET=192.168.1.0/24 DSTNET=192.168.2.0/24 DST= X.X.X.X"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-networkscripts-interfaces-ipsec |
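For a host-to-host connection, only the documented host-level parameters are needed; the network parameters ( SRCNET and DSTNET ) are omitted. The following sketch of an ifcfg-ipsec0 file is assembled from the parameters listed above and is not taken from the source; the connection name, the destination address, and the shared key value are placeholders:

TYPE=IPsec
ONBOOT=yes
IKE_METHOD=PSK
IKE_PSK=my_shared_secret
DST=X.X.X.X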
2.7.3. Host-To-Host VPN Using Libreswan | 2.7.3. Host-To-Host VPN Using Libreswan To configure Libreswan to create a host-to-host IPsec VPN between two hosts referred to as " left " and " right " , enter the following commands as root on both of the hosts ( " left " and " right " ) to create new raw RSA key pairs: This generates an RSA key pair for the host. The process of generating RSA keys can take many minutes, especially on virtual machines with low entropy. To view the public key, issue the following command as root on either of the hosts. For example, to view the public key on the " left " host, run: You have to add this key to the configuration file as explained in the following paragraphs. The secret part is stored in /etc/ipsec.d/*.db files, also called the " NSS database " . To make a configuration file for this host-to-host tunnel, the leftrsasigkey= and rightrsasigkey= lines from above are added to a custom configuration file placed in the /etc/ipsec.d/ directory. Using an editor running as root, create a file with a corresponding name in the following format: /etc/ipsec.d/myvpn.conf Edit the file as follows: You can use the identical configuration file on both left and right hosts. They auto-detect if they are " left " or " right " . If one of the hosts is a mobile host, which implies the IP address is not known in advance, then on the mobile host use %defaultroute as its IP address. This picks up the dynamic IP address automatically. On the static host that accepts connections from incoming mobile hosts, specify the mobile host using %any for its IP address. Ensure the leftrsasigkey value is obtained from the " left " host and the rightrsasigkey value is obtained from the " right " host. Restart ipsec to ensure it reads the new configuration: To check that the tunnel has been successfully established, and to see how much traffic has gone through the tunnel, enter the following command as root : Alternatively, if not using the auto=start option in the /etc/ipsec.d/*.conf file or if a tunnel is not successfully established, use the following command as root to load the IPsec tunnel: To bring up the tunnel, issue the following command as root, on the left or the right side: 2.7.3.1. Verify Host-To-Host VPN Using Libreswan The IKE negotiation takes place on UDP port 500. IPsec packets show up as Encapsulated Security Payload (ESP) packets. When the VPN connection needs to pass through a NAT router, the ESP packets are encapsulated in UDP packets on port 4500. To verify that packets are being sent via the VPN tunnel, issue a command as root in the following format: Where interface is the interface known to carry the traffic. To end the capture with tcpdump , press Ctrl + C . Note The tcpdump command interacts a little unexpectedly with IPsec . It only sees the outgoing encrypted packet, not the outgoing plaintext packet. It does see the encrypted incoming packet, as well as the decrypted incoming packet. If possible, run tcpdump on a router between the two machines and not on one of the endpoints itself. | [
"~]# ipsec newhostkey --configdir /etc/ipsec.d --output /etc/ipsec.d/myvpn.secrets Generated RSA key pair using the NSS database",
"~]# ipsec showhostkey --left ipsec showhostkey loading secrets from \"/etc/ipsec.secrets\" ipsec showhostkey loading secrets from \"/etc/ipsec.d/myvpn.secrets\" ipsec showhostkey loaded private key for keyid: PPK_RSA:AQOjAKLlL # rsakey AQOjAKLlL leftrsasigkey=0sAQOjAKLlL4a7YBv [...]",
"conn myvpn [email protected] left=192.1.2.23 leftrsasigkey=0sAQOrlo+hOafUZDlCQmXFrje/oZm [...] W2n417C/4urYHQkCvuIQ== [email protected] right=192.1.2.45 rightrsasigkey=0sAQO3fwC6nSSGgt64DWiYZzuHbc4 [...] D/v8t5YTQ== authby=rsasig # load and initiate automatically auto=start",
"~]# service ipsec --full-restart",
"~]# ipsec whack --trafficstatus 006 #2: \"myvpn\", type=ESP, add_time=1234567890, inBytes=336, outBytes=336, id='@east'",
"~]# ipsec auto --add myvpn",
"~]# ipsec auto --up myvpn",
"~]# tcpdump -n -i interface esp or udp port 500 or udp port 4500 00:32:32.632165 IP 192.1.2.45 > 192.1.2.23: ESP(spi=0x63ad7e17,seq=0x1a), length 132 00:32:32.632592 IP 192.1.2.23 > 192.1.2.45: ESP(spi=0x4841b647,seq=0x1a), length 132 00:32:32.632592 IP 192.0.2.254 > 192.0.1.254: ICMP echo reply, id 2489, seq 7, length 64 00:32:33.632221 IP 192.1.2.45 > 192.1.2.23: ESP(spi=0x63ad7e17,seq=0x1b), length 132 00:32:33.632731 IP 192.1.2.23 > 192.1.2.45: ESP(spi=0x4841b647,seq=0x1b), length 132 00:32:33.632731 IP 192.0.2.254 > 192.0.1.254: ICMP echo reply, id 2489, seq 8, length 64 00:32:34.632183 IP 192.1.2.45 > 192.1.2.23: ESP(spi=0x63ad7e17,seq=0x1c), length 132 00:32:34.632607 IP 192.1.2.23 > 192.1.2.45: ESP(spi=0x4841b647,seq=0x1c), length 132 00:32:34.632607 IP 192.0.2.254 > 192.0.1.254: ICMP echo reply, id 2489, seq 9, length 64 00:32:35.632233 IP 192.1.2.45 > 192.1.2.23: ESP(spi=0x63ad7e17,seq=0x1d), length 132 00:32:35.632685 IP 192.1.2.23 > 192.1.2.45: ESP(spi=0x4841b647,seq=0x1d), length 132 00:32:35.632685 IP 192.0.2.254 > 192.0.1.254: ICMP echo reply, id 2489, seq 10, length 64"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sec-host-to-host_vpn_using_libreswan |
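To make the mobile-host variant described above concrete, the following sketch shows how the connection definition on the static host might look. It is an assumption-based example, not taken from the source; the keys and identifiers reuse the truncated placeholders from the example above. On the mobile host itself, its own address line would use %defaultroute instead of a fixed IP address:

conn myvpn
    [email protected]
    left=192.1.2.23                        # static host with a known address
    leftrsasigkey=0sAQOrlo+hOafUZDlCQmXFrje/oZm [...] W2n417C/4urYHQkCvuIQ==
    [email protected]
    right=%any                             # mobile host, address learned when it connects
    rightrsasigkey=0sAQO3fwC6nSSGgt64DWiYZzuHbc4 [...] D/v8t5YTQ==
    authby=rsasig
    auto=add                               # the static side cannot initiate to %any, so it only loads the connection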
Appendix C. The X Window System | Appendix C. The X Window System While the heart of Red Hat Enterprise Linux is the kernel, for many users, the face of the operating system is the graphical environment provided by the X Window System , also called X . Other windowing environments have existed in the UNIX world, including some that predate the release of the X Window System in June 1984. Nonetheless, X has been the default graphical environment for most UNIX-like operating systems, including Red Hat Enterprise Linux, for many years. The graphical environment for Red Hat Enterprise Linux is supplied by the X.Org Foundation , an open source organization created to manage development and strategy for the X Window System and related technologies. X.Org is a large-scale, rapidly developing project with hundreds of developers around the world. It features a wide degree of support for a variety of hardware devices and architectures, and runs on myriad operating systems and platforms. The X Window System uses a client-server architecture. Its main purpose is to provide a network-transparent window system, which runs on a wide range of computing and graphics machines. The X server (the Xorg binary) listens for connections from X client applications via a network or local loopback interface. The server communicates with the hardware, such as the video card, monitor, keyboard, and mouse. X client applications exist in the user space, creating a graphical user interface ( GUI ) for the user and passing user requests to the X server. C.1. The X Server Red Hat Enterprise Linux 6 uses an updated X server release, which includes several video drivers, EXA, and platform support enhancements over the previous release, among others. In addition, this release includes several automatic configuration features for the X server, as well as the generic input driver, evdev , which supports all input devices that the kernel knows about, including most mice and keyboards. X11R7.1 was the first release to take specific advantage of making the X Window System modular. This release split X into logically distinct modules, which makes it easier for open source developers to contribute code to the system. In the current release, all libraries, headers, and binaries live under the /usr/ directory. The /etc/X11/ directory contains configuration files for X client and server applications. This includes configuration files for the X server itself, the X display managers, and many other base components. The configuration file for the newer Fontconfig-based font architecture is still /etc/fonts/fonts.conf . For more information on configuring and adding fonts, see Section C.4, "Fonts" . Because the X server performs advanced tasks on a wide array of hardware, it requires detailed information about the hardware it works on. The X server is able to automatically detect most of the hardware that it runs on and configure itself accordingly. Alternatively, hardware can be manually specified in configuration files. The Red Hat Enterprise Linux system installer, Anaconda, installs and configures X automatically, unless the X packages are not selected for installation. If there are any changes to the monitor, video card, or other devices managed by the X server, most of the time, X detects and reconfigures these changes automatically. In rare cases, X must be reconfigured manually. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/ch-The_X_Window_System
Chapter 2. Eclipse Temurin 17.0.8.1 release notes | Chapter 2. Eclipse Temurin 17.0.8.1 release notes Eclipse Temurin does not contain structural changes from the upstream distribution of OpenJDK. Review the following release note for an overview of the changes from the Eclipse Temurin 17.0.8.1 patch release. Note For all the other changes and security fixes, see OpenJDK 17.0.8.1 Released . Fixed Invalid CEN header error on valid .zip files OpenJDK 17.0.8 introduced additional validation checks on the ZIP64 fields of .zip files (JDK-8302483). However, these additional checks caused validation failures on some valid .zip files with the following error message: Invalid CEN header (invalid zip64 extra data field size) . To fix this issue, OpenJDK 17.0.8.1 supports zero-length headers and the additional padding that some ZIP64 creation tools produce. From OpenJDK 17.0.8 onward, you can disable these checks by setting the jdk.util.zip.disableZip64ExtraFieldValidation system property to true . See JDK-8313765 (JDK Bug System) Increased default value of jdk.jar.maxSignatureFileSize system property OpenJDK 17.0.8 introduced a jdk.jar.maxSignatureFileSize system property for configuring the maximum number of bytes that are allowed for the signature-related files in a Java archive (JAR) file ( JDK-8300596 ). By default, the jdk.jar.maxSignatureFileSize property was set to 8000000 bytes (8 MB), which was too small for some JAR files. OpenJDK 17.0.8.1 increases the default value of the jdk.jar.maxSignatureFileSize property to 16000000 bytes (16 MB). See JDK-8313216 (JDK Bug System) | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_eclipse_temurin_17.0.8/openjdk-temurin-17-0-8-1-release-notes_openjdk |
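Both settings described above are ordinary Java system properties, so they can be supplied on the command line when needed. The JAR name below is a placeholder, and the 20 MB figure is only an example value:

# Skip the stricter ZIP64 extra-field validation introduced in OpenJDK 17.0.8 (use with care)
java -Djdk.util.zip.disableZip64ExtraFieldValidation=true -jar application.jar

# Raise the limit for signature-related files in a JAR above the 16 MB default
java -Djdk.jar.maxSignatureFileSize=20000000 -jar application.jar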
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/deploying_openshift_data_foundation_on_vmware_vsphere/making-open-source-more-inclusive |