title | content | commands | url
---|---|---|---|
Chapter 2. New features and enhancements | Chapter 2. New features and enhancements Red Hat JBoss Web Server 6.0 Service Pack 1 includes the following new features and enhancements. 2.1. Location changes for tomcat-websocket-chat quick start application This release includes the following changes to the location of the example tomcat-websocket-chat quick start application that you can use with JWS for OpenShift: The URL for the quick start has changed from https://github.com/jboss-openshift/openshift-quickstarts to https://github.com/web-servers/tomcat-websocket-chat-quickstart . The git repository for the quick start has changed from https://github.com/jboss-openshift/openshift-quickstarts.git to https://github.com/web-servers/tomcat-websocket-chat-quickstart.git . The directory path for the quick start has changed from openshift-quickstarts/tomcat-websocket-chat to tomcat-websocket-chat-quickstart/tomcat-websocket-chat . | null | https://docs.redhat.com/en/documentation/red_hat_jboss_web_server/6.0/html/red_hat_jboss_web_server_6.0_service_pack_1_release_notes/new_features_and_enhancements |
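For reference only, fetching the relocated quick start from the command line might look like the following minimal sketch; the clone location and the ls step are illustrative assumptions, not part of the release notes:

# Clone the quick start from its new repository location
git clone https://github.com/web-servers/tomcat-websocket-chat-quickstart.git

# The example application now lives under the updated directory path
cd tomcat-websocket-chat-quickstart/tomcat-websocket-chat

# Inspect the example sources before building or deploying it on JWS for OpenShift
ls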
9.9. Securing Server Connections | 9.9. Securing Server Connections After designing the authentication scheme for identified users and the access control scheme for protecting information in the directory, the next step is to design a way to protect the integrity of the information as it passes between servers and client applications. For both server to client connections and server to server connections, the Directory Server supports a variety of secure connection types: Transport Layer Security (TLS). To provide secure communications over the network, the Directory Server can use LDAP over the Transport Layer Security (TLS). TLS can be used in conjunction with encryption algorithms from RSA. The encryption method selected for a particular connection is the result of a negotiation between the client application and Directory Server. Start TLS. Directory Server also supports Start TLS, a method of initiating a Transport Layer Security (TLS) connection over a regular, unencrypted LDAP port. Simple Authentication and Security Layer (SASL). SASL is a security framework, meaning that it sets up a system that allows different mechanisms to authenticate a user to the server, depending on what mechanism is enabled in both client and server applications. It can also establish an encrypted session between the client and a server. In Directory Server, SASL is used with GSS-API to enable Kerberos logins and can be used for almost all server to server connections, including replication, chaining, and pass-through authentication. (SASL cannot be used with Windows Sync.) Secure connections are recommended for any operations that handle sensitive information, like replication, and are required for some operations, like Windows password synchronization. Directory Server can support TLS connections, SASL, and non-secure connections simultaneously. Both SASL authentication and TLS connections can be configured at the same time. For example, the Directory Server instance can be configured to require TLS connections to the server and also support SASL authentication for replication connections. This means it is not necessary to choose whether to use TLS or SASL in a network environment; you can use both. It is also possible to set a minimum level of security for connections to the server. The security strength factor (SSF) measures, in key strength, how strong a secure connection is. An ACI can be set that requires certain operations (like password changes) to occur only if the connection is of a certain strength or higher. It is also possible to set a minimum SSF, which can essentially disable standard connections and require TLS, Start TLS, or SASL for every connection. The Directory Server supports TLS and SASL simultaneously, and the server calculates the SSF of all available connection types and selects the strongest. For more information about using TLS, Start TLS, and SASL, see the Administration Guide . | null | https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/deployment_guide/Designing_a_Secure_Directory-Securing_Connections_with_TLS_and_Start_TLS
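As a rough client-side illustration of the difference between LDAPS and Start TLS described above, the following sketch uses the OpenLDAP ldapsearch client; the host name, ports, bind DN, suffix, and choice of client tool are placeholder assumptions rather than requirements of this guide:

# LDAP over TLS (LDAPS): the session is encrypted from the first byte
ldapsearch -H ldaps://ds.example.com:636 \
    -D "cn=Directory Manager" -W \
    -b "dc=example,dc=com" "(uid=jsmith)"

# Start TLS: connect to the standard LDAP port, then upgrade the session;
# -ZZ makes the search fail if the TLS upgrade does not succeed
ldapsearch -H ldap://ds.example.com:389 -ZZ \
    -D "cn=Directory Manager" -W \
    -b "dc=example,dc=com" "(uid=jsmith)"

Either form results in an encrypted connection; Start TLS is simply negotiated on the regular LDAP port instead of a dedicated secure port.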
Provisioning APIs | Provisioning APIs OpenShift Container Platform 4.17 Reference guide for provisioning APIs Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html-single/provisioning_apis/index |
Chapter 6. Configure Host Names | Chapter 6. Configure Host Names 6.1. Understanding Host Names There are three classes of host name: static, pretty, and transient. The "static" host name is the traditional hostname, which can be chosen by the user, and is stored in the /etc/hostname file. The "transient" hostname is a dynamic host name maintained by the kernel. It is initialized by default to the static host name, whose value defaults to "localhost". It can be changed by DHCP or mDNS at runtime. The "pretty" hostname is a free-form UTF-8 host name for presentation to the user. Note A host name can be a free-form string up to 64 characters in length. However, Red Hat recommends that both static and transient names match the fully-qualified domain name (FQDN) used for the machine in DNS, such as host.example.com . It is also recommended that the static and transient names consist only of 7-bit ASCII lower-case characters, with no spaces or dots, and limit themselves to the format allowed for DNS domain name labels, even though this is not a strict requirement. Older specifications do not permit the underscore, so its use is not recommended. The hostnamectl tool enforces the following: static and transient host names must consist of a-z, A-Z, 0-9, "-", "_", and "." only, must not begin or end with a dot, and must not have two dots immediately following each other. The size limit of 64 characters is also enforced. 6.1.1. Recommended Naming Practices The Internet Corporation for Assigned Names and Numbers (ICANN) sometimes adds previously unregistered Top-Level Domains (such as .yourcompany ) to the public register. Therefore, Red Hat strongly recommends that you do not use a domain name that is not delegated to you, even on a private network, as this can result in a domain name that resolves differently depending on network configuration. As a result, network resources can become unavailable. Using domain names that are not delegated to you also makes DNSSEC more difficult to deploy and maintain, as domain name collisions require manual configuration to enable DNSSEC validation. See the ICANN FAQ on domain name collision for more information on this issue. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/networking_guide/ch-configure_host_names
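To make the three host name classes concrete, the following hostnamectl sketch shows how each might be inspected and set; host.example.com and the pretty name are placeholder values, not recommendations from this chapter:

# Display the current static, transient, and pretty host names
hostnamectl status

# Set the static host name (stored in /etc/hostname)
hostnamectl set-hostname host.example.com --static

# Set the transient host name maintained by the kernel
hostnamectl set-hostname host.example.com --transient

# Set a free-form pretty host name for presentation to the user
hostnamectl set-hostname "John's Workstation" --pretty

Running hostnamectl set-hostname with none of the --static, --transient, or --pretty options updates all of the host name classes at once.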
2.3. LVM Logical Volumes | 2.3. LVM Logical Volumes In LVM, a volume group is divided up into logical volumes. The following sections describe the different types of logical volumes. 2.3.1. Linear Volumes A linear volume aggregates space from one or more physical volumes into one logical volume. For example, if you have two 60GB disks, you can create a 120GB logical volume. The physical storage is concatenated. Creating a linear volume assigns a range of physical extents to an area of a logical volume in order. For example, as shown in Figure 2.2, "Extent Mapping" logical extents 1 to 99 could map to one physical volume and logical extents 100 to 198 could map to a second physical volume. From the point of view of the application, there is one device that is 198 extents in size. Figure 2.2. Extent Mapping The physical volumes that make up a logical volume do not have to be the same size. Figure 2.3, "Linear Volume with Unequal Physical Volumes" shows volume group VG1 with a physical extent size of 4MB. This volume group includes 2 physical volumes named PV1 and PV2 . The physical volumes are divided into 4MB units, since that is the extent size. In this example, PV1 is 200 extents in size (800MB) and PV2 is 100 extents in size (400MB). You can create a linear volume any size between 1 and 300 extents (4MB to 1200MB). In this example, the linear volume named LV1 is 300 extents in size. Figure 2.3. Linear Volume with Unequal Physical Volumes You can configure more than one linear logical volume of whatever size you require from the pool of physical extents. Figure 2.4, "Multiple Logical Volumes" shows the same volume group as in Figure 2.3, "Linear Volume with Unequal Physical Volumes" , but in this case two logical volumes have been carved out of the volume group: LV1 , which is 250 extents in size (1000MB) and LV2 which is 50 extents in size (200MB). Figure 2.4. Multiple Logical Volumes 2.3.2. Striped Logical Volumes When you write data to an LVM logical volume, the file system lays the data out across the underlying physical volumes. You can control the way the data is written to the physical volumes by creating a striped logical volume. For large sequential reads and writes, this can improve the efficiency of the data I/O. Striping enhances performance by writing data to a predetermined number of physical volumes in round-robin fashion. With striping, I/O can be done in parallel. In some situations, this can result in near-linear performance gain for each additional physical volume in the stripe. The following illustration shows data being striped across three physical volumes. In this figure: the first stripe of data is written to the first physical volume the second stripe of data is written to the second physical volume the third stripe of data is written to the third physical volume the fourth stripe of data is written to the first physical volume In a striped logical volume, the size of the stripe cannot exceed the size of an extent. Figure 2.5. Striping Data Across Three PVs Striped logical volumes can be extended by concatenating another set of devices onto the end of the first set. In order to extend a striped logical volume, however, there must be enough free space on the set of underlying physical volumes that make up the volume group to support the stripe. For example, if you have a two-way stripe that uses up an entire volume group, adding a single physical volume to the volume group will not enable you to extend the stripe. 
Instead, you must add at least two physical volumes to the volume group. For more information on extending a striped volume, see Section 4.4.17, "Extending a Striped Volume" . 2.3.3. RAID Logical Volumes LVM supports RAID0/1/4/5/6/10. An LVM RAID volume has the following characteristics: RAID logical volumes created and managed by means of LVM leverage the MD kernel drivers. RAID1 images can be temporarily split from the array and merged back into the array later. LVM RAID volumes support snapshots. For information on creating RAID logical volumes, see Section 4.4.3, "RAID Logical Volumes" . Note RAID logical volumes are not cluster-aware. While RAID logical volumes can be created and activated exclusively on one machine, they cannot be activated simultaneously on more than one machine. 2.3.4. Thinly-Provisioned Logical Volumes (Thin Volumes) Logical volumes can be thinly provisioned. This allows you to create logical volumes that are larger than the available extents. Using thin provisioning, you can manage a storage pool of free space, known as a thin pool, which can be allocated to an arbitrary number of devices when needed by applications. You can then create devices that can be bound to the thin pool for later allocation when an application actually writes to the logical volume. The thin pool can be expanded dynamically when needed for cost-effective allocation of storage space. Note Thin volumes are not supported across the nodes in a cluster. The thin pool and all its thin volumes must be exclusively activated on only one cluster node. By using thin provisioning, a storage administrator can overcommit the physical storage, often avoiding the need to purchase additional storage. For example, if ten users each request a 100GB file system for their application, the storage administrator can create what appears to be a 100GB file system for each user but which is backed by less actual storage that is used only when needed. Note When using thin provisioning, it is important that the storage administrator monitor the storage pool and add more capacity if it starts to become full. To make sure that all available space can be used, LVM supports data discard. This allows for re-use of the space that was formerly used by a discarded file or other block range. For information on creating thin volumes, see Section 4.4.5, "Creating Thinly-Provisioned Logical Volumes" . Thin volumes provide support for a new implementation of copy-on-write (COW) snapshot logical volumes, which allow many virtual devices to share the same data in the thin pool. For information on thin snapshot volumes, see Section 2.3.6, "Thinly-Provisioned Snapshot Volumes" . 2.3.5. Snapshot Volumes The LVM snapshot feature provides the ability to create virtual images of a device at a particular instant without causing a service interruption. When a change is made to the original device (the origin) after a snapshot is taken, the snapshot feature makes a copy of the changed data area as it was prior to the change so that it can reconstruct the state of the device. Note LVM supports thinly-provisioned snapshots. For information on thinly provisioned snapshot volumes, see Section 2.3.6, "Thinly-Provisioned Snapshot Volumes" . Note LVM snapshots are not supported across the nodes in a cluster. You cannot create a snapshot volume in a clustered volume group. Because a snapshot copies only the data areas that change after the snapshot is created, the snapshot feature requires a minimal amount of storage. 
For example, with a rarely updated origin, 3-5% of the origin's capacity is sufficient to maintain the snapshot. Note Snapshot copies of a file system are virtual copies, not an actual media backup for a file system. Snapshots do not provide a substitute for a backup procedure. The size of the snapshot governs the amount of space set aside for storing the changes to the origin volume. For example, if you made a snapshot and then completely overwrote the origin, the snapshot would have to be at least as big as the origin volume to hold the changes. You need to dimension a snapshot according to the expected level of change. So, for example, a short-lived snapshot of a read-mostly volume, such as /usr , would need less space than a long-lived snapshot of a volume that sees a greater number of writes, such as /home . If a snapshot runs full, the snapshot becomes invalid, since it can no longer track changes on the origin volume. You should regularly monitor the size of the snapshot. Snapshots are fully resizable, however, so if you have the storage capacity, you can increase the size of the snapshot volume to prevent it from getting dropped. Conversely, if you find that the snapshot volume is larger than you need, you can reduce the size of the volume to free up space that is needed by other logical volumes. When you create a snapshot file system, full read and write access to the origin stays possible. If a chunk on a snapshot is changed, that chunk is marked and never gets copied from the original volume. There are several uses for the snapshot feature: Most typically, a snapshot is taken when you need to perform a backup on a logical volume without halting the live system that is continuously updating the data. You can execute the fsck command on a snapshot file system to check the file system integrity and determine whether the original file system requires file system repair. Because the snapshot is read/write, you can test applications against production data by taking a snapshot and running tests against the snapshot, leaving the real data untouched. You can create LVM volumes for use with Red Hat Virtualization. LVM snapshots can be used to create snapshots of virtual guest images. These snapshots can provide a convenient way to modify existing guests or create new guests with minimal additional storage. For information on creating LVM-based storage pools with Red Hat Virtualization, see the Virtualization Administration Guide . For information on creating snapshot volumes, see Section 4.4.6, "Creating Snapshot Volumes" . You can use the --merge option of the lvconvert command to merge a snapshot into its origin volume. One use for this feature is to perform system rollback if you have lost data or files or otherwise need to restore your system to a previous state. After you merge the snapshot volume, the resulting logical volume will have the origin volume's name, minor number, and UUID, and the merged snapshot is removed. For information on using this option, see Section 4.4.9, "Merging Snapshot Volumes" . 2.3.6. Thinly-Provisioned Snapshot Volumes Red Hat Enterprise Linux provides support for thinly-provisioned snapshot volumes. Thin snapshot volumes allow many virtual devices to be stored on the same data volume. This simplifies administration and allows for the sharing of data between snapshot volumes. As for all LVM snapshot volumes, as well as all thin volumes, thin snapshot volumes are not supported across the nodes in a cluster. 
The snapshot volume must be exclusively activated on only one cluster node. Thin snapshot volumes provide the following benefits: A thin snapshot volume can reduce disk usage when there are multiple snapshots of the same origin volume. If there are multiple snapshots of the same origin, then a write to the origin will cause one COW operation to preserve the data. Increasing the number of snapshots of the origin should yield no major slowdown. Thin snapshot volumes can be used as a logical volume origin for another snapshot. This allows for an arbitrary depth of recursive snapshots (snapshots of snapshots of snapshots...). A snapshot of a thin logical volume also creates a thin logical volume. This consumes no data space until a COW operation is required, or until the snapshot itself is written. A thin snapshot volume does not need to be activated with its origin, so a user may have only the origin active while there are many inactive snapshot volumes of the origin. When you delete the origin of a thinly-provisioned snapshot volume, each snapshot of that origin volume becomes an independent thinly-provisioned volume. This means that instead of merging a snapshot with its origin volume, you may choose to delete the origin volume and then create a new thinly-provisioned snapshot using that independent volume as the origin volume for the new snapshot. Although there are many advantages to using thin snapshot volumes, there are some use cases for which the older LVM snapshot volume feature may be more appropriate to your needs: You cannot change the chunk size of a thin pool. If the thin pool has a large chunk size (for example, 1MB) and you require a short-living snapshot for which a chunk size that large is not efficient, you may elect to use the older snapshot feature. You cannot limit the size of a thin snapshot volume; the snapshot will use all of the space in the thin pool, if necessary. This may not be appropriate for your needs. In general, you should consider the specific requirements of your site when deciding which snapshot format to use. Note When using thin provisioning, it is important that the storage administrator monitor the storage pool and add more capacity if it starts to become full. For information on configuring and displaying information on thinly-provisioned snapshot volumes, see Section 4.4.7, "Creating Thinly-Provisioned Snapshot Volumes" . 2.3.7. Cache Volumes As of the Red Hat Enterprise Linux 7.1 release, LVM supports the use of fast block devices (such as SSD drives) as write-back or write-through caches for larger slower block devices. Users can create cache logical volumes to improve the performance of their existing logical volumes or create new cache logical volumes composed of a small and fast device coupled with a large and slow device. For information on creating LVM cache volumes, see Section 4.4.8, "Creating LVM Cache Logical Volumes" . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/logical_volume_manager_administration/lv_overview |
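To make the cache volume workflow just described more concrete, here is a minimal sketch that also touches the thin provisioning sections above; the volume group name, device paths, and sizes are placeholder assumptions, not values from this guide:

# Large, slow origin volume on the slow device; small cache pool on the fast device
lvcreate -L 100G -n slow_lv VG /dev/sdb1
lvcreate --type cache-pool -L 10G -n fast_pool VG /dev/sdc1

# Attach the cache pool to the origin volume to create a cache logical volume
lvconvert --type cache --cachepool VG/fast_pool VG/slow_lv

# Thin provisioning: a thin pool, a thin volume larger than the pool, and a thin snapshot
lvcreate -L 20G -T VG/thinpool
lvcreate -V 100G -T VG/thinpool -n thin_lv
lvcreate -s -n thin_snap VG/thin_lv

The thin snapshot takes no size argument because, as described above, it consumes space from the thin pool only when data is written.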
3.6. Do You Have Enough Disk Space? | 3.6. Do You Have Enough Disk Space? Nearly every modern-day operating system (OS) uses disk partitions , and Red Hat Enterprise Linux is no exception. When you install Red Hat Enterprise Linux, you may have to work with disk partitions. If you have not worked with disk partitions before (or need a quick review of the basic concepts), refer to Appendix A, An Introduction to Disk Partitions before proceeding. The disk space used by Red Hat Enterprise Linux must be separate from the disk space used by other OSes you may have installed on your system, such as Windows, OS/2, or even a different version of Linux. For x86, AMD64, and Intel 64 systems, at least two partitions ( / and swap ) must be dedicated to Red Hat Enterprise Linux. Before you start the installation process, you must have enough unpartitioned [1] disk space for the installation of Red Hat Enterprise Linux, or have one or more partitions that may be deleted, thereby freeing up enough disk space to install Red Hat Enterprise Linux. To gain a better sense of how much space you really need, refer to the recommended partitioning sizes discussed in Section 9.15.5, "Recommended Partitioning Scheme" . If you are not sure that you meet these conditions, or if you want to know how to create free disk space for your Red Hat Enterprise Linux installation, refer to Appendix A, An Introduction to Disk Partitions . [1] Unpartitioned disk space means that available disk space on the hard drives you are installing to has not been divided into sections for data. When you partition a disk, each partition behaves like a separate disk drive. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/Disk_Space-x86 |
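As a quick way to see whether a drive already has unpartitioned space before you begin the installation, you might run something like the following from an existing Linux system or a rescue shell; /dev/sda is a placeholder device name:

# List block devices and the partitions they already contain
lsblk

# Print the partition table, including any free (unpartitioned) regions
parted /dev/sda unit GiB print free

If no free region is large enough, you would either shrink or delete an existing partition, as described in Appendix A, before installing Red Hat Enterprise Linux.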
Chapter 4. Configuration information for Red Hat Quay | Chapter 4. Configuration information for Red Hat Quay Checking a configuration YAML can help you identify and resolve various issues related to the configuration of Red Hat Quay, including the following: Incorrect Configuration Parameters: If the database is not functioning as expected or is experiencing performance issues, your configuration parameters could be at fault. By checking the configuration YAML, administrators can ensure that all the required parameters are set correctly and match the intended settings for the database. Resource Limitations: The configuration YAML might specify resource limits for the database, such as memory and CPU limits. If the database is running into resource constraints or experiencing contention with other services, adjusting these limits can help optimize resource allocation and improve overall performance. Connectivity Issues: Incorrect network settings in the configuration YAML can lead to connectivity problems between the application and the database. Ensuring that the correct network configurations are in place can resolve issues related to connectivity and communication. Data Storage and Paths: The configuration YAML may include paths for storing data and logs. If the paths are misconfigured or inaccessible, the database may encounter errors while reading or writing data, leading to operational issues. Authentication and Security: The configuration YAML may contain authentication settings, including usernames, passwords, and access controls. Verifying these settings is crucial for maintaining the security of the database and ensuring that only authorized users have access. Plugin and Extension Settings: Some databases support extensions or plugins that enhance functionality. Issues may arise if these plugins are misconfigured or not loaded correctly. Checking the configuration YAML can help identify any problems with plugin settings. Replication and High Availability Settings: In clustered or replicated database setups, the configuration YAML may define replication settings and high availability configurations. Incorrect settings can lead to data inconsistency and system instability. Backup and Recovery Options: The configuration YAML might include backup and recovery options, specifying how data backups are performed and how data can be recovered in case of failures. Validating these settings can ensure data safety and successful recovery processes. By checking your configuration YAML, Red Hat Quay administrators can detect and resolve these issues before they cause significant disruptions to the application or service relying on the database. 4.1. Obtaining configuration information for Red Hat Quay Configuration information can be obtained for all types of Red Hat Quay deployments, including standalone, Operator, and geo-replication deployments. Obtaining configuration information can help you resolve issues with authentication and authorization, your database, object storage, and repository mirroring. After you have obtained the necessary configuration information, you can update your config.yaml file, search the Red Hat Knowledgebase for a solution, or file a support ticket with the Red Hat Support team. Procedure To obtain configuration information on Red Hat Quay Operator deployments, you can use oc exec , oc cp , or oc rsync . 
To use the oc exec command, enter the following command: USD oc exec -it <quay_pod_name> -- cat /conf/stack/config.yaml This command returns your config.yaml file directly to your terminal. To use the oc copy command, enter the following commands: USD oc cp <quay_pod_name>:/conf/stack/config.yaml /tmp/config.yaml To display this information in your terminal, enter the following command: USD cat /tmp/config.yaml To use the oc rsync command, enter the following commands: oc rsync <quay_pod_name>:/conf/stack/ /tmp/local_directory/ To display this information in your terminal, enter the following command: USD cat /tmp/local_directory/config.yaml Example output DISTRIBUTED_STORAGE_CONFIG: local_us: - RHOCSStorage - access_key: redacted bucket_name: lht-quay-datastore-68fff7b8-1b5e-46aa-8110-c4b7ead781f5 hostname: s3.openshift-storage.svc.cluster.local is_secure: true port: 443 secret_key: redacted storage_path: /datastorage/registry DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: - local_us DISTRIBUTED_STORAGE_PREFERENCE: - local_us To obtain configuration information on standalone Red Hat Quay deployments, you can use podman cp or podman exec . To use the podman copy command, enter the following commands: USD podman cp <quay_container_id>:/conf/stack/config.yaml /tmp/local_directory/ To display this information in your terminal, enter the following command: USD cat /tmp/local_directory/config.yaml To use podman exec , enter the following commands: USD podman exec -it <quay_container_id> cat /conf/stack/config.yaml Example output BROWSER_API_CALLS_XHR_ONLY: false ALLOWED_OCI_ARTIFACT_TYPES: application/vnd.oci.image.config.v1+json: - application/vnd.oci.image.layer.v1.tar+zstd application/vnd.sylabs.sif.config.v1+json: - application/vnd.sylabs.sif.layer.v1+tar AUTHENTICATION_TYPE: Database AVATAR_KIND: local BUILDLOGS_REDIS: host: quay-server.example.com password: strongpassword port: 6379 DATABASE_SECRET_KEY: 05ee6382-24a6-43c0-b30f-849c8a0f7260 DB_CONNECTION_ARGS: {} --- 4.2. Obtaining database configuration information You can obtain configuration information about your database by using the following procedure. Warning Interacting with the PostgreSQL database is potentially destructive. It is highly recommended that you perform the following procedure with the help of a Red Hat Quay Support Specialist. Procedure If you are using the Red Hat Quay Operator on OpenShift Container Platform, enter the following command: USD oc exec -it <database_pod> -- cat /var/lib/pgsql/data/userdata/postgresql.conf If you are using a standalone deployment of Red Hat Quay, enter the following command: USD podman exec -it <database_container> cat /var/lib/pgsql/data/userdata/postgresql.conf | [
"oc exec -it <quay_pod_name> -- cat /conf/stack/config.yaml",
"oc cp <quay_pod_name>:/conf/stack/config.yaml /tmp/config.yaml",
"cat /tmp/config.yaml",
"rsync <quay_pod_name>:/conf/stack/ /tmp/local_directory/",
"cat /tmp/local_directory/config.yaml",
"DISTRIBUTED_STORAGE_CONFIG: local_us: - RHOCSStorage - access_key: redacted bucket_name: lht-quay-datastore-68fff7b8-1b5e-46aa-8110-c4b7ead781f5 hostname: s3.openshift-storage.svc.cluster.local is_secure: true port: 443 secret_key: redacted storage_path: /datastorage/registry DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: - local_us DISTRIBUTED_STORAGE_PREFERENCE: - local_us",
"podman cp <quay_container_id>:/conf/stack/config.yaml /tmp/local_directory/",
"cat /tmp/local_directory/config.yaml",
"podman exec -it <quay_container_id> cat /conf/stack/config.yaml",
"BROWSER_API_CALLS_XHR_ONLY: false ALLOWED_OCI_ARTIFACT_TYPES: application/vnd.oci.image.config.v1+json: - application/vnd.oci.image.layer.v1.tar+zstd application/vnd.sylabs.sif.config.v1+json: - application/vnd.sylabs.sif.layer.v1+tar AUTHENTICATION_TYPE: Database AVATAR_KIND: local BUILDLOGS_REDIS: host: quay-server.example.com password: strongpassword port: 6379 DATABASE_SECRET_KEY: 05ee6382-24a6-43c0-b30f-849c8a0f7260 DB_CONNECTION_ARGS: {} ---",
"oc exec -it <database_pod> -- cat /var/lib/pgsql/data/userdata/postgresql.conf",
"podman exec -it <database_container> cat /var/lib/pgsql/data/userdata/postgresql.conf"
] | https://docs.redhat.com/en/documentation/red_hat_quay/3.9/html/troubleshooting_red_hat_quay/obtaining-quay-config-information |
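After you have pulled a copy of config.yaml to a local path such as /tmp/config.yaml using one of the procedures above, a quick sanity check might look like the following sketch; the grep pattern only lists a few keys taken from the example output and is not an exhaustive set:

# Confirm the file was copied and is readable
ls -l /tmp/config.yaml

# Pull out a few settings that commonly matter when troubleshooting
grep -E 'AUTHENTICATION_TYPE|DISTRIBUTED_STORAGE_PREFERENCE|BUILDLOGS_REDIS' /tmp/config.yaml

Remember that the file can contain secrets such as database and Redis passwords, so treat the local copy accordingly and delete it when you are finished.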
Chapter 12. Updating managed clusters with the Topology Aware Lifecycle Manager | Chapter 12. Updating managed clusters with the Topology Aware Lifecycle Manager You can use the Topology Aware Lifecycle Manager (TALM) to manage the software lifecycle of multiple clusters. TALM uses Red Hat Advanced Cluster Management (RHACM) policies to perform changes on the target clusters. Important Using PolicyGenerator resources with GitOps ZTP is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 12.1. About the Topology Aware Lifecycle Manager configuration The Topology Aware Lifecycle Manager (TALM) manages the deployment of Red Hat Advanced Cluster Management (RHACM) policies for one or more OpenShift Container Platform clusters. Using TALM in a large network of clusters allows the phased rollout of policies to the clusters in limited batches. This helps to minimize possible service disruptions when updating. With TALM, you can control the following actions: The timing of the update The number of RHACM-managed clusters The subset of managed clusters to apply the policies to The update order of the clusters The set of policies remediated to the cluster The order of policies remediated to the cluster The assignment of a canary cluster For single-node OpenShift, the Topology Aware Lifecycle Manager (TALM) offers pre-caching images for clusters with limited bandwidth. TALM supports the orchestration of the OpenShift Container Platform y-stream and z-stream updates, and day-two operations on y-streams and z-streams. 12.2. About managed policies used with Topology Aware Lifecycle Manager The Topology Aware Lifecycle Manager (TALM) uses RHACM policies for cluster updates. TALM can be used to manage the rollout of any policy CR where the remediationAction field is set to inform . Supported use cases include the following: Manual user creation of policy CRs Automatically generated policies from the PolicyGenerator or PolicyGentemplate custom resource definition (CRD) For policies that update an Operator subscription with manual approval, TALM provides additional functionality that approves the installation of the updated Operator. For more information about managed policies, see Policy Overview in the RHACM documentation. Additional resources About the PolicyGenerator CRD 12.3. Installing the Topology Aware Lifecycle Manager by using the web console You can use the OpenShift Container Platform web console to install the Topology Aware Lifecycle Manager. Prerequisites Install the latest version of the RHACM Operator. TALM requires RHACM 2.9 or later. Set up a hub cluster with a disconnected registry. Log in as a user with cluster-admin privileges. Procedure In the OpenShift Container Platform web console, navigate to Operators OperatorHub . Search for the Topology Aware Lifecycle Manager from the list of available Operators, and then click Install . Keep the default selection of Installation mode ["All namespaces on the cluster (default)"] and Installed Namespace ("openshift-operators") to ensure that the Operator is installed properly. 
Click Install . Verification To confirm that the installation is successful: Navigate to the Operators Installed Operators page. Check that the Operator is installed in the All Namespaces namespace and its status is Succeeded . If the Operator is not installed successfully: Navigate to the Operators Installed Operators page and inspect the Status column for any errors or failures. Navigate to the Workloads Pods page and check the logs in any containers in the cluster-group-upgrades-controller-manager pod that are reporting issues. 12.4. Installing the Topology Aware Lifecycle Manager by using the CLI You can use the OpenShift CLI ( oc ) to install the Topology Aware Lifecycle Manager (TALM). Prerequisites Install the OpenShift CLI ( oc ). Install the latest version of the RHACM Operator. TALM requires RHACM 2.9 or later. Set up a hub cluster with a disconnected registry. Log in as a user with cluster-admin privileges. Procedure Create a Subscription CR: Define the Subscription CR and save the YAML file, for example, talm-subscription.yaml : apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-topology-aware-lifecycle-manager-subscription namespace: openshift-operators spec: channel: "stable" name: topology-aware-lifecycle-manager source: redhat-operators sourceNamespace: openshift-marketplace Create the Subscription CR by running the following command: USD oc create -f talm-subscription.yaml Verification Verify that the installation succeeded by inspecting the CSV resource: USD oc get csv -n openshift-operators Example output NAME DISPLAY VERSION REPLACES PHASE topology-aware-lifecycle-manager.4.18.x Topology Aware Lifecycle Manager 4.18.x Succeeded Verify that TALM is up and running: USD oc get deploy -n openshift-operators Example output NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE openshift-operators cluster-group-upgrades-controller-manager 1/1 1 1 14s 12.5. About the ClusterGroupUpgrade CR The Topology Aware Lifecycle Manager (TALM) builds the remediation plan from the ClusterGroupUpgrade CR for a group of clusters. You can define the following specifications in a ClusterGroupUpgrade CR: Clusters in the group Blocking ClusterGroupUpgrade CRs Applicable list of managed policies Number of concurrent updates Applicable canary updates Actions to perform before and after the update Update timing You can control the start time of an update using the enable field in the ClusterGroupUpgrade CR. For example, if you have a scheduled maintenance window of four hours, you can prepare a ClusterGroupUpgrade CR with the enable field set to false . You can set the timeout by configuring the spec.remediationStrategy.timeout setting as follows: spec remediationStrategy: maxConcurrency: 1 timeout: 240 You can use the batchTimeoutAction to determine what happens if an update fails for a cluster. You can specify continue to skip the failing cluster and continue to upgrade other clusters, or abort to stop policy remediation for all clusters. Once the timeout elapses, TALM removes all enforce policies to ensure that no further updates are made to clusters. To apply the changes, you set the enable field to true . For more information, see the "Applying update policies to managed clusters" section. As TALM works through remediation of the policies to the specified clusters, the ClusterGroupUpgrade CR can report true or false statuses for a number of conditions. 
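As a minimal sketch of how the enable field might be flipped from the command line once a ClusterGroupUpgrade CR has been prepared, assuming a CR named cgu-1 in the default namespace (placeholder values), you can use the same oc patch form that appears later in the blocking-CR procedure:

# Enable a previously prepared ClusterGroupUpgrade CR so that TALM starts remediation
oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-1 \
    --type merge -p '{"spec":{"enable":true}}'

# Watch the reported conditions while the update progresses
oc get cgu cgu-1 -n default -o jsonpath='{.status.conditions}' | jq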
Note After TALM completes a cluster update, the cluster does not update again under the control of the same ClusterGroupUpgrade CR. You must create a new ClusterGroupUpgrade CR in the following cases: When you need to update the cluster again When the cluster changes to non-compliant with the inform policy after being updated 12.5.1. Selecting clusters TALM builds a remediation plan and selects clusters based on the following fields: The clusterLabelSelector field specifies the labels of the clusters that you want to update. This consists of a list of the standard label selectors from k8s.io/apimachinery/pkg/apis/meta/v1 . Each selector in the list uses either label value pairs or label expressions. Matches from each selector are added to the final list of clusters along with the matches from the clusterSelector field and the cluster field. The clusters field specifies a list of clusters to update. The canaries field specifies the clusters for canary updates. The maxConcurrency field specifies the number of clusters to update in a batch. The actions field specifies beforeEnable actions that TALM takes as it begins the update process, and afterCompletion actions that TALM takes as it completes policy remediation for each cluster. You can use the clusters , clusterLabelSelector , and clusterSelector fields together to create a combined list of clusters. The remediation plan starts with the clusters listed in the canaries field. Each canary cluster forms a single-cluster batch. Sample ClusterGroupUpgrade CR with the enabled field set to false apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: creationTimestamp: '2022-11-18T16:27:15Z' finalizers: - ran.openshift.io/cleanup-finalizer generation: 1 name: talm-cgu namespace: talm-namespace resourceVersion: '40451823' uid: cca245a5-4bca-45fa-89c0-aa6af81a596c Spec: actions: afterCompletion: 1 addClusterLabels: upgrade-done: "" deleteClusterLabels: upgrade-running: "" deleteObjects: true beforeEnable: 2 addClusterLabels: upgrade-running: "" clusters: 3 - spoke1 enable: false 4 managedPolicies: 5 - talm-policy preCaching: false remediationStrategy: 6 canaries: 7 - spoke1 maxConcurrency: 2 8 timeout: 240 clusterLabelSelectors: 9 - matchExpressions: - key: label1 operator: In values: - value1a - value1b batchTimeoutAction: 10 status: 11 computedMaxConcurrency: 2 conditions: - lastTransitionTime: '2022-11-18T16:27:15Z' message: All selected clusters are valid reason: ClusterSelectionCompleted status: 'True' type: ClustersSelected 12 - lastTransitionTime: '2022-11-18T16:27:15Z' message: Completed validation reason: ValidationCompleted status: 'True' type: Validated 13 - lastTransitionTime: '2022-11-18T16:37:16Z' message: Not enabled reason: NotEnabled status: 'False' type: Progressing managedPoliciesForUpgrade: - name: talm-policy namespace: talm-namespace managedPoliciesNs: talm-policy: talm-namespace remediationPlan: - - spoke1 - - spoke2 - spoke3 status: 1 Specifies the action that TALM takes when it completes policy remediation for each cluster. 2 Specifies the action that TALM takes as it begins the update process. 3 Defines the list of clusters to update. 4 The enable field is set to false . 5 Lists the user-defined set of policies to remediate. 6 Defines the specifics of the cluster updates. 7 Defines the clusters for canary updates. 8 Defines the maximum number of concurrent updates in a batch. 
The number of remediation batches is the number of canary clusters plus the number of remaining (non-canary) clusters divided by the maxConcurrency value. The clusters that are already compliant with all the managed policies are excluded from the remediation plan. 9 Displays the parameters for selecting clusters. 10 Controls what happens if a batch times out. Possible values are abort or continue . If unspecified, the default is continue . 11 Displays information about the status of the updates. 12 The ClustersSelected condition shows that all selected clusters are valid. 13 The Validated condition shows that all selected clusters have been validated. Note Any failure during the update of a canary cluster stops the update process. When the remediation plan is successfully created, you can set the enable field to true, and TALM starts to update the non-compliant clusters with the specified managed policies. Note You can only make changes to the spec fields if the enable field of the ClusterGroupUpgrade CR is set to false . 12.5.2. Validating TALM checks that all specified managed policies are available and correct, and uses the Validated condition to report the status and reasons as follows: true Validation is completed. false Policies are missing or invalid, or an invalid platform image has been specified. 12.5.3. Pre-caching Clusters might have limited bandwidth to access the container image registry, which can cause a timeout before the updates are completed. On single-node OpenShift clusters, you can use pre-caching to avoid this. The container image pre-caching starts when you create a ClusterGroupUpgrade CR with the preCaching field set to true . TALM compares the available disk space with the estimated OpenShift Container Platform image size to ensure that there is enough space. If a cluster has insufficient space, TALM cancels pre-caching for that cluster and does not remediate policies on it. TALM uses the PrecacheSpecValid condition to report status information as follows: true The pre-caching spec is valid and consistent. false The pre-caching spec is incomplete. TALM uses the PrecachingSucceeded condition to report status information as follows: true TALM has concluded the pre-caching process. If pre-caching fails for any cluster, the update fails for that cluster but proceeds for all other clusters. A message informs you if pre-caching has failed for any clusters. false Pre-caching is still in progress for one or more clusters or has failed for all clusters. For more information, see the "Using the container image pre-cache feature" section. 12.5.4. Updating clusters TALM enforces the policies following the remediation plan. Enforcing the policies for subsequent batches starts immediately after all the clusters of the current batch are compliant with all the managed policies. If the batch times out, TALM moves on to the next batch. The timeout value of a batch is the spec.timeout field divided by the number of batches in the remediation plan. For example, with spec.timeout set to 240 and four batches in the remediation plan, each batch has 60 minutes to complete. TALM uses the Progressing condition to report the status and reasons as follows: true TALM is remediating non-compliant policies. false The update is not in progress. Possible reasons for this are: All clusters are compliant with all the managed policies. The update timed out as policy remediation took too long. Blocking CRs are missing from the system or have not yet completed. The ClusterGroupUpgrade CR is not enabled. 
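One way to see which of these Progressing reasons currently applies, and how far the current batch has progressed, is to query the CR status directly, as in the following sketch; the CR name and namespace are placeholders, and the status paths follow the sample CRs shown in this chapter:

# Show only the Progressing condition of the ClusterGroupUpgrade CR
oc get cgu <cgu_name> -n <namespace> \
    -o jsonpath='{.status.conditions[?(@.type=="Progressing")]}' | jq

# Show per-cluster progress for the batch that is currently being remediated
oc get cgu <cgu_name> -n <namespace> \
    -o jsonpath='{.status.status.currentBatchRemediationProgress}' | jq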
Note The managed policies apply in the order that they are listed in the managedPolicies field in the ClusterGroupUpgrade CR. One managed policy is applied to the specified clusters at a time. When a cluster complies with the current policy, the managed policy is applied to it. Sample ClusterGroupUpgrade CR in the Progressing state apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: creationTimestamp: '2022-11-18T16:27:15Z' finalizers: - ran.openshift.io/cleanup-finalizer generation: 1 name: talm-cgu namespace: talm-namespace resourceVersion: '40451823' uid: cca245a5-4bca-45fa-89c0-aa6af81a596c Spec: actions: afterCompletion: deleteObjects: true beforeEnable: {} clusters: - spoke1 enable: true managedPolicies: - talm-policy preCaching: true remediationStrategy: canaries: - spoke1 maxConcurrency: 2 timeout: 240 clusterLabelSelectors: - matchExpressions: - key: label1 operator: In values: - value1a - value1b batchTimeoutAction: status: clusters: - name: spoke1 state: complete computedMaxConcurrency: 2 conditions: - lastTransitionTime: '2022-11-18T16:27:15Z' message: All selected clusters are valid reason: ClusterSelectionCompleted status: 'True' type: ClustersSelected - lastTransitionTime: '2022-11-18T16:27:15Z' message: Completed validation reason: ValidationCompleted status: 'True' type: Validated - lastTransitionTime: '2022-11-18T16:37:16Z' message: Remediating non-compliant policies reason: InProgress status: 'True' type: Progressing 1 managedPoliciesForUpgrade: - name: talm-policy namespace: talm-namespace managedPoliciesNs: talm-policy: talm-namespace remediationPlan: - - spoke1 - - spoke2 - spoke3 status: currentBatch: 2 currentBatchRemediationProgress: spoke2: state: Completed spoke3: policyIndex: 0 state: InProgress currentBatchStartedAt: '2022-11-18T16:27:16Z' startedAt: '2022-11-18T16:27:15Z' 1 The Progressing fields show that TALM is in the process of remediating policies. 12.5.5. Update status TALM uses the Succeeded condition to report the status and reasons as follows: true All clusters are compliant with the specified managed policies. false Policy remediation failed as there were no clusters available for remediation, or because policy remediation took too long for one of the following reasons: The current batch contains canary updates and the cluster in the batch does not comply with all the managed policies within the batch timeout. Clusters did not comply with the managed policies within the timeout value specified in the remediationStrategy field. 
Sample ClusterGroupUpgrade CR in the Succeeded state apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-upgrade-complete namespace: default spec: clusters: - spoke1 - spoke4 enable: true managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: 1 clusters: - name: spoke1 state: complete - name: spoke4 state: complete conditions: - message: All selected clusters are valid reason: ClusterSelectionCompleted status: "True" type: ClustersSelected - message: Completed validation reason: ValidationCompleted status: "True" type: Validated - message: All clusters are compliant with all the managed policies reason: Completed status: "False" type: Progressing 2 - message: All clusters are compliant with all the managed policies reason: Completed status: "True" type: Succeeded 3 managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-pao-sub-policy namespace: default remediationPlan: - - spoke1 - - spoke4 status: completedAt: '2022-11-18T16:27:16Z' startedAt: '2022-11-18T16:27:15Z' 2 In the Progressing fields, the status is false as the update has completed; clusters are compliant with all the managed policies. 3 The Succeeded fields show that the validations completed successfully. 1 The status field includes a list of clusters and their respective statuses. The status of a cluster can be complete or timedout . Sample ClusterGroupUpgrade CR in the timedout state apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: creationTimestamp: '2022-11-18T16:27:15Z' finalizers: - ran.openshift.io/cleanup-finalizer generation: 1 name: talm-cgu namespace: talm-namespace resourceVersion: '40451823' uid: cca245a5-4bca-45fa-89c0-aa6af81a596c spec: actions: afterCompletion: deleteObjects: true beforeEnable: {} clusters: - spoke1 - spoke2 enable: true managedPolicies: - talm-policy preCaching: false remediationStrategy: maxConcurrency: 2 timeout: 240 status: clusters: - name: spoke1 state: complete - currentPolicy: 1 name: talm-policy status: NonCompliant name: spoke2 state: timedout computedMaxConcurrency: 2 conditions: - lastTransitionTime: '2022-11-18T16:27:15Z' message: All selected clusters are valid reason: ClusterSelectionCompleted status: 'True' type: ClustersSelected - lastTransitionTime: '2022-11-18T16:27:15Z' message: Completed validation reason: ValidationCompleted status: 'True' type: Validated - lastTransitionTime: '2022-11-18T16:37:16Z' message: Policy remediation took too long reason: TimedOut status: 'False' type: Progressing - lastTransitionTime: '2022-11-18T16:37:16Z' message: Policy remediation took too long reason: TimedOut status: 'False' type: Succeeded 2 managedPoliciesForUpgrade: - name: talm-policy namespace: talm-namespace managedPoliciesNs: talm-policy: talm-namespace remediationPlan: - - spoke1 - spoke2 status: startedAt: '2022-11-18T16:27:15Z' completedAt: '2022-11-18T20:27:15Z' 1 If a cluster's state is timedout , the currentPolicy field shows the name of the policy and the policy status. 2 The status for succeeded is false and the message indicates that policy remediation took too long. 12.5.6. Blocking ClusterGroupUpgrade CRs You can create multiple ClusterGroupUpgrade CRs and control their order of application. 
For example, if you create ClusterGroupUpgrade CR C that blocks the start of ClusterGroupUpgrade CR A, then ClusterGroupUpgrade CR A cannot start until the status of ClusterGroupUpgrade CR C becomes UpgradeComplete . One ClusterGroupUpgrade CR can have multiple blocking CRs. In this case, all the blocking CRs must complete before the upgrade for the current CR can start. Prerequisites Install the Topology Aware Lifecycle Manager (TALM). Provision one or more managed clusters. Log in as a user with cluster-admin privileges. Create RHACM policies in the hub cluster. Procedure Save the content of the ClusterGroupUpgrade CRs in the cgu-a.yaml , cgu-b.yaml , and cgu-c.yaml files. apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-a namespace: default spec: blockingCRs: 1 - name: cgu-c namespace: default clusters: - spoke1 - spoke2 - spoke3 enable: false managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy remediationStrategy: canaries: - spoke1 maxConcurrency: 2 timeout: 240 status: conditions: - message: The ClusterGroupUpgrade CR is not enabled reason: UpgradeNotStarted status: "False" type: Ready managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-pao-sub-policy namespace: default - name: policy3-common-ptp-sub-policy namespace: default placementBindings: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy placementRules: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy remediationPlan: - - spoke1 - - spoke2 1 Defines the blocking CRs. The cgu-a update cannot start until cgu-c is complete. apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-b namespace: default spec: blockingCRs: 1 - name: cgu-a namespace: default clusters: - spoke4 - spoke5 enable: false managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: conditions: - message: The ClusterGroupUpgrade CR is not enabled reason: UpgradeNotStarted status: "False" type: Ready managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-pao-sub-policy namespace: default - name: policy3-common-ptp-sub-policy namespace: default - name: policy4-common-sriov-sub-policy namespace: default placementBindings: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy placementRules: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy remediationPlan: - - spoke4 - - spoke5 status: {} 1 The cgu-b update cannot start until cgu-a is complete. 
apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-c namespace: default spec: 1 clusters: - spoke6 enable: false managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: conditions: - message: The ClusterGroupUpgrade CR is not enabled reason: UpgradeNotStarted status: "False" type: Ready managedPoliciesCompliantBeforeUpgrade: - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy4-common-sriov-sub-policy namespace: default placementBindings: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy placementRules: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy remediationPlan: - - spoke6 status: {} 1 The cgu-c update does not have any blocking CRs. TALM starts the cgu-c update when the enable field is set to true . Create the ClusterGroupUpgrade CRs by running the following command for each relevant CR: USD oc apply -f <name>.yaml Start the update process by running the following command for each relevant CR: USD oc --namespace=default patch clustergroupupgrade.ran.openshift.io/<name> \ --type merge -p '{"spec":{"enable":true}}' The following examples show ClusterGroupUpgrade CRs where the enable field is set to true : Example for cgu-a with blocking CRs apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-a namespace: default spec: blockingCRs: - name: cgu-c namespace: default clusters: - spoke1 - spoke2 - spoke3 enable: true managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy remediationStrategy: canaries: - spoke1 maxConcurrency: 2 timeout: 240 status: conditions: - message: 'The ClusterGroupUpgrade CR is blocked by other CRs that have not yet completed: [cgu-c]' 1 reason: UpgradeCannotStart status: "False" type: Ready managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-pao-sub-policy namespace: default - name: policy3-common-ptp-sub-policy namespace: default placementBindings: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy placementRules: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy remediationPlan: - - spoke1 - - spoke2 status: {} 1 Shows the list of blocking CRs. 
Example for cgu-b with blocking CRs apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-b namespace: default spec: blockingCRs: - name: cgu-a namespace: default clusters: - spoke4 - spoke5 enable: true managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: conditions: - message: 'The ClusterGroupUpgrade CR is blocked by other CRs that have not yet completed: [cgu-a]' 1 reason: UpgradeCannotStart status: "False" type: Ready managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-pao-sub-policy namespace: default - name: policy3-common-ptp-sub-policy namespace: default - name: policy4-common-sriov-sub-policy namespace: default placementBindings: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy placementRules: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy remediationPlan: - - spoke4 - - spoke5 status: {} 1 Shows the list of blocking CRs. Example for cgu-c with blocking CRs apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-c namespace: default spec: clusters: - spoke6 enable: true managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: conditions: - message: The ClusterGroupUpgrade CR has upgrade policies that are still non compliant 1 reason: UpgradeNotCompleted status: "False" type: Ready managedPoliciesCompliantBeforeUpgrade: - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy4-common-sriov-sub-policy namespace: default placementBindings: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy placementRules: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy remediationPlan: - - spoke6 status: currentBatch: 1 remediationPlanForBatch: spoke6: 0 1 The cgu-c update does not have any blocking CRs. 12.6. Update policies on managed clusters The Topology Aware Lifecycle Manager (TALM) remediates a set of inform policies for the clusters specified in the ClusterGroupUpgrade custom resource (CR). TALM remediates inform policies by controlling the remediationAction specification in a Policy CR through the bindingOverrides.remediationAction and subFilter specifications in the PlacementBinding CR. Each policy has its own corresponding RHACM placement rule and RHACM placement binding. One by one, TALM adds each cluster from the current batch to the placement rule that corresponds with the applicable managed policy. If a cluster is already compliant with a policy, TALM skips applying that policy on the compliant cluster. TALM then moves on to applying the policy to the non-compliant cluster. After TALM completes the updates in a batch, all clusters are removed from the placement rules associated with the policies. Then, the update of the batch starts. 
If a spoke cluster does not report any compliant state to RHACM, the managed policies on the hub cluster can be missing status information that TALM needs. TALM handles these cases in the following ways: If a policy's status.compliant field is missing, TALM ignores the policy and adds a log entry. Then, TALM continues looking at the policy's status.status field. If a policy's status.status is missing, TALM produces an error. If a cluster's compliance status is missing in the policy's status.status field, TALM considers that cluster to be non-compliant with that policy. The ClusterGroupUpgrade CR's batchTimeoutAction determines what happens if an upgrade fails for a cluster. You can specify continue to skip the failing cluster and continue to upgrade other clusters, or specify abort to stop the policy remediation for all clusters. Once the timeout elapses, TALM removes all the resources it created to ensure that no further updates are made to clusters. Example upgrade policy apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: ocp-4.4.18.4 namespace: platform-upgrade spec: disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: upgrade spec: namespaceselector: exclude: - kube-* include: - '*' object-templates: - complianceType: musthave objectDefinition: apiVersion: config.openshift.io/v1 kind: ClusterVersion metadata: name: version spec: channel: stable-4.18 desiredUpdate: version: 4.4.18.4 upstream: https://api.openshift.com/api/upgrades_info/v1/graph status: history: - state: Completed version: 4.4.18.4 remediationAction: inform severity: low remediationAction: inform For more information about RHACM policies, see Policy overview . Additional resources About the PolicyGenerator CRD 12.6.1. Configuring Operator subscriptions for managed clusters that you install with TALM Topology Aware Lifecycle Manager (TALM) can only approve the install plan for an Operator if the Subscription custom resource (CR) of the Operator contains the status.state.AtLatestKnown field. Procedure Add the status.state.AtLatestKnown field to the Subscription CR of the Operator: Example Subscription CR apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging annotations: ran.openshift.io/ztp-deploy-wave: "2" spec: channel: "stable" name: cluster-logging source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown 1 1 The status.state: AtLatestKnown field is used for the latest Operator version available from the Operator catalog. Note When a new version of the Operator is available in the registry, the associated policy becomes non-compliant. Apply the changed Subscription policy to your managed clusters with a ClusterGroupUpgrade CR. 12.6.2. Applying update policies to managed clusters You can update your managed clusters by applying your policies. Prerequisites Install the Topology Aware Lifecycle Manager (TALM). TALM requires RHACM 2.9 or later. Provision one or more managed clusters. Log in as a user with cluster-admin privileges. Create RHACM policies in the hub cluster. Procedure Save the contents of the ClusterGroupUpgrade CR in the cgu-1.yaml file. 
apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-1 namespace: default spec: managedPolicies: 1 - policy1-common-cluster-version-policy - policy2-common-nto-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy enable: false clusters: 2 - spoke1 - spoke2 - spoke5 - spoke6 remediationStrategy: maxConcurrency: 2 3 timeout: 240 4 batchTimeoutAction: 5 1 The name of the policies to apply. 2 The list of clusters to update. 3 The maxConcurrency field signifies the number of clusters updated at the same time. 4 The update timeout in minutes. 5 Controls what happens if a batch times out. Possible values are abort or continue . If unspecified, the default is continue . Create the ClusterGroupUpgrade CR by running the following command: USD oc create -f cgu-1.yaml Check if the ClusterGroupUpgrade CR was created in the hub cluster by running the following command: USD oc get cgu --all-namespaces Example output NAMESPACE NAME AGE STATE DETAILS default cgu-1 8m55 NotEnabled Not Enabled Check the status of the update by running the following command: USD oc get cgu -n default cgu-1 -ojsonpath='{.status}' | jq Example output { "computedMaxConcurrency": 2, "conditions": [ { "lastTransitionTime": "2022-02-25T15:34:07Z", "message": "Not enabled", 1 "reason": "NotEnabled", "status": "False", "type": "Progressing" } ], "managedPoliciesContent": { "policy1-common-cluster-version-policy": "null", "policy2-common-nto-sub-policy": "[{\"kind\":\"Subscription\",\"name\":\"node-tuning-operator\",\"namespace\":\"openshift-cluster-node-tuning-operator\"}]", "policy3-common-ptp-sub-policy": "[{\"kind\":\"Subscription\",\"name\":\"ptp-operator-subscription\",\"namespace\":\"openshift-ptp\"}]", "policy4-common-sriov-sub-policy": "[{\"kind\":\"Subscription\",\"name\":\"sriov-network-operator-subscription\",\"namespace\":\"openshift-sriov-network-operator\"}]" }, "managedPoliciesForUpgrade": [ { "name": "policy1-common-cluster-version-policy", "namespace": "default" }, { "name": "policy2-common-nto-sub-policy", "namespace": "default" }, { "name": "policy3-common-ptp-sub-policy", "namespace": "default" }, { "name": "policy4-common-sriov-sub-policy", "namespace": "default" } ], "managedPoliciesNs": { "policy1-common-cluster-version-policy": "default", "policy2-common-nto-sub-policy": "default", "policy3-common-ptp-sub-policy": "default", "policy4-common-sriov-sub-policy": "default" }, "placementBindings": [ "cgu-policy1-common-cluster-version-policy", "cgu-policy2-common-nto-sub-policy", "cgu-policy3-common-ptp-sub-policy", "cgu-policy4-common-sriov-sub-policy" ], "placementRules": [ "cgu-policy1-common-cluster-version-policy", "cgu-policy2-common-nto-sub-policy", "cgu-policy3-common-ptp-sub-policy", "cgu-policy4-common-sriov-sub-policy" ], "remediationPlan": [ [ "spoke1", "spoke2" ], [ "spoke5", "spoke6" ] ], "status": {} } 1 The spec.enable field in the ClusterGroupUpgrade CR is set to false . 
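If you only need the reason that the CR is not progressing, you can filter the conditions instead of printing the whole status. The following jq filter is an illustrative convenience, not a required step:

USD oc get cgu -n default cgu-1 -ojsonpath='{.status.conditions}' | jq '.[] | select(.type=="Progressing")'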
Change the value of the spec.enable field to true by running the following command: USD oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-1 \ --patch '{"spec":{"enable":true}}' --type=merge Verification Check the status of the update by running the following command: USD oc get cgu -n default cgu-1 -ojsonpath='{.status}' | jq Example output { "computedMaxConcurrency": 2, "conditions": [ 1 { "lastTransitionTime": "2022-02-25T15:33:07Z", "message": "All selected clusters are valid", "reason": "ClusterSelectionCompleted", "status": "True", "type": "ClustersSelected" }, { "lastTransitionTime": "2022-02-25T15:33:07Z", "message": "Completed validation", "reason": "ValidationCompleted", "status": "True", "type": "Validated" }, { "lastTransitionTime": "2022-02-25T15:34:07Z", "message": "Remediating non-compliant policies", "reason": "InProgress", "status": "True", "type": "Progressing" } ], "managedPoliciesContent": { "policy1-common-cluster-version-policy": "null", "policy2-common-nto-sub-policy": "[{\"kind\":\"Subscription\",\"name\":\"node-tuning-operator\",\"namespace\":\"openshift-cluster-node-tuning-operator\"}]", "policy3-common-ptp-sub-policy": "[{\"kind\":\"Subscription\",\"name\":\"ptp-operator-subscription\",\"namespace\":\"openshift-ptp\"}]", "policy4-common-sriov-sub-policy": "[{\"kind\":\"Subscription\",\"name\":\"sriov-network-operator-subscription\",\"namespace\":\"openshift-sriov-network-operator\"}]" }, "managedPoliciesForUpgrade": [ { "name": "policy1-common-cluster-version-policy", "namespace": "default" }, { "name": "policy2-common-nto-sub-policy", "namespace": "default" }, { "name": "policy3-common-ptp-sub-policy", "namespace": "default" }, { "name": "policy4-common-sriov-sub-policy", "namespace": "default" } ], "managedPoliciesNs": { "policy1-common-cluster-version-policy": "default", "policy2-common-nto-sub-policy": "default", "policy3-common-ptp-sub-policy": "default", "policy4-common-sriov-sub-policy": "default" }, "placementBindings": [ "cgu-policy1-common-cluster-version-policy", "cgu-policy2-common-nto-sub-policy", "cgu-policy3-common-ptp-sub-policy", "cgu-policy4-common-sriov-sub-policy" ], "placementRules": [ "cgu-policy1-common-cluster-version-policy", "cgu-policy2-common-nto-sub-policy", "cgu-policy3-common-ptp-sub-policy", "cgu-policy4-common-sriov-sub-policy" ], "remediationPlan": [ [ "spoke1", "spoke2" ], [ "spoke5", "spoke6" ] ], "status": { "currentBatch": 1, "currentBatchRemediationProgress": { "spoke1": { "policyIndex": 1, "state": "InProgress" }, "spoke2": { "policyIndex": 1, "state": "InProgress" } }, "currentBatchStartedAt": "2022-02-25T15:54:16Z", "startedAt": "2022-02-25T15:54:16Z" } } 1 Reflects the update progress of the current batch. Run this command again to receive updated information about the progress. 
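Instead of rerunning the command manually, you can poll only the per-cluster progress of the current batch. The following watch invocation is an illustrative example; the 30-second interval is an arbitrary choice:

USD watch -n 30 "oc get cgu -n default cgu-1 -ojsonpath='{.status.status.currentBatchRemediationProgress}' | jq"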
Check the status of the policies by running the following command: USD oc get policies -A Example output NAMESPACE NAME REMEDIATION ACTION COMPLIANCE STATE AGE spoke1 default.policy1-common-cluster-version-policy enforce Compliant 18m spoke1 default.policy2-common-nto-sub-policy enforce NonCompliant 18m spoke2 default.policy1-common-cluster-version-policy enforce Compliant 18m spoke2 default.policy2-common-nto-sub-policy enforce NonCompliant 18m spoke5 default.policy3-common-ptp-sub-policy inform NonCompliant 18m spoke5 default.policy4-common-sriov-sub-policy inform NonCompliant 18m spoke6 default.policy3-common-ptp-sub-policy inform NonCompliant 18m spoke6 default.policy4-common-sriov-sub-policy inform NonCompliant 18m default policy1-common-ptp-sub-policy inform Compliant 18m default policy2-common-sriov-sub-policy inform NonCompliant 18m default policy3-common-ptp-sub-policy inform NonCompliant 18m default policy4-common-sriov-sub-policy inform NonCompliant 18m The spec.remediationAction value changes to enforce for the child policies applied to the clusters from the current batch. The spec.remediationAction value remains inform for the child policies in the rest of the clusters. After the batch is complete, the spec.remediationAction value changes back to inform for the enforced child policies. If the policies include Operator subscriptions, you can check the installation progress directly on the single-node cluster. Export the KUBECONFIG file of the single-node cluster you want to check the installation progress for by running the following command: USD export KUBECONFIG=<cluster_kubeconfig_absolute_path> Check all the subscriptions present on the single-node cluster and look for the one in the policy you are trying to install through the ClusterGroupUpgrade CR by running the following command: USD oc get subs -A | grep -i <subscription_name> Example output for cluster-logging policy NAMESPACE NAME PACKAGE SOURCE CHANNEL openshift-logging cluster-logging cluster-logging redhat-operators stable If one of the managed policies includes a ClusterVersion CR, check the status of platform updates in the current batch by running the following command against the spoke cluster: USD oc get clusterversion Example output NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.4.18.5 True True 43s Working towards 4.4.18.7: 71 of 735 done (9% complete) Check the Operator subscription by running the following command: USD oc get subs -n <operator-namespace> <operator-subscription> -ojsonpath="{.status}" Check the install plans present on the single-node cluster that is associated with the desired subscription by running the following command: USD oc get installplan -n <subscription_namespace> Example output for cluster-logging Operator NAMESPACE NAME CSV APPROVAL APPROVED openshift-logging install-6khtw cluster-logging.5.3.3-4 Manual true 1 1 The install plans have their Approval field set to Manual and their Approved field changes from false to true after TALM approves the install plan. Note When TALM is remediating a policy containing a subscription, it automatically approves any install plans attached to that subscription. Where multiple install plans are needed to get the operator to the latest known version, TALM might approve multiple install plans, upgrading through one or more intermediate versions to get to the final version.
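To see whether any install plans are still waiting for approval while TALM steps through intermediate versions, you can list the unapproved install plans across all namespaces. This query is illustrative; it uses a client-side jsonpath filter on the spec.approved field:

USD oc get installplan -A -o jsonpath='{range .items[?(@.spec.approved==false)]}{.metadata.namespace}{"\t"}{.metadata.name}{"\n"}{end}'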
Check if the cluster service version for the Operator of the policy that the ClusterGroupUpgrade is installing reached the Succeeded phase by running the following command: USD oc get csv -n <operator_namespace> Example output for OpenShift Logging Operator NAME DISPLAY VERSION REPLACES PHASE cluster-logging.5.4.2 Red Hat OpenShift Logging 5.4.2 Succeeded 12.7. Using the container image pre-cache feature Single-node OpenShift clusters might have limited bandwidth to access the container image registry, which can cause a timeout before the updates are completed. Note The time of the update is not set by TALM. You can apply the ClusterGroupUpgrade CR at the beginning of the update by manual application or by external automation. The container image pre-caching starts when the preCaching field is set to true in the ClusterGroupUpgrade CR. TALM uses the PrecacheSpecValid condition to report status information as follows: true The pre-caching spec is valid and consistent. false The pre-caching spec is incomplete. TALM uses the PrecachingSucceeded condition to report status information as follows: true TALM has concluded the pre-caching process. If pre-caching fails for any cluster, the update fails for that cluster but proceeds for all other clusters. A message informs you if pre-caching has failed for any clusters. false Pre-caching is still in progress for one or more clusters or has failed for all clusters. After a successful pre-caching process, you can start remediating policies. The remediation actions start when the enable field is set to true . If there is a pre-caching failure on a cluster, the upgrade fails for that cluster. The upgrade process continues for all other clusters that have a successful pre-cache. The pre-caching process can be in the following statuses: NotStarted This is the initial state all clusters are automatically assigned to on the first reconciliation pass of the ClusterGroupUpgrade CR. In this state, TALM deletes any pre-caching namespace and hub view resources of spoke clusters that remain from incomplete updates. TALM then creates a new ManagedClusterView resource for the spoke pre-caching namespace to verify its deletion in the PrecachePreparing state. PreparingToStart Cleaning up any remaining resources from incomplete updates is in progress. Starting Pre-caching job prerequisites and the job are created. Active The job is in "Active" state. Succeeded The pre-cache job succeeded. PrecacheTimeout The artifact pre-caching is partially done. UnrecoverableError The job ends with a non-zero exit code. 12.7.1. Using the container image pre-cache filter The pre-cache feature typically downloads more images than a cluster needs for an update. You can control which pre-cache images are downloaded to a cluster. This decreases download time, and saves bandwidth and storage. You can see a list of all images to be downloaded using the following command: USD oc adm release info <ocp-version> The following ConfigMap example shows how you can exclude images using the excludePrecachePatterns field. apiVersion: v1 kind: ConfigMap metadata: name: cluster-group-upgrade-overrides data: excludePrecachePatterns: | azure 1 aws vsphere alibaba 1 TALM excludes all images with names that include any of the patterns listed here. 12.7.2. Creating a ClusterGroupUpgrade CR with pre-caching For single-node OpenShift, the pre-cache feature allows the required container images to be present on the spoke cluster before the update starts. 
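Because pre-cached images are written to the spoke cluster's container storage, it can be useful to confirm that the node has enough free space before you enable pre-caching. This check is an illustrative suggestion rather than a documented prerequisite; the node name is a placeholder:

USD oc debug node/<node_name> -- chroot /host df -h /var/lib/containers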
Note For pre-caching, TALM uses the spec.remediationStrategy.timeout value from the ClusterGroupUpgrade CR. You must set a timeout value that allows sufficient time for the pre-caching job to complete. When you enable the ClusterGroupUpgrade CR after pre-caching has completed, you can change the timeout value to a duration that is appropriate for the update. Prerequisites Install the Topology Aware Lifecycle Manager (TALM). Provision one or more managed clusters. Log in as a user with cluster-admin privileges. Procedure Save the contents of the ClusterGroupUpgrade CR with the preCaching field set to true in the clustergroupupgrades-group-du.yaml file: apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: du-upgrade-4918 namespace: ztp-group-du-sno spec: preCaching: true 1 clusters: - cnfdb1 - cnfdb2 enable: false managedPolicies: - du-upgrade-platform-upgrade remediationStrategy: maxConcurrency: 2 timeout: 240 1 The preCaching field is set to true , which enables TALM to pull the container images before starting the update. When you want to start pre-caching, apply the ClusterGroupUpgrade CR by running the following command: USD oc apply -f clustergroupupgrades-group-du.yaml Verification Check if the ClusterGroupUpgrade CR exists in the hub cluster by running the following command: USD oc get cgu -A Example output NAMESPACE NAME AGE STATE DETAILS ztp-group-du-sno du-upgrade-4918 10s InProgress Precaching is required and not done 1 1 The CR is created. Check the status of the pre-caching task by running the following command: USD oc get cgu -n ztp-group-du-sno du-upgrade-4918 -o jsonpath='{.status}' Example output { "conditions": [ { "lastTransitionTime": "2022-01-27T19:07:24Z", "message": "Precaching is required and not done", "reason": "InProgress", "status": "False", "type": "PrecachingSucceeded" }, { "lastTransitionTime": "2022-01-27T19:07:34Z", "message": "Pre-caching spec is valid and consistent", "reason": "PrecacheSpecIsWellFormed", "status": "True", "type": "PrecacheSpecValid" } ], "precaching": { "clusters": [ "cnfdb1" 1 "cnfdb2" ], "spec": { "platformImage": "image.example.io"}, "status": { "cnfdb1": "Active" "cnfdb2": "Succeeded"} } } 1 Displays the list of identified clusters. Check the status of the pre-caching job by running the following command on the spoke cluster: USD oc get jobs,pods -n openshift-talo-pre-cache Example output NAME COMPLETIONS DURATION AGE job.batch/pre-cache 0/1 3m10s 3m10s NAME READY STATUS RESTARTS AGE pod/pre-cache--1-9bmlr 1/1 Running 0 3m10s Check the status of the ClusterGroupUpgrade CR by running the following command: USD oc get cgu -n ztp-group-du-sno du-upgrade-4918 -o jsonpath='{.status}' Example output "conditions": [ { "lastTransitionTime": "2022-01-27T19:30:41Z", "message": "The ClusterGroupUpgrade CR has all clusters compliant with all the managed policies", "reason": "UpgradeCompleted", "status": "True", "type": "Ready" }, { "lastTransitionTime": "2022-01-27T19:28:57Z", "message": "Precaching is completed", "reason": "PrecachingCompleted", "status": "True", "type": "PrecachingSucceeded" 1 } 1 The pre-cache tasks are done. 12.8. Troubleshooting the Topology Aware Lifecycle Manager The Topology Aware Lifecycle Manager (TALM) is an OpenShift Container Platform Operator that remediates RHACM policies. When issues occur, use the oc adm must-gather command to gather details and logs and to take steps in debugging the issues. 
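For example, you can collect the default data set into a local directory and then inspect it offline or attach it to a support case. The destination directory shown here is only a placeholder:

USD oc adm must-gather --dest-dir=/tmp/talm-must-gather

The sections that follow describe more targeted checks that you can run before, or instead of, a full must-gather collection.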
For more information about related topics, see the following documentation: Red Hat Advanced Cluster Management for Kubernetes 2.4 Support Matrix Red Hat Advanced Cluster Management Troubleshooting The "Troubleshooting Operator issues" section 12.8.1. General troubleshooting You can determine the cause of the problem by reviewing the following questions: Is the configuration that you are applying supported? Are the RHACM and the OpenShift Container Platform versions compatible? Are the TALM and RHACM versions compatible? Which of the following components is causing the problem? Section 12.8.3, "Managed policies" Section 12.8.4, "Clusters" Section 12.8.5, "Remediation Strategy" Section 12.8.6, "Topology Aware Lifecycle Manager" To ensure that the ClusterGroupUpgrade configuration is functional, you can do the following: Create the ClusterGroupUpgrade CR with the spec.enable field set to false . Wait for the status to be updated and go through the troubleshooting questions. If everything looks as expected, set the spec.enable field to true in the ClusterGroupUpgrade CR. Warning After you set the spec.enable field to true in the ClusterGroupUpgrade CR, the update procedure starts and you cannot edit the CR's spec fields anymore. 12.8.2. Cannot modify the ClusterGroupUpgrade CR Issue You cannot edit the ClusterGroupUpgrade CR after enabling the update. Resolution Restart the procedure by performing the following steps: Remove the old ClusterGroupUpgrade CR by running the following command: USD oc delete cgu -n <ClusterGroupUpgradeCR_namespace> <ClusterGroupUpgradeCR_name> Check and fix the existing issues with the managed clusters and policies. Ensure that all the clusters are managed clusters and available. Ensure that all the policies exist and have the spec.remediationAction field set to inform . Create a new ClusterGroupUpgrade CR with the correct configurations. USD oc apply -f <ClusterGroupUpgradeCR_YAML> 12.8.3. Managed policies Checking managed policies on the system Issue You want to check if you have the correct managed policies on the system. Resolution Run the following command: USD oc get cgu lab-upgrade -ojsonpath='{.spec.managedPolicies}' Example output ["group-du-sno-validator-du-validator-policy", "policy2-common-nto-sub-policy", "policy3-common-ptp-sub-policy"] Checking remediationAction mode Issue You want to check if the remediationAction field is set to inform in the spec of the managed policies. Resolution Run the following command: USD oc get policies --all-namespaces Example output NAMESPACE NAME REMEDIATION ACTION COMPLIANCE STATE AGE default policy1-common-cluster-version-policy inform NonCompliant 5d21h default policy2-common-nto-sub-policy inform Compliant 5d21h default policy3-common-ptp-sub-policy inform NonCompliant 5d21h default policy4-common-sriov-sub-policy inform NonCompliant 5d21h Checking policy compliance state Issue You want to check the compliance state of policies. Resolution Run the following command: USD oc get policies --all-namespaces Example output NAMESPACE NAME REMEDIATION ACTION COMPLIANCE STATE AGE default policy1-common-cluster-version-policy inform NonCompliant 5d21h default policy2-common-nto-sub-policy inform Compliant 5d21h default policy3-common-ptp-sub-policy inform NonCompliant 5d21h default policy4-common-sriov-sub-policy inform NonCompliant 5d21h 12.8.4. Clusters Checking if managed clusters are present Issue You want to check if the clusters in the ClusterGroupUpgrade CR are managed clusters.
Resolution Run the following command: USD oc get managedclusters Example output NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE local-cluster true https://api.hub.example.com:6443 True Unknown 13d spoke1 true https://api.spoke1.example.com:6443 True True 13d spoke3 true https://api.spoke3.example.com:6443 True True 27h Alternatively, check the TALM manager logs: Get the name of the TALM manager by running the following command: USD oc get pod -n openshift-operators Example output NAME READY STATUS RESTARTS AGE cluster-group-upgrades-controller-manager-75bcc7484d-8k8xp 2/2 Running 0 45m Check the TALM manager logs by running the following command: USD oc logs -n openshift-operators \ cluster-group-upgrades-controller-manager-75bcc7484d-8k8xp -c manager Example output ERROR controller-runtime.manager.controller.clustergroupupgrade Reconciler error {"reconciler group": "ran.openshift.io", "reconciler kind": "ClusterGroupUpgrade", "name": "lab-upgrade", "namespace": "default", "error": "Cluster spoke5555 is not a ManagedCluster"} 1 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem 1 The error message shows that the cluster is not a managed cluster. Checking if managed clusters are available Issue You want to check if the managed clusters specified in the ClusterGroupUpgrade CR are available. Resolution Run the following command: USD oc get managedclusters Example output NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE local-cluster true https://api.hub.testlab.com:6443 True Unknown 13d spoke1 true https://api.spoke1.testlab.com:6443 True True 13d 1 spoke3 true https://api.spoke3.testlab.com:6443 True True 27h 2 1 2 The value of the AVAILABLE field is True for the managed clusters. Checking clusterLabelSelector Issue You want to check if the clusterLabelSelector field specified in the ClusterGroupUpgrade CR matches at least one of the managed clusters. Resolution Run the following command: USD oc get managedcluster --selector=upgrade=true 1 1 The label for the clusters you want to update is upgrade:true . Example output NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE spoke1 true https://api.spoke1.testlab.com:6443 True True 13d spoke3 true https://api.spoke3.testlab.com:6443 True True 27h Checking if canary clusters are present Issue You want to check if the canary clusters are present in the list of clusters. Example ClusterGroupUpgrade CR spec: remediationStrategy: canaries: - spoke3 maxConcurrency: 2 timeout: 240 clusterLabelSelectors: - matchLabels: upgrade: true Resolution Run the following commands: USD oc get cgu lab-upgrade -ojsonpath='{.spec.clusters}' Example output ["spoke1", "spoke3"] Check if the canary clusters are present in the list of clusters that match clusterLabelSelector labels by running the following command: USD oc get managedcluster --selector=upgrade=true Example output NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE spoke1 true https://api.spoke1.testlab.com:6443 True True 13d spoke3 true https://api.spoke3.testlab.com:6443 True True 27h Note A cluster can be present in spec.clusters and also be matched by the spec.clusterLabelSelector label. Checking the pre-caching status on spoke clusters Check the status of pre-caching by running the following command on the spoke cluster: USD oc get jobs,pods -n openshift-talo-pre-cache 12.8.5. 
Remediation Strategy Checking if remediationStrategy is present in the ClusterGroupUpgrade CR Issue You want to check if the remediationStrategy is present in the ClusterGroupUpgrade CR. Resolution Run the following command: USD oc get cgu lab-upgrade -ojsonpath='{.spec.remediationStrategy}' Example output {"maxConcurrency":2, "timeout":240} Checking if maxConcurrency is specified in the ClusterGroupUpgrade CR Issue You want to check if the maxConcurrency is specified in the ClusterGroupUpgrade CR. Resolution Run the following command: USD oc get cgu lab-upgrade -ojsonpath='{.spec.remediationStrategy.maxConcurrency}' Example output 2 12.8.6. Topology Aware Lifecycle Manager Checking condition message and status in the ClusterGroupUpgrade CR Issue You want to check the value of the status.conditions field in the ClusterGroupUpgrade CR. Resolution Run the following command: USD oc get cgu lab-upgrade -ojsonpath='{.status.conditions}' Example output {"lastTransitionTime":"2022-02-17T22:25:28Z", "message":"Missing managed policies:[policyList]", "reason":"NotAllManagedPoliciesExist", "status":"False", "type":"Validated"} Checking if status.remediationPlan was computed Issue You want to check if status.remediationPlan is computed. Resolution Run the following command: USD oc get cgu lab-upgrade -ojsonpath='{.status.remediationPlan}' Example output [["spoke2", "spoke3"]] Errors in the TALM manager container Issue You want to check the logs of the manager container of TALM. Resolution Run the following command: USD oc logs -n openshift-operators \ cluster-group-upgrades-controller-manager-75bcc7484d-8k8xp -c manager Example output ERROR controller-runtime.manager.controller.clustergroupupgrade Reconciler error {"reconciler group": "ran.openshift.io", "reconciler kind": "ClusterGroupUpgrade", "name": "lab-upgrade", "namespace": "default", "error": "Cluster spoke5555 is not a ManagedCluster"} 1 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem 1 Displays the error. Clusters are not compliant with some policies after a ClusterGroupUpgrade CR has completed Issue The policy compliance status that TALM uses to decide if remediation is needed has not yet fully updated for all clusters. This may be because: The CGU was run too soon after a policy was created or updated. The remediation of a policy affects the compliance of subsequent policies in the ClusterGroupUpgrade CR. Resolution Create and apply a new ClusterGroupUpgrade CR with the same specification. Auto-created ClusterGroupUpgrade CR in the GitOps ZTP workflow has no managed policies Issue If there are no policies for the managed cluster when the cluster becomes Ready , a ClusterGroupUpgrade CR with no policies is auto-created. Upon completion of the ClusterGroupUpgrade CR, the managed cluster is labeled as ztp-done . If the PolicyGenerator or PolicyGenTemplate CRs were not pushed to the Git repository within the required time after SiteConfig resources were pushed, this might result in no policies being available for the target cluster when the cluster became Ready . Resolution Verify that the policies you want to apply are available on the hub cluster, then create a ClusterGroupUpgrade CR with the required policies. You can either manually create the ClusterGroupUpgrade CR or trigger auto-creation again. To trigger auto-creation of the ClusterGroupUpgrade CR, remove the ztp-done label from the cluster and delete the empty ClusterGroupUpgrade CR that was previously created in the ztp-install namespace.
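A minimal sketch of the two commands that remove the label and delete the empty CR follows; the cluster name is a placeholder, and the namespace assumes the ztp-install namespace that the GitOps ZTP workflow typically uses:

USD oc label managedcluster <cluster_name> ztp-done-

USD oc delete clustergroupupgrade -n ztp-install <cluster_name>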
Pre-caching has failed Issue Pre-caching might fail for one of the following reasons: There is not enough free space on the node. For a disconnected environment, the pre-cache image has not been properly mirrored. There was an issue when creating the pod. Resolution To check if pre-caching has failed due to insufficient space, check the log of the pre-caching pod in the node. Find the name of the pod using the following command: USD oc get pods -n openshift-talo-pre-cache Check the logs to see if the error is related to insufficient space using the following command: USD oc logs -n openshift-talo-pre-cache <pod name> If there is no log, check the pod status using the following command: USD oc describe pod -n openshift-talo-pre-cache <pod name> If the pod does not exist, check the job status to see why it could not create a pod using the following command: USD oc describe job -n openshift-talo-pre-cache pre-cache Additional resources OpenShift Container Platform Troubleshooting Operator Issues Updating managed policies with Topology Aware Lifecycle Manager About the PolicyGenerator CRD | [
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-topology-aware-lifecycle-manager-subscription namespace: openshift-operators spec: channel: \"stable\" name: topology-aware-lifecycle-manager source: redhat-operators sourceNamespace: openshift-marketplace",
"oc create -f talm-subscription.yaml",
"oc get csv -n openshift-operators",
"NAME DISPLAY VERSION REPLACES PHASE topology-aware-lifecycle-manager.4.18.x Topology Aware Lifecycle Manager 4.18.x Succeeded",
"oc get deploy -n openshift-operators",
"NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE openshift-operators cluster-group-upgrades-controller-manager 1/1 1 1 14s",
"spec remediationStrategy: maxConcurrency: 1 timeout: 240",
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: creationTimestamp: '2022-11-18T16:27:15Z' finalizers: - ran.openshift.io/cleanup-finalizer generation: 1 name: talm-cgu namespace: talm-namespace resourceVersion: '40451823' uid: cca245a5-4bca-45fa-89c0-aa6af81a596c Spec: actions: afterCompletion: 1 addClusterLabels: upgrade-done: \"\" deleteClusterLabels: upgrade-running: \"\" deleteObjects: true beforeEnable: 2 addClusterLabels: upgrade-running: \"\" clusters: 3 - spoke1 enable: false 4 managedPolicies: 5 - talm-policy preCaching: false remediationStrategy: 6 canaries: 7 - spoke1 maxConcurrency: 2 8 timeout: 240 clusterLabelSelectors: 9 - matchExpressions: - key: label1 operator: In values: - value1a - value1b batchTimeoutAction: 10 status: 11 computedMaxConcurrency: 2 conditions: - lastTransitionTime: '2022-11-18T16:27:15Z' message: All selected clusters are valid reason: ClusterSelectionCompleted status: 'True' type: ClustersSelected 12 - lastTransitionTime: '2022-11-18T16:27:15Z' message: Completed validation reason: ValidationCompleted status: 'True' type: Validated 13 - lastTransitionTime: '2022-11-18T16:37:16Z' message: Not enabled reason: NotEnabled status: 'False' type: Progressing managedPoliciesForUpgrade: - name: talm-policy namespace: talm-namespace managedPoliciesNs: talm-policy: talm-namespace remediationPlan: - - spoke1 - - spoke2 - spoke3 status:",
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: creationTimestamp: '2022-11-18T16:27:15Z' finalizers: - ran.openshift.io/cleanup-finalizer generation: 1 name: talm-cgu namespace: talm-namespace resourceVersion: '40451823' uid: cca245a5-4bca-45fa-89c0-aa6af81a596c Spec: actions: afterCompletion: deleteObjects: true beforeEnable: {} clusters: - spoke1 enable: true managedPolicies: - talm-policy preCaching: true remediationStrategy: canaries: - spoke1 maxConcurrency: 2 timeout: 240 clusterLabelSelectors: - matchExpressions: - key: label1 operator: In values: - value1a - value1b batchTimeoutAction: status: clusters: - name: spoke1 state: complete computedMaxConcurrency: 2 conditions: - lastTransitionTime: '2022-11-18T16:27:15Z' message: All selected clusters are valid reason: ClusterSelectionCompleted status: 'True' type: ClustersSelected - lastTransitionTime: '2022-11-18T16:27:15Z' message: Completed validation reason: ValidationCompleted status: 'True' type: Validated - lastTransitionTime: '2022-11-18T16:37:16Z' message: Remediating non-compliant policies reason: InProgress status: 'True' type: Progressing 1 managedPoliciesForUpgrade: - name: talm-policy namespace: talm-namespace managedPoliciesNs: talm-policy: talm-namespace remediationPlan: - - spoke1 - - spoke2 - spoke3 status: currentBatch: 2 currentBatchRemediationProgress: spoke2: state: Completed spoke3: policyIndex: 0 state: InProgress currentBatchStartedAt: '2022-11-18T16:27:16Z' startedAt: '2022-11-18T16:27:15Z'",
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-upgrade-complete namespace: default spec: clusters: - spoke1 - spoke4 enable: true managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: 1 clusters: - name: spoke1 state: complete - name: spoke4 state: complete conditions: - message: All selected clusters are valid reason: ClusterSelectionCompleted status: \"True\" type: ClustersSelected - message: Completed validation reason: ValidationCompleted status: \"True\" type: Validated - message: All clusters are compliant with all the managed policies reason: Completed status: \"False\" type: Progressing 2 - message: All clusters are compliant with all the managed policies reason: Completed status: \"True\" type: Succeeded 3 managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-pao-sub-policy namespace: default remediationPlan: - - spoke1 - - spoke4 status: completedAt: '2022-11-18T16:27:16Z' startedAt: '2022-11-18T16:27:15Z'",
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: creationTimestamp: '2022-11-18T16:27:15Z' finalizers: - ran.openshift.io/cleanup-finalizer generation: 1 name: talm-cgu namespace: talm-namespace resourceVersion: '40451823' uid: cca245a5-4bca-45fa-89c0-aa6af81a596c spec: actions: afterCompletion: deleteObjects: true beforeEnable: {} clusters: - spoke1 - spoke2 enable: true managedPolicies: - talm-policy preCaching: false remediationStrategy: maxConcurrency: 2 timeout: 240 status: clusters: - name: spoke1 state: complete - currentPolicy: 1 name: talm-policy status: NonCompliant name: spoke2 state: timedout computedMaxConcurrency: 2 conditions: - lastTransitionTime: '2022-11-18T16:27:15Z' message: All selected clusters are valid reason: ClusterSelectionCompleted status: 'True' type: ClustersSelected - lastTransitionTime: '2022-11-18T16:27:15Z' message: Completed validation reason: ValidationCompleted status: 'True' type: Validated - lastTransitionTime: '2022-11-18T16:37:16Z' message: Policy remediation took too long reason: TimedOut status: 'False' type: Progressing - lastTransitionTime: '2022-11-18T16:37:16Z' message: Policy remediation took too long reason: TimedOut status: 'False' type: Succeeded 2 managedPoliciesForUpgrade: - name: talm-policy namespace: talm-namespace managedPoliciesNs: talm-policy: talm-namespace remediationPlan: - - spoke1 - spoke2 status: startedAt: '2022-11-18T16:27:15Z' completedAt: '2022-11-18T20:27:15Z'",
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-a namespace: default spec: blockingCRs: 1 - name: cgu-c namespace: default clusters: - spoke1 - spoke2 - spoke3 enable: false managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy remediationStrategy: canaries: - spoke1 maxConcurrency: 2 timeout: 240 status: conditions: - message: The ClusterGroupUpgrade CR is not enabled reason: UpgradeNotStarted status: \"False\" type: Ready managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-pao-sub-policy namespace: default - name: policy3-common-ptp-sub-policy namespace: default placementBindings: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy placementRules: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy remediationPlan: - - spoke1 - - spoke2",
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-b namespace: default spec: blockingCRs: 1 - name: cgu-a namespace: default clusters: - spoke4 - spoke5 enable: false managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: conditions: - message: The ClusterGroupUpgrade CR is not enabled reason: UpgradeNotStarted status: \"False\" type: Ready managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-pao-sub-policy namespace: default - name: policy3-common-ptp-sub-policy namespace: default - name: policy4-common-sriov-sub-policy namespace: default placementBindings: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy placementRules: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy remediationPlan: - - spoke4 - - spoke5 status: {}",
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-c namespace: default spec: 1 clusters: - spoke6 enable: false managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: conditions: - message: The ClusterGroupUpgrade CR is not enabled reason: UpgradeNotStarted status: \"False\" type: Ready managedPoliciesCompliantBeforeUpgrade: - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy4-common-sriov-sub-policy namespace: default placementBindings: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy placementRules: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy remediationPlan: - - spoke6 status: {}",
"oc apply -f <name>.yaml",
"oc --namespace=default patch clustergroupupgrade.ran.openshift.io/<name> --type merge -p '{\"spec\":{\"enable\":true}}'",
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-a namespace: default spec: blockingCRs: - name: cgu-c namespace: default clusters: - spoke1 - spoke2 - spoke3 enable: true managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy remediationStrategy: canaries: - spoke1 maxConcurrency: 2 timeout: 240 status: conditions: - message: 'The ClusterGroupUpgrade CR is blocked by other CRs that have not yet completed: [cgu-c]' 1 reason: UpgradeCannotStart status: \"False\" type: Ready managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-pao-sub-policy namespace: default - name: policy3-common-ptp-sub-policy namespace: default placementBindings: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy placementRules: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy remediationPlan: - - spoke1 - - spoke2 status: {}",
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-b namespace: default spec: blockingCRs: - name: cgu-a namespace: default clusters: - spoke4 - spoke5 enable: true managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: conditions: - message: 'The ClusterGroupUpgrade CR is blocked by other CRs that have not yet completed: [cgu-a]' 1 reason: UpgradeCannotStart status: \"False\" type: Ready managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-pao-sub-policy namespace: default - name: policy3-common-ptp-sub-policy namespace: default - name: policy4-common-sriov-sub-policy namespace: default placementBindings: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy placementRules: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy remediationPlan: - - spoke4 - - spoke5 status: {}",
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-c namespace: default spec: clusters: - spoke6 enable: true managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: conditions: - message: The ClusterGroupUpgrade CR has upgrade policies that are still non compliant 1 reason: UpgradeNotCompleted status: \"False\" type: Ready managedPoliciesCompliantBeforeUpgrade: - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy4-common-sriov-sub-policy namespace: default placementBindings: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy placementRules: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy remediationPlan: - - spoke6 status: currentBatch: 1 remediationPlanForBatch: spoke6: 0",
"apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: ocp-4.4.18.4 namespace: platform-upgrade spec: disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: upgrade spec: namespaceselector: exclude: - kube-* include: - '*' object-templates: - complianceType: musthave objectDefinition: apiVersion: config.openshift.io/v1 kind: ClusterVersion metadata: name: version spec: channel: stable-4.18 desiredUpdate: version: 4.4.18.4 upstream: https://api.openshift.com/api/upgrades_info/v1/graph status: history: - state: Completed version: 4.4.18.4 remediationAction: inform severity: low remediationAction: inform",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging annotations: ran.openshift.io/ztp-deploy-wave: \"2\" spec: channel: \"stable\" name: cluster-logging source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown 1",
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-1 namespace: default spec: managedPolicies: 1 - policy1-common-cluster-version-policy - policy2-common-nto-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy enable: false clusters: 2 - spoke1 - spoke2 - spoke5 - spoke6 remediationStrategy: maxConcurrency: 2 3 timeout: 240 4 batchTimeoutAction: 5",
"oc create -f cgu-1.yaml",
"oc get cgu --all-namespaces",
"NAMESPACE NAME AGE STATE DETAILS default cgu-1 8m55 NotEnabled Not Enabled",
"oc get cgu -n default cgu-1 -ojsonpath='{.status}' | jq",
"{ \"computedMaxConcurrency\": 2, \"conditions\": [ { \"lastTransitionTime\": \"2022-02-25T15:34:07Z\", \"message\": \"Not enabled\", 1 \"reason\": \"NotEnabled\", \"status\": \"False\", \"type\": \"Progressing\" } ], \"managedPoliciesContent\": { \"policy1-common-cluster-version-policy\": \"null\", \"policy2-common-nto-sub-policy\": \"[{\\\"kind\\\":\\\"Subscription\\\",\\\"name\\\":\\\"node-tuning-operator\\\",\\\"namespace\\\":\\\"openshift-cluster-node-tuning-operator\\\"}]\", \"policy3-common-ptp-sub-policy\": \"[{\\\"kind\\\":\\\"Subscription\\\",\\\"name\\\":\\\"ptp-operator-subscription\\\",\\\"namespace\\\":\\\"openshift-ptp\\\"}]\", \"policy4-common-sriov-sub-policy\": \"[{\\\"kind\\\":\\\"Subscription\\\",\\\"name\\\":\\\"sriov-network-operator-subscription\\\",\\\"namespace\\\":\\\"openshift-sriov-network-operator\\\"}]\" }, \"managedPoliciesForUpgrade\": [ { \"name\": \"policy1-common-cluster-version-policy\", \"namespace\": \"default\" }, { \"name\": \"policy2-common-nto-sub-policy\", \"namespace\": \"default\" }, { \"name\": \"policy3-common-ptp-sub-policy\", \"namespace\": \"default\" }, { \"name\": \"policy4-common-sriov-sub-policy\", \"namespace\": \"default\" } ], \"managedPoliciesNs\": { \"policy1-common-cluster-version-policy\": \"default\", \"policy2-common-nto-sub-policy\": \"default\", \"policy3-common-ptp-sub-policy\": \"default\", \"policy4-common-sriov-sub-policy\": \"default\" }, \"placementBindings\": [ \"cgu-policy1-common-cluster-version-policy\", \"cgu-policy2-common-nto-sub-policy\", \"cgu-policy3-common-ptp-sub-policy\", \"cgu-policy4-common-sriov-sub-policy\" ], \"placementRules\": [ \"cgu-policy1-common-cluster-version-policy\", \"cgu-policy2-common-nto-sub-policy\", \"cgu-policy3-common-ptp-sub-policy\", \"cgu-policy4-common-sriov-sub-policy\" ], \"remediationPlan\": [ [ \"spoke1\", \"spoke2\" ], [ \"spoke5\", \"spoke6\" ] ], \"status\": {} }",
"oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-1 --patch '{\"spec\":{\"enable\":true}}' --type=merge",
"oc get cgu -n default cgu-1 -ojsonpath='{.status}' | jq",
"{ \"computedMaxConcurrency\": 2, \"conditions\": [ 1 { \"lastTransitionTime\": \"2022-02-25T15:33:07Z\", \"message\": \"All selected clusters are valid\", \"reason\": \"ClusterSelectionCompleted\", \"status\": \"True\", \"type\": \"ClustersSelected\" }, { \"lastTransitionTime\": \"2022-02-25T15:33:07Z\", \"message\": \"Completed validation\", \"reason\": \"ValidationCompleted\", \"status\": \"True\", \"type\": \"Validated\" }, { \"lastTransitionTime\": \"2022-02-25T15:34:07Z\", \"message\": \"Remediating non-compliant policies\", \"reason\": \"InProgress\", \"status\": \"True\", \"type\": \"Progressing\" } ], \"managedPoliciesContent\": { \"policy1-common-cluster-version-policy\": \"null\", \"policy2-common-nto-sub-policy\": \"[{\\\"kind\\\":\\\"Subscription\\\",\\\"name\\\":\\\"node-tuning-operator\\\",\\\"namespace\\\":\\\"openshift-cluster-node-tuning-operator\\\"}]\", \"policy3-common-ptp-sub-policy\": \"[{\\\"kind\\\":\\\"Subscription\\\",\\\"name\\\":\\\"ptp-operator-subscription\\\",\\\"namespace\\\":\\\"openshift-ptp\\\"}]\", \"policy4-common-sriov-sub-policy\": \"[{\\\"kind\\\":\\\"Subscription\\\",\\\"name\\\":\\\"sriov-network-operator-subscription\\\",\\\"namespace\\\":\\\"openshift-sriov-network-operator\\\"}]\" }, \"managedPoliciesForUpgrade\": [ { \"name\": \"policy1-common-cluster-version-policy\", \"namespace\": \"default\" }, { \"name\": \"policy2-common-nto-sub-policy\", \"namespace\": \"default\" }, { \"name\": \"policy3-common-ptp-sub-policy\", \"namespace\": \"default\" }, { \"name\": \"policy4-common-sriov-sub-policy\", \"namespace\": \"default\" } ], \"managedPoliciesNs\": { \"policy1-common-cluster-version-policy\": \"default\", \"policy2-common-nto-sub-policy\": \"default\", \"policy3-common-ptp-sub-policy\": \"default\", \"policy4-common-sriov-sub-policy\": \"default\" }, \"placementBindings\": [ \"cgu-policy1-common-cluster-version-policy\", \"cgu-policy2-common-nto-sub-policy\", \"cgu-policy3-common-ptp-sub-policy\", \"cgu-policy4-common-sriov-sub-policy\" ], \"placementRules\": [ \"cgu-policy1-common-cluster-version-policy\", \"cgu-policy2-common-nto-sub-policy\", \"cgu-policy3-common-ptp-sub-policy\", \"cgu-policy4-common-sriov-sub-policy\" ], \"remediationPlan\": [ [ \"spoke1\", \"spoke2\" ], [ \"spoke5\", \"spoke6\" ] ], \"status\": { \"currentBatch\": 1, \"currentBatchRemediationProgress\": { \"spoke1\": { \"policyIndex\": 1, \"state\": \"InProgress\" }, \"spoke2\": { \"policyIndex\": 1, \"state\": \"InProgress\" } }, \"currentBatchStartedAt\": \"2022-02-25T15:54:16Z\", \"startedAt\": \"2022-02-25T15:54:16Z\" } }",
"get policies -A",
"NAMESPACE NAME REMEDIATION ACTION COMPLIANCE STATE AGE spoke1 default.policy1-common-cluster-version-policy enforce Compliant 18m spoke1 default.policy2-common-nto-sub-policy enforce NonCompliant 18m spoke2 default.policy1-common-cluster-version-policy enforce Compliant 18m spoke2 default.policy2-common-nto-sub-policy enforce NonCompliant 18m spoke5 default.policy3-common-ptp-sub-policy inform NonCompliant 18m spoke5 default.policy4-common-sriov-sub-policy inform NonCompliant 18m spoke6 default.policy3-common-ptp-sub-policy inform NonCompliant 18m spoke6 default.policy4-common-sriov-sub-policy inform NonCompliant 18m default policy1-common-ptp-sub-policy inform Compliant 18m default policy2-common-sriov-sub-policy inform NonCompliant 18m default policy3-common-ptp-sub-policy inform NonCompliant 18m default policy4-common-sriov-sub-policy inform NonCompliant 18m",
"export KUBECONFIG=<cluster_kubeconfig_absolute_path>",
"oc get subs -A | grep -i <subscription_name>",
"NAMESPACE NAME PACKAGE SOURCE CHANNEL openshift-logging cluster-logging cluster-logging redhat-operators stable",
"oc get clusterversion",
"NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.4.18.5 True True 43s Working towards 4.4.18.7: 71 of 735 done (9% complete)",
"oc get subs -n <operator-namespace> <operator-subscription> -ojsonpath=\"{.status}\"",
"oc get installplan -n <subscription_namespace>",
"NAMESPACE NAME CSV APPROVAL APPROVED openshift-logging install-6khtw cluster-logging.5.3.3-4 Manual true 1",
"oc get csv -n <operator_namespace>",
"NAME DISPLAY VERSION REPLACES PHASE cluster-logging.5.4.2 Red Hat OpenShift Logging 5.4.2 Succeeded",
"oc adm release info <ocp-version>",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-group-upgrade-overrides data: excludePrecachePatterns: | azure 1 aws vsphere alibaba",
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: du-upgrade-4918 namespace: ztp-group-du-sno spec: preCaching: true 1 clusters: - cnfdb1 - cnfdb2 enable: false managedPolicies: - du-upgrade-platform-upgrade remediationStrategy: maxConcurrency: 2 timeout: 240",
"oc apply -f clustergroupupgrades-group-du.yaml",
"oc get cgu -A",
"NAMESPACE NAME AGE STATE DETAILS ztp-group-du-sno du-upgrade-4918 10s InProgress Precaching is required and not done 1",
"oc get cgu -n ztp-group-du-sno du-upgrade-4918 -o jsonpath='{.status}'",
"{ \"conditions\": [ { \"lastTransitionTime\": \"2022-01-27T19:07:24Z\", \"message\": \"Precaching is required and not done\", \"reason\": \"InProgress\", \"status\": \"False\", \"type\": \"PrecachingSucceeded\" }, { \"lastTransitionTime\": \"2022-01-27T19:07:34Z\", \"message\": \"Pre-caching spec is valid and consistent\", \"reason\": \"PrecacheSpecIsWellFormed\", \"status\": \"True\", \"type\": \"PrecacheSpecValid\" } ], \"precaching\": { \"clusters\": [ \"cnfdb1\" 1 \"cnfdb2\" ], \"spec\": { \"platformImage\": \"image.example.io\"}, \"status\": { \"cnfdb1\": \"Active\" \"cnfdb2\": \"Succeeded\"} } }",
"oc get jobs,pods -n openshift-talo-pre-cache",
"NAME COMPLETIONS DURATION AGE job.batch/pre-cache 0/1 3m10s 3m10s NAME READY STATUS RESTARTS AGE pod/pre-cache--1-9bmlr 1/1 Running 0 3m10s",
"oc get cgu -n ztp-group-du-sno du-upgrade-4918 -o jsonpath='{.status}'",
"\"conditions\": [ { \"lastTransitionTime\": \"2022-01-27T19:30:41Z\", \"message\": \"The ClusterGroupUpgrade CR has all clusters compliant with all the managed policies\", \"reason\": \"UpgradeCompleted\", \"status\": \"True\", \"type\": \"Ready\" }, { \"lastTransitionTime\": \"2022-01-27T19:28:57Z\", \"message\": \"Precaching is completed\", \"reason\": \"PrecachingCompleted\", \"status\": \"True\", \"type\": \"PrecachingSucceeded\" 1 }",
"oc delete cgu -n <ClusterGroupUpgradeCR_namespace> <ClusterGroupUpgradeCR_name>",
"oc apply -f <ClusterGroupUpgradeCR_YAML>",
"oc get cgu lab-upgrade -ojsonpath='{.spec.managedPolicies}'",
"[\"group-du-sno-validator-du-validator-policy\", \"policy2-common-nto-sub-policy\", \"policy3-common-ptp-sub-policy\"]",
"oc get policies --all-namespaces",
"NAMESPACE NAME REMEDIATION ACTION COMPLIANCE STATE AGE default policy1-common-cluster-version-policy inform NonCompliant 5d21h default policy2-common-nto-sub-policy inform Compliant 5d21h default policy3-common-ptp-sub-policy inform NonCompliant 5d21h default policy4-common-sriov-sub-policy inform NonCompliant 5d21h",
"oc get policies --all-namespaces",
"NAMESPACE NAME REMEDIATION ACTION COMPLIANCE STATE AGE default policy1-common-cluster-version-policy inform NonCompliant 5d21h default policy2-common-nto-sub-policy inform Compliant 5d21h default policy3-common-ptp-sub-policy inform NonCompliant 5d21h default policy4-common-sriov-sub-policy inform NonCompliant 5d21h",
"oc get managedclusters",
"NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE local-cluster true https://api.hub.example.com:6443 True Unknown 13d spoke1 true https://api.spoke1.example.com:6443 True True 13d spoke3 true https://api.spoke3.example.com:6443 True True 27h",
"oc get pod -n openshift-operators",
"NAME READY STATUS RESTARTS AGE cluster-group-upgrades-controller-manager-75bcc7484d-8k8xp 2/2 Running 0 45m",
"oc logs -n openshift-operators cluster-group-upgrades-controller-manager-75bcc7484d-8k8xp -c manager",
"ERROR controller-runtime.manager.controller.clustergroupupgrade Reconciler error {\"reconciler group\": \"ran.openshift.io\", \"reconciler kind\": \"ClusterGroupUpgrade\", \"name\": \"lab-upgrade\", \"namespace\": \"default\", \"error\": \"Cluster spoke5555 is not a ManagedCluster\"} 1 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem",
"oc get managedclusters",
"NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE local-cluster true https://api.hub.testlab.com:6443 True Unknown 13d spoke1 true https://api.spoke1.testlab.com:6443 True True 13d 1 spoke3 true https://api.spoke3.testlab.com:6443 True True 27h 2",
"oc get managedcluster --selector=upgrade=true 1",
"NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE spoke1 true https://api.spoke1.testlab.com:6443 True True 13d spoke3 true https://api.spoke3.testlab.com:6443 True True 27h",
"spec: remediationStrategy: canaries: - spoke3 maxConcurrency: 2 timeout: 240 clusterLabelSelectors: - matchLabels: upgrade: true",
"oc get cgu lab-upgrade -ojsonpath='{.spec.clusters}'",
"[\"spoke1\", \"spoke3\"]",
"oc get managedcluster --selector=upgrade=true",
"NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE spoke1 true https://api.spoke1.testlab.com:6443 True True 13d spoke3 true https://api.spoke3.testlab.com:6443 True True 27h",
"oc get jobs,pods -n openshift-talo-pre-cache",
"oc get cgu lab-upgrade -ojsonpath='{.spec.remediationStrategy}'",
"{\"maxConcurrency\":2, \"timeout\":240}",
"oc get cgu lab-upgrade -ojsonpath='{.spec.remediationStrategy.maxConcurrency}'",
"2",
"oc get cgu lab-upgrade -ojsonpath='{.status.conditions}'",
"{\"lastTransitionTime\":\"2022-02-17T22:25:28Z\", \"message\":\"Missing managed policies:[policyList]\", \"reason\":\"NotAllManagedPoliciesExist\", \"status\":\"False\", \"type\":\"Validated\"}",
"oc get cgu lab-upgrade -ojsonpath='{.status.remediationPlan}'",
"[[\"spoke2\", \"spoke3\"]]",
"oc logs -n openshift-operators cluster-group-upgrades-controller-manager-75bcc7484d-8k8xp -c manager",
"ERROR controller-runtime.manager.controller.clustergroupupgrade Reconciler error {\"reconciler group\": \"ran.openshift.io\", \"reconciler kind\": \"ClusterGroupUpgrade\", \"name\": \"lab-upgrade\", \"namespace\": \"default\", \"error\": \"Cluster spoke5555 is not a ManagedCluster\"} 1 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem",
"oc get pods -n openshift-talo-pre-cache",
"oc logs -n openshift-talo-pre-cache <pod name>",
"oc describe pod -n openshift-talo-pre-cache <pod name>",
"oc describe job -n openshift-talo-pre-cache pre-cache"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/edge_computing/cnf-talm-for-cluster-updates |
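The individual troubleshooting queries above can be strung together into a single pass over a ClusterGroupUpgrade CR. The following is a minimal sketch, assuming a CGU named lab-upgrade in the default namespace (both names come from the examples above; substitute your own):

    #!/bin/bash
    # Illustrative wrapper around the checks shown above; the names are assumptions.
    CGU_NAME=lab-upgrade
    CGU_NS=default

    # Validation state and any missing managed policies reported in the conditions.
    oc get cgu "${CGU_NAME}" -n "${CGU_NS}" -o jsonpath='{.status.conditions}{"\n"}'

    # The remediation plan computed from maxConcurrency and the selected clusters.
    oc get cgu "${CGU_NAME}" -n "${CGU_NS}" -o jsonpath='{.status.remediationPlan}{"\n"}'

    # Pre-caching job and pod, if pre-caching is enabled for the CGU.
    oc get jobs,pods -n openshift-talo-pre-cache

Each command mirrors one of the individual checks listed above, so its output can be compared directly against the sample outputs shown there.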
GitOps | GitOps Red Hat Advanced Cluster Management for Kubernetes 2.11 GitOps Red Hat Advanced Cluster Management for Kubernetes Team | [
"apiVersion: apps.open-cluster-management.io/v1beta1 kind: GitOpsCluster metadata: name: gitops-cluster-sample namespace: dev spec: argoServer: cluster: local-cluster argoNamespace: openshift-gitops placementRef: kind: Placement apiVersion: cluster.open-cluster-management.io/v1beta1 name: all-openshift-clusters 1",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: application-set-admin rules: - apiGroups: - argoproj.io resources: - applicationsets verbs: - get - list - watch - update - delete - deletecollection - patch",
"apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Placement metadata: name: placement namespace: ns1 spec: tolerations: - key: cluster.open-cluster-management.io/unreachable operator: Exists - key: cluster.open-cluster-management.io/unavailable operator: Exists",
"cannot create resource \"services\" in API group \"\" in the namespace \"mortgage\",deployments.apps is forbidden: User \"system:serviceaccount:openshift-gitops:openshift-gitops-Argo CD-application-controller\"",
"kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: argo-admin subjects: - kind: ServiceAccount name: openshift-gitops-argocd-application-controller namespace: openshift-gitops roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-admin",
"apiVersion: v1 kind: Namespace metadata: name: mortgage2 labels: argocd.argoproj.io/managed-by: openshift-gitops",
"apiVersion: argoproj.io/v1alpha1 kind: `ApplicationSet` metadata: name: guestbook-allclusters-app-set namespace: openshift-gitops spec: generators: - clusterDecisionResource: configMapRef: ocm-placement-generator labelSelector: matchLabels: cluster.open-cluster-management.io/placement: aws-app-placement requeueAfterSeconds: 30 template: metadata: annotations: apps.open-cluster-management.io/ocm-managed-cluster: '{{name}}' 1 apps.open-cluster-management.io/ocm-managed-cluster-app-namespace: openshift-gitops argocd.argoproj.io/skip-reconcile: \"true\" 2 labels: apps.open-cluster-management.io/pull-to-ocm-managed-cluster: \"true\" 3 name: '{{name}}-guestbook-app' spec: destination: namespace: guestbook server: https://kubernetes.default.svc project: default sources: [ { repoURL: https://github.com/argoproj/argocd-example-apps.git targetRevision: main path: guestbook } ] syncPolicy: automated: {} syncOptions: - CreateNamespace=true",
"NAMESPACE NAME READY STATUS open-cluster-management multicluster-integrations-7c46498d9-fqbq4 3/3 Running",
"apiVersion: apps.open-cluster-management.io/v1alpha1 kind: MulticlusterApplicationSetReport metadata: labels: apps.open-cluster-management.io/hosting-applicationset: openshift-gitops.guestbook-allclusters-app-set name: guestbook-allclusters-app-set namespace: openshift-gitops statuses: clusterConditions: - cluster: cluster1 conditions: - message: 'Failed sync attempt: one or more objects failed to apply, reason: services is forbidden: User \"system:serviceaccount:openshift-gitops:openshift-gitops-Argo CD-application-controller\" cannot create resource \"services\" in API group \"\" in the namespace \"guestbook\",deployments.apps is forbidden: User <name> cannot create resource \"deployments\" in API group \"apps\" in the namespace \"guestboo...' type: SyncError healthStatus: Missing syncStatus: OutOfSync - cluster: pcluster1 healthStatus: Progressing syncStatus: Synced - cluster: pcluster2 healthStatus: Progressing syncStatus: Synced summary: clusters: \"3\" healthy: \"0\" inProgress: \"2\" notHealthy: \"3\" notSynced: \"1\" synced: \"2\"",
"kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: openshift-gitops-policy-admin rules: - verbs: - get - list - watch - create - update - patch - delete apiGroups: - policy.open-cluster-management.io resources: - policies - policysets - placementbindings - verbs: - get - list - watch - create - update - patch - delete apiGroups: - apps.open-cluster-management.io resources: - placementrules - verbs: - get - list - watch - create - update - patch - delete apiGroups: - cluster.open-cluster-management.io resources: - placements - placements/status - placementdecisions - placementdecisions/status",
"kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: openshift-gitops-policy-admin subjects: - kind: ServiceAccount name: openshift-gitops-argocd-application-controller namespace: openshift-gitops roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: openshift-gitops-policy-admin",
"-n openshift-gitops edit argocd openshift-gitops",
"apiVersion: argoproj.io/v1beta1 kind: ArgoCD metadata: name: openshift-gitops namespace: openshift-gitops spec: kustomizeBuildOptions: --enable-alpha-plugins repo: env: - name: KUSTOMIZE_PLUGIN_HOME value: /etc/kustomize/plugin initContainers: - args: - -c - cp /policy-generator/PolicyGenerator-not-fips-compliant /policy-generator-tmp/PolicyGenerator command: - /bin/bash image: registry.redhat.io/rhacm2/multicluster-operators-subscription-rhel9:v<version> name: policy-generator-install volumeMounts: - mountPath: /policy-generator-tmp name: policy-generator volumeMounts: - mountPath: /etc/kustomize/plugin/policy.open-cluster-management.io/v1/policygenerator name: policy-generator volumes: - emptyDir: {} name: policy-generator",
"image: '{{ (index (lookup \"apps/v1\" \"Deployment\" \"open-cluster-management\" \"multicluster-operators-hub-subscription\").spec.template.spec.containers 0).image }}'",
"env: - name: POLICY_GEN_ENABLE_HELM value: \"true\"",
"kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: openshift-gitops-policy-admin subjects: - kind: ServiceAccount name: openshift-gitops-argocd-application-controller namespace: openshift-gitops roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: openshift-gitops-policy-admin",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-gitops-operator namespace: openshift-operators spec: channel: stable name: openshift-gitops-operator source: redhat-operators sourceNamespace: openshift-marketplace",
"apiVersion: policy.open-cluster-management.io/v1 kind: PolicyGenerator metadata: name: install-openshift-gitops policyDefaults: namespace: policies placement: clusterSelectors: vendor: \"OpenShift\" remediationAction: enforce policies: - name: install-openshift-gitops manifests: - path: openshift-gitops-subscription.yaml",
"generators: - policy-generator-config.yaml",
"apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: placement-install-openshift-gitops namespace: policies spec: clusterConditions: - status: \"True\" type: ManagedClusterConditionAvailable clusterSelector: matchExpressions: - key: vendor operator: In values: - OpenShift --- apiVersion: policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: binding-install-openshift-gitops namespace: policies placementRef: apiGroup: apps.open-cluster-management.io kind: PlacementRule name: placement-install-openshift-gitops subjects: - apiGroup: policy.open-cluster-management.io kind: Policy name: install-openshift-gitops --- apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: annotations: policy.open-cluster-management.io/categories: CM Configuration Management policy.open-cluster-management.io/controls: CM-2 Baseline Configuration policy.open-cluster-management.io/standards: NIST SP 800-53 policy.open-cluster-management.io/description: name: install-openshift-gitops namespace: policies spec: disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: install-openshift-gitops spec: object-templates: - complianceType: musthave objectDefinition: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-gitops-operator namespace: openshift-operators spec: channel: stable name: openshift-gitops-operator source: redhat-operators sourceNamespace: openshift-marketplace remediationAction: enforce severity: low",
"apiVersion: rbac.open-cluster-management.io/v1alpha1 kind: ClusterPermission metadata: name: <clusterpermission-msa-subject-sample> namespace: <managed cluster> spec: roles: - namespace: default rules: - apiGroups: [\"apps\"] resources: [\"deployments\"] verbs: [\"get\", \"list\", \"create\", \"update\", \"delete\", \"patch\"] - apiGroups: [\"\"] resources: [\"configmaps\", \"secrets\", \"pods\", \"podtemplates\", \"persistentvolumeclaims\", \"persistentvolumes\"] verbs: [\"get\", \"update\", \"list\", \"create\", \"delete\", \"patch\"] - apiGroups: [\"storage.k8s.io\"] resources: [\"*\"] verbs: [\"list\"] - namespace: mortgage rules: - apiGroups: [\"apps\"] resources: [\"deployments\"] verbs: [\"get\", \"list\", \"create\", \"update\", \"delete\", \"patch\"] - apiGroups: [\"\"] resources: [\"configmaps\", \"secrets\", \"pods\", \"services\", \"namespace\"] verbs: [\"get\", \"update\", \"list\", \"create\", \"delete\", \"patch\"] clusterRole: rules: - apiGroups: [\"*\"] resources: [\"*\"] verbs: [\"get\", \"list\"] roleBindings: - namespace: default roleRef: kind: Role subject: apiGroup: authentication.open-cluster-management.io kind: ManagedServiceAccount name: <managed-sa-sample> - namespace: mortgage roleRef: kind: Role subject: apiGroup: authentication.open-cluster-management.io kind: ManagedServiceAccount name: <managed-sa-sample> clusterRoleBinding: subject: apiGroup: authentication.open-cluster-management.io kind: ManagedServiceAccount name: <managed-sa-sample>",
"--- apiVersion: apps.open-cluster-management.io/v1beta1 metadata: name: argo-acm-importer namespace: openshift-gitops spec: managedServiceAccountRef: <managed-sa-sample> argoServer: cluster: notused argoNamespace: openshift-gitops placementRef: kind: Placement apiVersion: cluster.open-cluster-management.io/v1beta1 name: all-openshift-clusters namespace: openshift-gitops",
"% oc get secrets -n openshift-gitops <managed cluster-managed-sa-sample-cluster-secret> NAME TYPE DATA AGE <managed cluster-managed-sa-sample-cluster-secret> Opaque 3 4m2s",
"When the GitOpsCluster resource is updated with the `managedServiceAccountRef`, each managed cluster in the placement of this GitOpsCluster needs to have the service account. If you have several managed clusters, it becomes tedious for you to create the managed service account and cluster permission for each managed cluster. You can simply this process by using a policy to create the managed service account and cluster permission for all your managed clusters",
"apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: policy-gitops namespace: openshift-gitops annotations: policy.open-cluster-management.io/standards: NIST-CSF policy.open-cluster-management.io/categories: PR.PT Protective Technology policy.open-cluster-management.io/controls: PR.PT-3 Least Functionality spec: remediationAction: enforce disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: policy-gitops-sub spec: pruneObjectBehavior: None remediationAction: enforce severity: low object-templates-raw: | {{ range USDplacedec := (lookup \"cluster.open-cluster-management.io/v1beta1\" \"PlacementDecision\" \"openshift-gitops\" \"\" \"cluster.open-cluster-management.io/placement=aws-app-placement\").items }} {{ range USDclustdec := USDplacedec.status.decisions }} - complianceType: musthave objectDefinition: apiVersion: authentication.open-cluster-management.io/v1alpha1 kind: ManagedServiceAccount metadata: name: <managed-sa-sample> namespace: {{ USDclustdec.clusterName }} spec: rotation: {} - complianceType: musthave objectDefinition: apiVersion: rbac.open-cluster-management.io/v1alpha1 kind: ClusterPermission metadata: name: <clusterpermission-msa-subject-sample> namespace: {{ USDclustdec.clusterName }} spec: roles: - namespace: default rules: - apiGroups: [\"apps\"] resources: [\"deployments\"] verbs: [\"get\", \"list\", \"create\", \"update\", \"delete\"] - apiGroups: [\"\"] resources: [\"configmaps\", \"secrets\", \"pods\", \"podtemplates\", \"persistentvolumeclaims\", \"persistentvolumes\"] verbs: [\"get\", \"update\", \"list\", \"create\", \"delete\"] - apiGroups: [\"storage.k8s.io\"] resources: [\"*\"] verbs: [\"list\"] - namespace: mortgage rules: - apiGroups: [\"apps\"] resources: [\"deployments\"] verbs: [\"get\", \"list\", \"create\", \"update\", \"delete\"] - apiGroups: [\"\"] resources: [\"configmaps\", \"secrets\", \"pods\", \"services\", \"namespace\"] verbs: [\"get\", \"update\", \"list\", \"create\", \"delete\"] clusterRole: rules: - apiGroups: [\"*\"] resources: [\"*\"] verbs: [\"get\", \"list\"] roleBindings: - namespace: default roleRef: kind: Role subject: apiGroup: authentication.open-cluster-management.io kind: ManagedServiceAccount name: <managed-sa-sample> - namespace: mortgage roleRef: kind: Role subject: apiGroup: authentication.open-cluster-management.io kind: ManagedServiceAccount name: <managed-sa-sample> clusterRoleBinding: subject: apiGroup: authentication.open-cluster-management.io kind: ManagedServiceAccount name: <managed-sa-sample> {{ end }} {{ end }} --- apiVersion: policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: binding-policy-gitops namespace: openshift-gitops placementRef: name: lc-app-placement kind: Placement apiGroup: cluster.open-cluster-management.io subjects: - name: policy-gitops kind: Policy apiGroup: policy.open-cluster-management.io --- apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Placement metadata: name: lc-app-placement namespace: openshift-gitops spec: numberOfClusters: 1 predicates: - requiredClusterSelector: labelSelector: matchLabels: name: local-cluster"
] | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.11/html-single/gitops/index |
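As a quick check after the GitOpsCluster, Placement, and ManagedServiceAccount pieces above are applied, the following sketch verifies that the placement produced decisions and that managed cluster secrets reached the openshift-gitops namespace. The resource names used here (gitops-cluster-sample in the dev namespace, the aws-app-placement label, the -cluster-secret suffix) are taken from the samples above and are illustrative only:

    # Confirm the GitOpsCluster resource was accepted by the controller.
    oc get gitopscluster gitops-cluster-sample -n dev -o yaml

    # List the clusters selected by the placement used by the ApplicationSet generator.
    oc get placementdecisions -n openshift-gitops \
        -l cluster.open-cluster-management.io/placement=aws-app-placement

    # Cluster secrets created for Argo CD from the managed service accounts.
    oc get secrets -n openshift-gitops | grep cluster-secret

If the secret listing comes back empty, recheck the ClusterPermission and ManagedServiceAccount objects shown above.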
Part III. Post deployment operations | Part III. Post deployment operations | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/director_installation_and_usage/post_deployment_operations |
Chapter 6. Network Configuration | Chapter 6. Network Configuration This chapter provides an introduction to the common networking configurations used by libvirt-based guest virtual machines. Red Hat Enterprise Linux 7 supports the following networking setups for virtualization: virtual networks using Network Address Translation (NAT) directly allocated physical devices using PCI device assignment directly allocated virtual functions using PCIe SR-IOV bridged networks You must enable NAT or network bridging, or directly assign a PCI device, to allow external hosts to access network services on guest virtual machines. 6.1. Network Address Translation (NAT) with libvirt One of the most common methods for sharing network connections is to use Network Address Translation (NAT) forwarding (also known as virtual networks). Host Configuration Every standard libvirt installation provides NAT-based connectivity to virtual machines as the default virtual network. Verify that it is available with the virsh net-list --all command. If it is missing, the following can be used in the XML configuration file (such as /etc/libvirt/qemu/myguest.xml) for the guest: The default network is defined in /etc/libvirt/qemu/networks/default.xml Mark the default network to automatically start: Start the default network: Once the libvirt default network is running, you will see an isolated bridge device. This device does not have any physical interfaces added. The new device uses NAT and IP forwarding to connect to the physical network. Do not add new interfaces. libvirt adds iptables rules which allow traffic to and from guest virtual machines attached to the virbr0 device in the INPUT, FORWARD, OUTPUT and POSTROUTING chains. libvirt then attempts to enable the ip_forward parameter. Some other applications may disable ip_forward, so the best option is to add the following to /etc/sysctl.conf. Guest Virtual Machine Configuration Once the host configuration is complete, a guest virtual machine can be connected to the virtual network based on its name. To connect a guest to the 'default' virtual network, the following can be used in the XML configuration file (such as /etc/libvirt/qemu/myguest.xml) for the guest: Note Defining a MAC address is optional. If you do not define one, a MAC address is automatically generated and used as the MAC address of the bridge device used by the network. Manually setting the MAC address may be useful to maintain consistency or easy reference throughout your environment, or to avoid the very small chance of a conflict. | [
"virsh net-list --all Name State Autostart ----------------------------------------- default active yes",
"ll /etc/libvirt/qemu/ total 12 drwx------. 3 root root 4096 Nov 7 23:02 networks -rw-------. 1 root root 2205 Nov 20 01:20 r6.4.xml -rw-------. 1 root root 2208 Nov 8 03:19 r6.xml",
"virsh net-autostart default Network default marked as autostarted",
"virsh net-start default Network default started",
"brctl show bridge name bridge id STP enabled interfaces virbr0 8000.000000000000 yes",
"net.ipv4.ip_forward = 1",
"<interface type='network'> <source network='default'/> </interface>",
"<interface type='network'> <source network='default'/> <mac address='00:16:3e:1a:b3:4a'/> </interface>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/chap-network_configuration |
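To try the NAT setup described above without editing the domain XML by hand, the interface can also be attached with virsh. This is a minimal sketch, assuming a guest named myguest (matching the example file name above) and the default NAT network:

    # Attach a virtio NIC on the 'default' NAT network, both live and persistently.
    virsh attach-interface --domain myguest --type network --source default \
        --model virtio --config --live

    # Confirm the interface was added and which bridge it uses (virbr0 for NAT).
    virsh domiflist myguest

    # Verify that IP forwarding is enabled on the host.
    sysctl net.ipv4.ip_forward

The --config flag writes the interface into the persistent domain definition, which has the same effect as adding the <interface type='network'> element shown above.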
Appendix B. Image configuration parameters | Appendix B. Image configuration parameters You can use the following keys with the --property option for the glance image-create , glance image-create-via-import , and glance image-update commands. Table B.1. Property keys Specific to Key Description Supported values All architecture The CPU architecture that must be supported by the hypervisor. For example, x86_64 , arm , or ppc64 . Run uname -m to get the architecture of a machine. aarch - ARM 64-bit alpha - DEC 64-bit RISC armv7l - ARM Cortex-A7 MPCore cris - Ethernet, Token Ring, AXis-Code Reduced Instruction Set i686 - Intel sixth-generation x86 (P6 micro architecture) ia64 - Itanium lm32 - Lattice Micro32 m68k - Motorola 68000 microblaze - Xilinx 32-bit FPGA (Big Endian) microblazeel - Xilinx 32-bit FPGA (Little Endian) mips - MIPS 32-bit RISC (Big Endian) mipsel - MIPS 32-bit RISC (Little Endian) mips64 - MIPS 64-bit RISC (Big Endian) mips64el - MIPS 64-bit RISC (Little Endian) openrisc - OpenCores RISC parisc - HP Precision Architecture RISC parisc64 - HP Precision Architecture 64-bit RISC ppc - PowerPC 32-bit ppc64 - PowerPC 64-bit ppcemb - PowerPC (Embedded 32-bit) s390 - IBM Enterprise Systems Architecture/390 s390x - S/390 64-bit sh4 - SuperH SH-4 (Little Endian) sh4eb - SuperH SH-4 (Big Endian) sparc - Scalable Processor Architecture, 32-bit sparc64 - Scalable Processor Architecture, 64-bit unicore32 - Microprocessor Research and Development Center RISC Unicore32 x86_64 - 64-bit extension of IA-32 xtensa - Tensilica Xtensa configurable microprocessor core xtensaeb - Tensilica Xtensa configurable microprocessor core (Big Endian) All hypervisor_type The hypervisor type. kvm , vmware All instance_uuid For snapshot images, this is the UUID of the server used to create this image. Valid server UUID All kernel_id The ID of an image stored in the Image Service that should be used as the kernel when booting an AMI-style image. Valid image ID All os_distro The common name of the operating system distribution in lowercase. arch - Arch Linux. Do not use archlinux or org.archlinux . centos - Community Enterprise Operating System. Do not use org.centos or CentOS . debian - Debian. Do not use Debian or org.debian . fedora - Fedora. Do not use Fedora , org.fedora , or org.fedoraproject . freebsd - FreeBSD. Do not use org.freebsd , freeBSD , or FreeBSD . gentoo - Gentoo Linux. Do not use Gentoo or org.gentoo . mandrake - Mandrakelinux (MandrakeSoft) distribution. Do not use mandrakelinux or MandrakeLinux . mandriva - Mandriva Linux. Do not use mandrivalinux . mes - Mandriva Enterprise Server. Do not use mandrivaent or mandrivaES . msdos - Microsoft Disc Operating System. Do not use ms-dos . netbsd - NetBSD. Do not use NetBSD or org.netbsd . netware - Novell NetWare. Do not use novell or NetWare . openbsd - OpenBSD. Do not use OpenBSD or org.openbsd . opensolaris - OpenSolaris. Do not use OpenSolaris or org.opensolaris . opensuse - openSUSE. Do not use suse , SuSE , or org.opensuse . rhel - Red Hat Enterprise Linux. Do not use redhat , RedHat , or com.redhat . sled - SUSE Linux Enterprise Desktop. Do not use com.suse . ubuntu - Ubuntu. Do not use Ubuntu , com.ubuntu , org.ubuntu , or canonical . windows - Microsoft Windows. Do not use com.microsoft.server . All os_version The operating system version as specified by the distributor. Version number (for example, "11.10") All ramdisk_id The ID of image stored in the Image Service that should be used as the ramdisk when booting an AMI-style image. 
Valid image ID All vm_mode The virtual machine mode. This represents the host/guest ABI (application binary interface) used for the virtual machine. hvm -Fully virtualized. This is the mode used by QEMU and KVM. libvirt API driver hw_cdrom_bus Specifies the type of disk controller to attach CD-ROM devices to. scsi , virtio , ide , or usb . If you specify iscsi , you must set the hw_scsi_model parameter to virtio-scsi . libvirt API driver hw_disk_bus Specifies the type of disk controller to attach disk devices to. scsi , virtio , ide , or usb . Note that if using iscsi , the hw_scsi_model needs to be set to virtio-scsi . libvirt API driver hw_firmware_type Specifies the type of firmware to use to boot the instance. Set to one of the following valid values: bios uefi libvirt API driver hw_machine_type Enables booting an ARM system using the specified machine type. If an ARM image is used and its machine type is not explicitly specified, then Compute uses the virt machine type as the default for ARMv7 and AArch64. Valid types can be viewed by using the virsh capabilities command. The machine types are displayed in the machine tag. libvirt API driver hw_numa_nodes Number of NUMA nodes to expose to the instance (does not override flavor definition). Integer. libvirt API driver hw_numa_cpus.0 Mapping of vCPUs N-M to NUMA node 0 (does not override flavor definition). Comma-separated list of integers. libvirt API driver hw_numa_cpus.1 Mapping of vCPUs N-M to NUMA node 1 (does not override flavor definition). Comma-separated list of integers. libvirt API driver hw_numa_mem.0 Mapping N MB of RAM to NUMA node 0 (does not override flavor definition). Integer libvirt API driver hw_numa_mem.1 Mapping N MB of RAM to NUMA node 1 (does not override flavor definition). Integer libvirt API driver hw_pci_numa_affinity_policy Specifies the NUMA affinity policy for PCI passthrough devices and SR-IOV interfaces. Set to one of the following valid values: required : The Compute service creates an instance that requests a PCI device only when at least one of the NUMA nodes of the instance has affinity with the PCI device. This option provides the best performance. preferred : The Compute service attempts a best effort selection of PCI devices based on NUMA affinity. If affinity is not possible, then the Compute service schedules the instance on a NUMA node that has no affinity with the PCI device. legacy : (Default) The Compute service creates instances that request a PCI device in one of the following cases: The PCI device has affinity with at least one of the NUMA nodes. The PCI devices do not provide information about their NUMA affinities. libvirt API driver hw_qemu_guest_agent Guest agent support. If set to yes , and if qemu-ga is also installed, file systems can be quiesced (frozen) and snapshots created automatically. yes / no libvirt API driver hw_rng_model Adds a random number generator (RNG) device to instances launched with this image. The instance flavor enables the RNG device by default. To disable the RNG device, the cloud administrator must set hw_rng:allowed to False on the flavor. The default entropy source is /dev/random . To specify a hardware RNG device, set rng_dev_path to /dev/hwrng in your Compute environment file. virtio , or other supported device. libvirt API driver hw_scsi_model Enables the use of VirtIO SCSI (virtio-scsi) to provide block device access for compute instances; by default, instances use VirtIO Block (virtio-blk). 
VirtIO SCSI is a para-virtualized SCSI controller device that provides improved scalability and performance, and supports advanced SCSI hardware. virtio-scsi libvirt API driver hw_tpm_model Set to the model of TPM device to use. Ignored if hw:tpm_version is not configured. tpm-tis : (Default) TPM Interface Specification. tpm-crb : Command-Response Buffer. Compatible only with TPM version 2.0. libvirt API driver hw_tpm_version Set to the version of TPM to use. TPM version 2.0 is the only supported version. 2.0 libvirt API driver hw_video_model The video device driver for the display device to use in virtual machine instances. Set to one of the following values to specify the supported driver to use: virtio - (Default) Recommended Driver for the virtual machine display device, supported by most architectures. The VirtIO GPU driver is included in RHEL-7 and later, and Linux kernel versions 4.4 and later. If an instance kernel has the VirtIO GPU driver, then the instance can use all the VirtIO GPU features. If an instance kernel does not have the VirtIO GPU driver, the VirtIO GPU device gracefully falls back to VGA compatibility mode, which provides a working display for the instance. qxl - Deprecated Driver for Spice or noVNC environments that is no longer maintained. cirrus - Legacy driver, supported only for backward compatibility. Do not use for new instances. vga - Use this driver for IBM Power environments. gop - Not supported for QEMU/KVM environments. xen - Not supported for KVM environments. vmvga - Legacy driver, do not use. none - Use this value to disable emulated graphics or video in virtual GPU (vGPU) instances where the driver is configured separately. libvirt API driver hw_video_ram Maximum RAM for the video image. Used only if a hw_video:ram_max_mb value has been set in the flavor's extra_specs and that value is higher than the value set in hw_video_ram . Integer in MB (for example, 64 ) libvirt API driver hw_watchdog_action Enables a virtual hardware watchdog device that carries out the specified action if the server hangs. The watchdog uses the i6300esb device (emulating a PCI Intel 6300ESB). If hw_watchdog_action is not specified, the watchdog is disabled. disabled-The device is not attached. Allows the user to disable the watchdog for the image, even if it has been enabled using the image's flavor. The default value for this parameter is disabled. reset-Forcefully reset the guest. poweroff-Forcefully power off the guest. pause-Pause the guest. none-Only enable the watchdog; do nothing if the server hangs. libvirt API driver os_command_line The kernel command line to be used by the libvirt driver, instead of the default. For Linux Containers(LXC), the value is used as arguments for initialization. This key is valid only for Amazon kernel, ramdisk, or machine images (aki, ari, or ami). libvirt API driver os_secure_boot Use to create an instance that is protected with UEFI Secure Boot. Set to one of the following valid values: required : Enables Secure Boot for instances launched with this image. The instance is only launched if the Compute service locates a host that can support Secure Boot. If no host is found, the Compute service returns a "No valid host" error. disabled : Disables Secure Boot for instances launched with this image. Disabled by default. optional : Enables Secure Boot for instances launched with this image only when the Compute service determines that the host can support Secure Boot. 
libvirt API driver and VMware API driver hw_vif_model Specifies the model of virtual network interface device to use. The valid options depend on the configured hypervisor. KVM and QEMU: e1000, ne2k_pci, pcnet, rtl8139, and virtio. VMware: e1000, e1000e, VirtualE1000, VirtualE1000e, VirtualPCNet32, VirtualSriovEthernetCard, and VirtualVmxnet. Xen: e1000, netfront, ne2k_pci, pcnet, and rtl8139. VMware API driver vmware_adaptertype The virtual SCSI or IDE controller used by the hypervisor. lsiLogic , busLogic , or ide VMware API driver vmware_ostype A VMware GuestID which describes the operating system installed in the image. This value is passed to the hypervisor when creating a virtual machine. If not specified, the key defaults to otherGuest . For more information, see Images with VMware vSphere . VMware API driver vmware_image_version Currently unused. 1 XenAPI driver auto_disk_config If true, the root partition on the disk is automatically resized before the instance boots. This value is only taken into account by the Compute service when using a Xen-based hypervisor with the XenAPI driver. The Compute service will only attempt to resize if there is a single partition on the image, and only if the partition is in ext3 or ext4 format. true / false libvirt API driver and XenAPI driver os_type The operating system installed on the image. The XenAPI driver contains logic that takes different actions depending on the value of the os_type parameter of the image. For example, for os_type=windows images, it creates a FAT32-based swap partition instead of a Linux swap partition, and it limits the injected host name to less than 16 characters. linux or windows | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/creating_and_managing_images/assembly_image-config-parameters_glance-creating-images |
Chapter 8. Working with clusters | Chapter 8. Working with clusters 8.1. Viewing system event information in an OpenShift Container Platform cluster Events in OpenShift Container Platform are modeled based on events that happen to API objects in an OpenShift Container Platform cluster. 8.1.1. Understanding events Events allow OpenShift Container Platform to record information about real-world events in a resource-agnostic manner. They also allow developers and administrators to consume information about system components in a unified way. 8.1.2. Viewing events using the CLI You can get a list of events in a given project using the CLI. Procedure To view events in a project use the following command: USD oc get events [-n <project>] 1 1 The name of the project. For example: USD oc get events -n openshift-config Example output LAST SEEN TYPE REASON OBJECT MESSAGE 97m Normal Scheduled pod/dapi-env-test-pod Successfully assigned openshift-config/dapi-env-test-pod to ip-10-0-171-202.ec2.internal 97m Normal Pulling pod/dapi-env-test-pod pulling image "gcr.io/google_containers/busybox" 97m Normal Pulled pod/dapi-env-test-pod Successfully pulled image "gcr.io/google_containers/busybox" 97m Normal Created pod/dapi-env-test-pod Created container 9m5s Warning FailedCreatePodSandBox pod/dapi-volume-test-pod Failed create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dapi-volume-test-pod_openshift-config_6bc60c1f-452e-11e9-9140-0eec59c23068_0(748c7a40db3d08c07fb4f9eba774bd5effe5f0d5090a242432a73eee66ba9e22): Multus: Err adding pod to network "openshift-sdn": cannot set "openshift-sdn" ifname to "eth0": no netns: failed to Statfs "/proc/33366/ns/net": no such file or directory 8m31s Normal Scheduled pod/dapi-volume-test-pod Successfully assigned openshift-config/dapi-volume-test-pod to ip-10-0-171-202.ec2.internal To view events in your project from the OpenShift Container Platform console. Launch the OpenShift Container Platform console. Click Home Events and select your project. Move to resource that you want to see events. For example: Home Projects <project-name> <resource-name>. Many objects, such as pods and deployments, have their own Events tab as well, which shows events related to that object. 8.1.3. List of events This section describes the events of OpenShift Container Platform. Table 8.1. Configuration events Name Description FailedValidation Failed pod configuration validation. Table 8.2. Container events Name Description BackOff Back-off restarting failed the container. Created Container created. Failed Pull/Create/Start failed. Killing Killing the container. Started Container started. Preempting Preempting other pods. ExceededGracePeriod Container runtime did not stop the pod within specified grace period. Table 8.3. Health events Name Description Unhealthy Container is unhealthy. Table 8.4. Image events Name Description BackOff Back off Ctr Start, image pull. ErrImageNeverPull The image's NeverPull Policy is violated. Failed Failed to pull the image. InspectFailed Failed to inspect the image. Pulled Successfully pulled the image or the container image is already present on the machine. Pulling Pulling the image. Table 8.5. Image Manager events Name Description FreeDiskSpaceFailed Free disk space failed. InvalidDiskCapacity Invalid disk capacity. Table 8.6. Node events Name Description FailedMount Volume mount failed. HostNetworkNotSupported Host network not supported. HostPortConflict Host/port conflict. KubeletSetupFailed Kubelet setup failed. 
NilShaper Undefined shaper. NodeNotReady Node is not ready. NodeNotSchedulable Node is not schedulable. NodeReady Node is ready. NodeSchedulable Node is schedulable. NodeSelectorMismatching Node selector mismatch. OutOfDisk Out of disk. Rebooted Node rebooted. Starting Starting kubelet. FailedAttachVolume Failed to attach volume. FailedDetachVolume Failed to detach volume. VolumeResizeFailed Failed to expand/reduce volume. VolumeResizeSuccessful Successfully expanded/reduced volume. FileSystemResizeFailed Failed to expand/reduce file system. FileSystemResizeSuccessful Successfully expanded/reduced file system. FailedUnMount Failed to unmount volume. FailedMapVolume Failed to map a volume. FailedUnmapDevice Failed unmaped device. AlreadyMountedVolume Volume is already mounted. SuccessfulDetachVolume Volume is successfully detached. SuccessfulMountVolume Volume is successfully mounted. SuccessfulUnMountVolume Volume is successfully unmounted. ContainerGCFailed Container garbage collection failed. ImageGCFailed Image garbage collection failed. FailedNodeAllocatableEnforcement Failed to enforce System Reserved Cgroup limit. NodeAllocatableEnforced Enforced System Reserved Cgroup limit. UnsupportedMountOption Unsupported mount option. SandboxChanged Pod sandbox changed. FailedCreatePodSandBox Failed to create pod sandbox. FailedPodSandBoxStatus Failed pod sandbox status. Table 8.7. Pod worker events Name Description FailedSync Pod sync failed. Table 8.8. System Events Name Description SystemOOM There is an OOM (out of memory) situation on the cluster. Table 8.9. Pod events Name Description FailedKillPod Failed to stop a pod. FailedCreatePodContainer Failed to create a pod container. Failed Failed to make pod data directories. NetworkNotReady Network is not ready. FailedCreate Error creating: <error-msg> . SuccessfulCreate Created pod: <pod-name> . FailedDelete Error deleting: <error-msg> . SuccessfulDelete Deleted pod: <pod-id> . Table 8.10. Horizontal Pod AutoScaler events Name Description SelectorRequired Selector is required. InvalidSelector Could not convert selector into a corresponding internal selector object. FailedGetObjectMetric HPA was unable to compute the replica count. InvalidMetricSourceType Unknown metric source type. ValidMetricFound HPA was able to successfully calculate a replica count. FailedConvertHPA Failed to convert the given HPA. FailedGetScale HPA controller was unable to get the target's current scale. SucceededGetScale HPA controller was able to get the target's current scale. FailedComputeMetricsReplicas Failed to compute desired number of replicas based on listed metrics. FailedRescale New size: <size> ; reason: <msg> ; error: <error-msg> . SuccessfulRescale New size: <size> ; reason: <msg> . FailedUpdateStatus Failed to update status. Table 8.11. Network events (openshift-sdn) Name Description Starting Starting OpenShift SDN. NetworkFailed The pod's network interface has been lost and the pod will be stopped. Table 8.12. Network events (kube-proxy) Name Description NeedPods The service-port <serviceName>:<port> needs pods. Table 8.13. Volume events Name Description FailedBinding There are no persistent volumes available and no storage class is set. VolumeMismatch Volume size or class is different from what is requested in claim. VolumeFailedRecycle Error creating recycler pod. VolumeRecycled Occurs when volume is recycled. RecyclerPod Occurs when pod is recycled. VolumeDelete Occurs when volume is deleted. VolumeFailedDelete Error when deleting the volume. 
ExternalProvisioning Occurs when volume for the claim is provisioned either manually or via external software. ProvisioningFailed Failed to provision volume. ProvisioningCleanupFailed Error cleaning provisioned volume. ProvisioningSucceeded Occurs when the volume is provisioned successfully. WaitForFirstConsumer Delay binding until pod scheduling. Table 8.14. Lifecycle hooks Name Description FailedPostStartHook Handler failed for pod start. FailedPreStopHook Handler failed for pre-stop. UnfinishedPreStopHook Pre-stop hook unfinished. Table 8.15. Deployments Name Description DeploymentCancellationFailed Failed to cancel deployment. DeploymentCancelled Canceled deployment. DeploymentCreated Created new replication controller. IngressIPRangeFull No available Ingress IP to allocate to service. Table 8.16. Scheduler events Name Description FailedScheduling Failed to schedule pod: <pod-namespace>/<pod-name> . This event is raised for multiple reasons, for example: AssumePodVolumes failed, Binding rejected etc. Preempted By <preemptor-namespace>/<preemptor-name> on node <node-name> . Scheduled Successfully assigned <pod-name> to <node-name> . Table 8.17. Daemon set events Name Description SelectingAll This daemon set is selecting all pods. A non-empty selector is required. FailedPlacement Failed to place pod on <node-name> . FailedDaemonPod Found failed daemon pod <pod-name> on node <node-name> , will try to kill it. Table 8.18. LoadBalancer service events Name Description CreatingLoadBalancerFailed Error creating load balancer. DeletingLoadBalancer Deleting load balancer. EnsuringLoadBalancer Ensuring load balancer. EnsuredLoadBalancer Ensured load balancer. UnAvailableLoadBalancer There are no available nodes for LoadBalancer service. LoadBalancerSourceRanges Lists the new LoadBalancerSourceRanges . For example, <old-source-range> <new-source-range> . LoadbalancerIP Lists the new IP address. For example, <old-ip> <new-ip> . ExternalIP Lists external IP address. For example, Added: <external-ip> . UID Lists the new UID. For example, <old-service-uid> <new-service-uid> . ExternalTrafficPolicy Lists the new ExternalTrafficPolicy . For example, <old-policy> <new-policy> . HealthCheckNodePort Lists the new HealthCheckNodePort . For example, <old-node-port> new-node-port> . UpdatedLoadBalancer Updated load balancer with new hosts. LoadBalancerUpdateFailed Error updating load balancer with new hosts. DeletingLoadBalancer Deleting load balancer. DeletingLoadBalancerFailed Error deleting load balancer. DeletedLoadBalancer Deleted load balancer. 8.2. Estimating the number of pods your OpenShift Container Platform nodes can hold As a cluster administrator, you can use the OpenShift Cluster Capacity Tool to view the number of pods that can be scheduled to increase the current resources before they become exhausted, and to ensure any future pods can be scheduled. This capacity comes from an individual node host in a cluster, and includes CPU, memory, disk space, and others. 8.2.1. Understanding the OpenShift Cluster Capacity Tool The OpenShift Cluster Capacity Tool simulates a sequence of scheduling decisions to determine how many instances of an input pod can be scheduled on the cluster before it is exhausted of resources to provide a more accurate estimation. Note The remaining allocatable capacity is a rough estimation, because it does not count all of the resources being distributed among nodes. 
It analyzes only the remaining resources and estimates the available capacity that is still consumable in terms of a number of instances of a pod with given requirements that can be scheduled in a cluster. Also, pods might only have scheduling support on particular sets of nodes based on their selection and affinity criteria. As a result, the estimation of which remaining pods a cluster can schedule can be difficult. You can run the OpenShift Cluster Capacity Tool as a stand-alone utility from the command line, or as a job in a pod inside an OpenShift Container Platform cluster. Running the tool as a job inside of a pod enables you to run it multiple times without intervention. 8.2.2. Running the OpenShift Cluster Capacity Tool on the command line You can run the OpenShift Cluster Capacity Tool from the command line to estimate the number of pods that can be scheduled onto your cluster. You create a sample pod spec file, which the tool uses for estimating resource usage. The pod spec specifies its resource requirements as limits or requests. The cluster capacity tool takes the pod's resource requirements into account for its estimation analysis. Prerequisites The OpenShift Cluster Capacity Tool is available as a container image from the Red Hat Ecosystem Catalog. Create a sample pod spec file: Create a YAML file similar to the following: apiVersion: v1 kind: Pod metadata: name: small-pod labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v4 imagePullPolicy: Always resources: limits: cpu: 150m memory: 100Mi requests: cpu: 150m memory: 100Mi Create the object: USD oc create -f <file_name>.yaml For example: USD oc create -f pod-spec.yaml Procedure To use the cluster capacity tool on the command line: From the terminal, log in to the Red Hat Registry: USD podman login registry.redhat.io Pull the cluster capacity tool image: USD podman pull registry.redhat.io/openshift4/ose-cluster-capacity Run the cluster capacity tool: USD podman run -v USDHOME/.kube:/kube:Z -v USD(pwd):/cc:Z ose-cluster-capacity \ /bin/cluster-capacity --kubeconfig /kube/config --podspec /cc/<pod_spec>.yaml \ --verbose where: <pod_spec>.yaml Specifies the pod spec to use. verbose Outputs a detailed description of how many pods can be scheduled on each node in the cluster. Example output small-pod pod requirements: - CPU: 150m - Memory: 100Mi The cluster can schedule 88 instance(s) of the pod small-pod. Termination reason: Unschedulable: 0/5 nodes are available: 2 Insufficient cpu, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate. Pod distribution among nodes: small-pod - 192.168.124.214: 45 instance(s) - 192.168.124.120: 43 instance(s) In the above example, the number of estimated pods that can be scheduled onto the cluster is 88. 8.2.3. Running the OpenShift Cluster Capacity Tool as a job inside a pod Running the OpenShift Cluster Capacity Tool as a job inside of a pod allows you to run the tool multiple times without needing user intervention. You run the OpenShift Cluster Capacity Tool as a job by using a ConfigMap object. Prerequisites Download and install the OpenShift Cluster Capacity Tool.
Procedure To run the cluster capacity tool: Create the cluster role: Create a YAML file similar to the following: kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: cluster-capacity-role rules: - apiGroups: [""] resources: ["pods", "nodes", "persistentvolumeclaims", "persistentvolumes", "services", "replicationcontrollers"] verbs: ["get", "watch", "list"] - apiGroups: ["apps"] resources: ["replicasets", "statefulsets"] verbs: ["get", "watch", "list"] - apiGroups: ["policy"] resources: ["poddisruptionbudgets"] verbs: ["get", "watch", "list"] - apiGroups: ["storage.k8s.io"] resources: ["storageclasses"] verbs: ["get", "watch", "list"] Create the cluster role by running the following command: USD oc create -f <file_name>.yaml For example: USD oc create -f cluster-capacity-role.yaml Create the service account: USD oc create sa cluster-capacity-sa -n default Add the role to the service account: USD oc adm policy add-cluster-role-to-user cluster-capacity-role \ system:serviceaccount:<namespace>:cluster-capacity-sa where: <namespace> Specifies the namespace where the pod is located. Define and create the pod spec: Create a YAML file similar to the following: apiVersion: v1 kind: Pod metadata: name: small-pod labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v4 imagePullPolicy: Always resources: limits: cpu: 150m memory: 100Mi requests: cpu: 150m memory: 100Mi Create the pod by running the following command: USD oc create -f <file_name>.yaml For example: USD oc create -f pod.yaml Create a config map object by running the following command: USD oc create configmap cluster-capacity-configmap \ --from-file=pod.yaml=pod.yaml The cluster capacity analysis is mounted in a volume using a config map object named cluster-capacity-configmap to mount the input pod spec file pod.yaml into a volume test-volume at the path /test-pod. Create the job by using the following example of a job specification file: Create a YAML file similar to the following: apiVersion: batch/v1 kind: Job metadata: name: cluster-capacity-job spec: parallelism: 1 completions: 1 template: metadata: name: cluster-capacity-pod spec: containers: - name: cluster-capacity image: openshift/origin-cluster-capacity imagePullPolicy: "Always" volumeMounts: - mountPath: /test-pod name: test-volume env: - name: CC_INCLUSTER 1 value: "true" command: - "/bin/sh" - "-ec" - | /bin/cluster-capacity --podspec=/test-pod/pod.yaml --verbose restartPolicy: "Never" serviceAccountName: cluster-capacity-sa volumes: - name: test-volume configMap: name: cluster-capacity-configmap 1 A required environment variable letting the cluster capacity tool know that it is running inside a cluster as a pod. The pod.yaml key of the ConfigMap object is the same as the Pod spec file name, though it is not required. By doing this, the input pod spec file can be accessed inside the pod as /test-pod/pod.yaml. Run the cluster capacity image as a job in a pod by running the following command: USD oc create -f cluster-capacity-job.yaml Verification Check the job logs to find the number of pods that can be scheduled in the cluster: USD oc logs jobs/cluster-capacity-job Example output small-pod pod requirements: - CPU: 150m - Memory: 100Mi The cluster can schedule 52 instance(s) of the pod small-pod. Termination reason: Unschedulable: No nodes are available that match all of the following predicates:: Insufficient cpu (2).
Pod distribution among nodes: small-pod - 192.168.124.214: 26 instance(s) - 192.168.124.120: 26 instance(s) 8.3. Restrict resource consumption with limit ranges By default, containers run with unbounded compute resources on an OpenShift Container Platform cluster. With limit ranges, you can restrict resource consumption for specific objects in a project: pods and containers: You can set minimum and maximum requirements for CPU and memory for pods and their containers. Image streams: You can set limits on the number of images and tags in an ImageStream object. Images: You can limit the size of images that can be pushed to an internal registry. Persistent volume claims (PVC): You can restrict the size of the PVCs that can be requested. If a pod does not meet the constraints imposed by the limit range, the pod cannot be created in the namespace. 8.3.1. About limit ranges A limit range, defined by a LimitRange object, restricts resource consumption in a project. In the project you can set specific resource limits for a pod, container, image, image stream, or persistent volume claim (PVC). All requests to create and modify resources are evaluated against each LimitRange object in the project. If the resource violates any of the enumerated constraints, the resource is rejected. The following shows a limit range object for all components: pod, container, image, image stream, or PVC. You can configure limits for any or all of these components in the same object. You create a different limit range object for each project where you want to control resources. Sample limit range object for a container apiVersion: "v1" kind: "LimitRange" metadata: name: "resource-limits" spec: limits: - type: "Container" max: cpu: "2" memory: "1Gi" min: cpu: "100m" memory: "4Mi" default: cpu: "300m" memory: "200Mi" defaultRequest: cpu: "200m" memory: "100Mi" maxLimitRequestRatio: cpu: "10" 8.3.1.1. About component limits The following examples show limit range parameters for each component. The examples are broken out for clarity. You can create a single LimitRange object for any or all components as necessary. 8.3.1.1.1. Container limits A limit range allows you to specify the minimum and maximum CPU and memory that each container in a pod can request for a specific project. If a container is created in the project, the container CPU and memory requests in the Pod spec must comply with the values set in the LimitRange object. If not, the pod does not get created. The container CPU or memory request and limit must be greater than or equal to the min resource constraint for containers that are specified in the LimitRange object. The container CPU or memory request and limit must be less than or equal to the max resource constraint for containers that are specified in the LimitRange object. If the LimitRange object defines a max CPU, you do not need to define a CPU request value in the Pod spec. But you must specify a CPU limit value that satisfies the maximum CPU constraint specified in the limit range. The ratio of the container limits to requests must be less than or equal to the maxLimitRequestRatio value for containers that is specified in the LimitRange object. If the LimitRange object defines a maxLimitRequestRatio constraint, any new containers must have both a request and a limit value. OpenShift Container Platform calculates the limit-to-request ratio by dividing the limit by the request . This value should be a non-negative integer greater than 1. 
For example, if a container has cpu: 500 in the limit value, and cpu: 100 in the request value, the limit-to-request ratio for cpu is 5 . This ratio must be less than or equal to the maxLimitRequestRatio . If the Pod spec does not specify a container resource memory or limit, the default or defaultRequest CPU and memory values for containers specified in the limit range object are assigned to the container. Container LimitRange object definition apiVersion: "v1" kind: "LimitRange" metadata: name: "resource-limits" 1 spec: limits: - type: "Container" max: cpu: "2" 2 memory: "1Gi" 3 min: cpu: "100m" 4 memory: "4Mi" 5 default: cpu: "300m" 6 memory: "200Mi" 7 defaultRequest: cpu: "200m" 8 memory: "100Mi" 9 maxLimitRequestRatio: cpu: "10" 10 1 The name of the LimitRange object. 2 The maximum amount of CPU that a single container in a pod can request. 3 The maximum amount of memory that a single container in a pod can request. 4 The minimum amount of CPU that a single container in a pod can request. 5 The minimum amount of memory that a single container in a pod can request. 6 The default amount of CPU that a container can use if not specified in the Pod spec. 7 The default amount of memory that a container can use if not specified in the Pod spec. 8 The default amount of CPU that a container can request if not specified in the Pod spec. 9 The default amount of memory that a container can request if not specified in the Pod spec. 10 The maximum limit-to-request ratio for a container. 8.3.1.1.2. Pod limits A limit range allows you to specify the minimum and maximum CPU and memory limits for all containers across a pod in a given project. To create a container in the project, the container CPU and memory requests in the Pod spec must comply with the values set in the LimitRange object. If not, the pod does not get created. If the Pod spec does not specify a container resource memory or limit, the default or defaultRequest CPU and memory values for containers specified in the limit range object are assigned to the container. Across all containers in a pod, the following must hold true: The container CPU or memory request and limit must be greater than or equal to the min resource constraints for pods that are specified in the LimitRange object. The container CPU or memory request and limit must be less than or equal to the max resource constraints for pods that are specified in the LimitRange object. The ratio of the container limits to requests must be less than or equal to the maxLimitRequestRatio constraint specified in the LimitRange object. Pod LimitRange object definition apiVersion: "v1" kind: "LimitRange" metadata: name: "resource-limits" 1 spec: limits: - type: "Pod" max: cpu: "2" 2 memory: "1Gi" 3 min: cpu: "200m" 4 memory: "6Mi" 5 maxLimitRequestRatio: cpu: "10" 6 1 The name of the limit range object. 2 The maximum amount of CPU that a pod can request across all containers. 3 The maximum amount of memory that a pod can request across all containers. 4 The minimum amount of CPU that a pod can request across all containers. 5 The minimum amount of memory that a pod can request across all containers. 6 The maximum limit-to-request ratio for a container. 8.3.1.1.3. Image limits A LimitRange object allows you to specify the maximum size of an image that can be pushed to an OpenShift image registry. 
When pushing images to an OpenShift image registry, the following must hold true: The size of the image must be less than or equal to the max size for images that is specified in the LimitRange object. Image LimitRange object definition apiVersion: "v1" kind: "LimitRange" metadata: name: "resource-limits" 1 spec: limits: - type: openshift.io/Image max: storage: 1Gi 2 1 The name of the LimitRange object. 2 The maximum size of an image that can be pushed to an OpenShift image registry. Note To prevent blobs that exceed the limit from being uploaded to the registry, the registry must be configured to enforce quotas. Warning The image size is not always available in the manifest of an uploaded image. This is especially the case for images built with Docker 1.10 or higher and pushed to a v2 registry. If such an image is pulled with an older Docker daemon, the image manifest is converted by the registry to schema v1 lacking all the size information. No storage limit set on images prevent it from being uploaded. The issue is being addressed. 8.3.1.1.4. Image stream limits A LimitRange object allows you to specify limits for image streams. For each image stream, the following must hold true: The number of image tags in an ImageStream specification must be less than or equal to the openshift.io/image-tags constraint in the LimitRange object. The number of unique references to images in an ImageStream specification must be less than or equal to the openshift.io/images constraint in the limit range object. Imagestream LimitRange object definition apiVersion: "v1" kind: "LimitRange" metadata: name: "resource-limits" 1 spec: limits: - type: openshift.io/ImageStream max: openshift.io/image-tags: 20 2 openshift.io/images: 30 3 1 The name of the LimitRange object. 2 The maximum number of unique image tags in the imagestream.spec.tags parameter in imagestream spec. 3 The maximum number of unique image references in the imagestream.status.tags parameter in the imagestream spec. The openshift.io/image-tags resource represents unique image references. Possible references are an ImageStreamTag , an ImageStreamImage and a DockerImage . Tags can be created using the oc tag and oc import-image commands. No distinction is made between internal and external references. However, each unique reference tagged in an ImageStream specification is counted just once. It does not restrict pushes to an internal container image registry in any way, but is useful for tag restriction. The openshift.io/images resource represents unique image names recorded in image stream status. It allows for restriction of a number of images that can be pushed to the OpenShift image registry. Internal and external references are not distinguished. 8.3.1.1.5. Persistent volume claim limits A LimitRange object allows you to restrict the storage requested in a persistent volume claim (PVC). Across all persistent volume claims in a project, the following must hold true: The resource request in a persistent volume claim (PVC) must be greater than or equal the min constraint for PVCs that is specified in the LimitRange object. The resource request in a persistent volume claim (PVC) must be less than or equal the max constraint for PVCs that is specified in the LimitRange object. PVC LimitRange object definition apiVersion: "v1" kind: "LimitRange" metadata: name: "resource-limits" 1 spec: limits: - type: "PersistentVolumeClaim" min: storage: "2Gi" 2 max: storage: "50Gi" 3 1 The name of the LimitRange object. 
2 The minimum amount of storage that can be requested in a persistent volume claim. 3 The maximum amount of storage that can be requested in a persistent volume claim. 8.3.2. Creating a Limit Range To apply a limit range to a project: Create a LimitRange object with your required specifications: apiVersion: "v1" kind: "LimitRange" metadata: name: "resource-limits" 1 spec: limits: - type: "Pod" 2 max: cpu: "2" memory: "1Gi" min: cpu: "200m" memory: "6Mi" - type: "Container" 3 max: cpu: "2" memory: "1Gi" min: cpu: "100m" memory: "4Mi" default: 4 cpu: "300m" memory: "200Mi" defaultRequest: 5 cpu: "200m" memory: "100Mi" maxLimitRequestRatio: 6 cpu: "10" - type: openshift.io/Image 7 max: storage: 1Gi - type: openshift.io/ImageStream 8 max: openshift.io/image-tags: 20 openshift.io/images: 30 - type: "PersistentVolumeClaim" 9 min: storage: "2Gi" max: storage: "50Gi" 1 Specify a name for the LimitRange object. 2 To set limits for a pod, specify the minimum and maximum CPU and memory requests as needed. 3 To set limits for a container, specify the minimum and maximum CPU and memory requests as needed. 4 Optional. For a container, specify the default amount of CPU or memory that a container can use, if not specified in the Pod spec. 5 Optional. For a container, specify the default amount of CPU or memory that a container can request, if not specified in the Pod spec. 6 Optional. For a container, specify the maximum limit-to-request ratio that can be specified in the Pod spec. 7 To set limits for an Image object, set the maximum size of an image that can be pushed to an OpenShift image registry. 8 To set limits for an image stream, set the maximum number of image tags and references that can be in the ImageStream object file, as needed. 9 To set limits for a persistent volume claim, set the minimum and maximum amount of storage that can be requested. Create the object: USD oc create -f <limit_range_file> -n <project> 1 1 Specify the name of the YAML file you created and the project where you want the limits to apply. 8.3.3. Viewing a limit You can view any limits defined in a project by navigating in the web console to the project's Quota page. You can also use the CLI to view limit range details: Get the list of LimitRange object defined in the project. For example, for a project called demoproject : USD oc get limits -n demoproject NAME CREATED AT resource-limits 2020-07-15T17:14:23Z Describe the LimitRange object you are interested in, for example the resource-limits limit range: USD oc describe limits resource-limits -n demoproject Name: resource-limits Namespace: demoproject Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio ---- -------- --- --- --------------- ------------- ----------------------- Pod cpu 200m 2 - - - Pod memory 6Mi 1Gi - - - Container cpu 100m 2 200m 300m 10 Container memory 4Mi 1Gi 100Mi 200Mi - openshift.io/Image storage - 1Gi - - - openshift.io/ImageStream openshift.io/image - 12 - - - openshift.io/ImageStream openshift.io/image-tags - 10 - - - PersistentVolumeClaim storage - 50Gi - - - 8.3.4. Deleting a Limit Range To remove any active LimitRange object to no longer enforce the limits in a project: Run the following command: USD oc delete limits <limit_name> 8.4. 
Configuring cluster memory to meet container memory and risk requirements As a cluster administrator, you can help your clusters operate efficiently through managing application memory by: Determining the memory and risk requirements of a containerized application component and configuring the container memory parameters to suit those requirements. Configuring containerized application runtimes (for example, OpenJDK) to adhere optimally to the configured container memory parameters. Diagnosing and resolving memory-related error conditions associated with running in a container. 8.4.1. Understanding managing application memory It is recommended to fully read the overview of how OpenShift Container Platform manages Compute Resources before proceeding. For each kind of resource (memory, CPU, storage), OpenShift Container Platform allows optional request and limit values to be placed on each container in a pod. Note the following about memory requests and memory limits: Memory request The memory request value, if specified, influences the OpenShift Container Platform scheduler. The scheduler considers the memory request when scheduling a container to a node, then fences off the requested memory on the chosen node for the use of the container. If a node's memory is exhausted, OpenShift Container Platform prioritizes evicting its containers whose memory usage most exceeds their memory request. In serious cases of memory exhaustion, the node OOM killer may select and kill a process in a container based on a similar metric. The cluster administrator can assign quota or assign default values for the memory request value. The cluster administrator can override the memory request values that a developer specifies, to manage cluster overcommit. Memory limit The memory limit value, if specified, provides a hard limit on the memory that can be allocated across all the processes in a container. If the memory allocated by all of the processes in a container exceeds the memory limit, the node Out of Memory (OOM) killer will immediately select and kill a process in the container. If both memory request and limit are specified, the memory limit value must be greater than or equal to the memory request. The cluster administrator can assign quota or assign default values for the memory limit value. The minimum memory limit is 12 MB. If a container fails to start due to a Cannot allocate memory pod event, the memory limit is too low. Either increase or remove the memory limit. Removing the limit allows pods to consume unbounded node resources. 8.4.1.1. Managing application memory strategy The steps for sizing application memory on OpenShift Container Platform are as follows: Determine expected container memory usage Determine expected mean and peak container memory usage, empirically if necessary (for example, by separate load testing). Remember to consider all the processes that may potentially run in parallel in the container: for example, does the main application spawn any ancillary scripts? Determine risk appetite Determine risk appetite for eviction. If the risk appetite is low, the container should request memory according to the expected peak usage plus a percentage safety margin. If the risk appetite is higher, it may be more appropriate to request memory according to the expected mean usage. Set container memory request Set container memory request based on the above. The more accurately the request represents the application memory usage, the better. 
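For example, if separate load testing shows a mean usage of roughly 300 MiB and a peak of roughly 400 MiB, a low-risk sizing would set the request slightly above the observed peak. The following is a minimal sketch; the numbers, pod name, and image are illustrative only, not a recommendation:

apiVersion: v1
kind: Pod
metadata:
  name: sized-app                     # illustrative name
spec:
  containers:
  - name: app
    image: example.com/myapp:latest   # hypothetical image
    resources:
      requests:
        memory: 450Mi                 # observed peak (~400Mi) plus a safety margin
      limits:
        memory: 600Mi                 # optional hard cap; see the next step before setting one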
If the request is too high, cluster and quota usage will be inefficient. If the request is too low, the chances of application eviction increase. Set container memory limit, if required Set container memory limit, if required. Setting a limit has the effect of immediately killing a container process if the combined memory usage of all processes in the container exceeds the limit, and is therefore a mixed blessing. On the one hand, it may make unanticipated excess memory usage obvious early ("fail fast"); on the other hand it also terminates processes abruptly. Note that some OpenShift Container Platform clusters may require a limit value to be set; some may override the request based on the limit; and some application images rely on a limit value being set as this is easier to detect than a request value. If the memory limit is set, it should not be set to less than the expected peak container memory usage plus a percentage safety margin. Ensure application is tuned Ensure application is tuned with respect to configured request and limit values, if appropriate. This step is particularly relevant to applications which pool memory, such as the JVM. The rest of this page discusses this. Additional resources Understanding compute resources and containers 8.4.2. Understanding OpenJDK settings for OpenShift Container Platform The default OpenJDK settings do not work well with containerized environments. As a result, some additional Java memory settings must always be provided whenever running the OpenJDK in a container. The JVM memory layout is complex, version dependent, and describing it in detail is beyond the scope of this documentation. However, as a starting point for running OpenJDK in a container, at least the following three memory-related tasks are key: Overriding the JVM maximum heap size. Encouraging the JVM to release unused memory to the operating system, if appropriate. Ensuring all JVM processes within a container are appropriately configured. Optimally tuning JVM workloads for running in a container is beyond the scope of this documentation, and may involve setting multiple additional JVM options. 8.4.2.1. Understanding how to override the JVM maximum heap size For many Java workloads, the JVM heap is the largest single consumer of memory. Currently, the OpenJDK defaults to allowing up to 1/4 (1/ -XX:MaxRAMFraction ) of the compute node's memory to be used for the heap, regardless of whether the OpenJDK is running in a container or not. It is therefore essential to override this behavior, especially if a container memory limit is also set. There are at least two ways the above can be achieved: If the container memory limit is set and the experimental options are supported by the JVM, set -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap . Note The UseCGroupMemoryLimitForHeap option has been removed in JDK 11. Use -XX:+UseContainerSupport instead. This sets -XX:MaxRAM to the container memory limit, and the maximum heap size ( -XX:MaxHeapSize / -Xmx ) to 1/ -XX:MaxRAMFraction (1/4 by default). Directly override one of -XX:MaxRAM , -XX:MaxHeapSize or -Xmx . This option involves hard-coding a value, but has the advantage of allowing a safety margin to be calculated. 8.4.2.2. Understanding how to encourage the JVM to release unused memory to the operating system By default, the OpenJDK does not aggressively return unused memory to the operating system. 
This may be appropriate for many containerized Java workloads, but notable exceptions include workloads where additional active processes co-exist with a JVM within a container, whether those additional processes are native, additional JVMs, or a combination of the two. Java-based agents can use the following JVM arguments to encourage the JVM to release unused memory to the operating system: -XX:+UseParallelGC -XX:MinHeapFreeRatio=5 -XX:MaxHeapFreeRatio=10 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90. These arguments are intended to return heap memory to the operating system whenever allocated memory exceeds 110% of in-use memory ( -XX:MaxHeapFreeRatio ), spending up to 20% of CPU time in the garbage collector ( -XX:GCTimeRatio ). At no time will the application heap allocation be less than the initial heap allocation (overridden by -XX:InitialHeapSize / -Xms ). Detailed additional information is available Tuning Java's footprint in OpenShift (Part 1) , Tuning Java's footprint in OpenShift (Part 2) , and at OpenJDK and Containers . 8.4.2.3. Understanding how to ensure all JVM processes within a container are appropriately configured In the case that multiple JVMs run in the same container, it is essential to ensure that they are all configured appropriately. For many workloads it will be necessary to grant each JVM a percentage memory budget, leaving a perhaps substantial additional safety margin. Many Java tools use different environment variables ( JAVA_OPTS , GRADLE_OPTS , and so on) to configure their JVMs and it can be challenging to ensure that the right settings are being passed to the right JVM. The JAVA_TOOL_OPTIONS environment variable is always respected by the OpenJDK, and values specified in JAVA_TOOL_OPTIONS will be overridden by other options specified on the JVM command line. By default, to ensure that these options are used by default for all JVM workloads run in the Java-based agent image, the OpenShift Container Platform Jenkins Maven agent image sets: JAVA_TOOL_OPTIONS="-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -Dsun.zip.disableMemoryMapping=true" Note The UseCGroupMemoryLimitForHeap option has been removed in JDK 11. Use -XX:+UseContainerSupport instead. This does not guarantee that additional options are not required, but is intended to be a helpful starting point. 8.4.3. Finding the memory request and limit from within a pod An application wishing to dynamically discover its memory request and limit from within a pod should use the Downward API. Procedure Configure the pod to add the MEMORY_REQUEST and MEMORY_LIMIT stanzas: Create a YAML file similar to the following: apiVersion: v1 kind: Pod metadata: name: test spec: containers: - name: test image: fedora:latest command: - sleep - "3600" env: - name: MEMORY_REQUEST 1 valueFrom: resourceFieldRef: containerName: test resource: requests.memory - name: MEMORY_LIMIT 2 valueFrom: resourceFieldRef: containerName: test resource: limits.memory resources: requests: memory: 384Mi limits: memory: 512Mi 1 Add this stanza to discover the application memory request value. 2 Add this stanza to discover the application memory limit value. 
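The values exposed this way are plain base-unit counts (bytes for memory), as the verification output below shows. If a coarser unit is easier for the application to consume, the downward API also accepts an optional divisor field on resourceFieldRef . The following is a minimal sketch of an additional, optional env stanza; the MEMORY_LIMIT_MIB variable name is illustrative:

- name: MEMORY_LIMIT_MIB
  valueFrom:
    resourceFieldRef:
      containerName: test
      resource: limits.memory
      divisor: 1Mi        # expose the limit as a whole number of MiB instead of bytes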
Create the pod by running the following command: USD oc create -f <file-name>.yaml Verification Access the pod using a remote shell: USD oc rsh test Check that the requested values were applied: USD env | grep MEMORY | sort Example output MEMORY_LIMIT=536870912 MEMORY_REQUEST=402653184 Note The memory limit value can also be read from inside the container by the /sys/fs/cgroup/memory/memory.limit_in_bytes file. 8.4.4. Understanding OOM kill policy OpenShift Container Platform can kill a process in a container if the total memory usage of all the processes in the container exceeds the memory limit, or in serious cases of node memory exhaustion. When a process is Out of Memory (OOM) killed, this might result in the container exiting immediately. If the container PID 1 process receives the SIGKILL , the container will exit immediately. Otherwise, the container behavior is dependent on the behavior of the other processes. For example, a container process that exits with code 137 indicates that it received a SIGKILL signal. If the container does not exit immediately, an OOM kill is detectable as follows: Access the pod using a remote shell: USD oc rsh test Run the following command to see the current OOM kill count in /sys/fs/cgroup/memory/memory.oom_control : USD grep '^oom_kill ' /sys/fs/cgroup/memory/memory.oom_control Example output oom_kill 0 Run the following command to provoke an OOM kill: USD sed -e '' </dev/zero Example output Killed Run the following command to view the exit status of the sed command: USD echo USD? Example output 137 The 137 exit code indicates that the container process received a SIGKILL signal. Run the following command to see that the OOM kill counter in /sys/fs/cgroup/memory/memory.oom_control incremented: USD grep '^oom_kill ' /sys/fs/cgroup/memory/memory.oom_control Example output oom_kill 1 If one or more processes in a pod are OOM killed, when the pod subsequently exits, whether immediately or not, it will have phase Failed and reason OOMKilled . An OOM-killed pod might be restarted depending on the value of restartPolicy . If not restarted, controllers such as the replication controller will notice the pod's failed status and create a new pod to replace the old one. Use the following command to get the pod status: USD oc get pod test Example output NAME READY STATUS RESTARTS AGE test 0/1 OOMKilled 0 1m If the pod has not restarted, run the following command to view the pod: USD oc get pod test -o yaml Example output ... status: containerStatuses: - name: test ready: false restartCount: 0 state: terminated: exitCode: 137 reason: OOMKilled phase: Failed If restarted, run the following command to view the pod: USD oc get pod test -o yaml Example output ... status: containerStatuses: - name: test ready: true restartCount: 1 lastState: terminated: exitCode: 137 reason: OOMKilled state: running: phase: Running 8.4.5. Understanding pod eviction OpenShift Container Platform may evict a pod from its node when the node's memory is exhausted. Depending on the extent of memory exhaustion, the eviction may or may not be graceful. Graceful eviction implies the main process (PID 1) of each container receiving a SIGTERM signal, then some time later a SIGKILL signal if the process has not exited already. Non-graceful eviction implies the main process of each container immediately receiving a SIGKILL signal. An evicted pod has phase Failed and reason Evicted . It will not be restarted, regardless of the value of restartPolicy .
However, controllers such as the replication controller will notice the pod's failed status and create a new pod to replace the old one. USD oc get pod test Example output NAME READY STATUS RESTARTS AGE test 0/1 Evicted 0 1m USD oc get pod test -o yaml Example output ... status: message: 'Pod The node was low on resource: [MemoryPressure].' phase: Failed reason: Evicted 8.5. Configuring your cluster to place pods on overcommitted nodes In an overcommitted state, the sum of the container compute resource requests and limits exceeds the resources available on the system. For example, you might want to use overcommitment in development environments where a trade-off of guaranteed performance for capacity is acceptable. Containers can specify compute resource requests and limits. Requests are used for scheduling your container and provide a minimum service guarantee. Limits constrain the amount of compute resource that can be consumed on your node. The scheduler attempts to optimize the compute resource use across all nodes in your cluster. It places pods onto specific nodes, taking the pods' compute resource requests and nodes' available capacity into consideration. OpenShift Container Platform administrators can control the level of overcommit and manage container density on developer containers by using the ClusterResourceOverride Operator . Note In OpenShift Container Platform, you must enable cluster-level overcommit. Node overcommitment is enabled by default. See Disabling overcommitment for a node . 8.5.1. Resource requests and overcommitment For each compute resource, a container may specify a resource request and limit. Scheduling decisions are made based on the request to ensure that a node has enough capacity available to meet the requested value. If a container specifies limits, but omits requests, the requests are defaulted to the limits. A container is not able to exceed the specified limit on the node. The enforcement of limits is dependent upon the compute resource type. If a container makes no request or limit, the container is scheduled to a node with no resource guarantees. In practice, the container is able to consume as much of the specified resource as is available with the lowest local priority. In low resource situations, containers that specify no resource requests are given the lowest quality of service. Scheduling is based on resources requested, while quota and hard limits refer to resource limits, which can be set higher than requested resources. The difference between request and limit determines the level of overcommit; for instance, if a container is given a memory request of 1Gi and a memory limit of 2Gi, it is scheduled based on the 1Gi request being available on the node, but could use up to 2Gi; so it is 100% overcommitted. 8.5.2. Cluster-level overcommit using the Cluster Resource Override Operator The Cluster Resource Override Operator is an admission webhook that allows you to control the level of overcommit and manage container density across all the nodes in your cluster. The Operator controls how nodes in specific projects can exceed defined memory and CPU limits. The Operator modifies the ratio between the requests and limits that are set on developer containers. In conjunction with a per-project limit range that specifies limits and defaults, you can achieve the desired level of overcommit. You must install the Cluster Resource Override Operator by using the OpenShift Container Platform console or CLI as shown in the following sections. 
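Because the Operator only rewrites containers that have limits set, it is common to pair it with a per-project LimitRange that supplies default limits, so that every new container has a limit for the Operator to act on. The following is a minimal sketch of such a LimitRange ; the name and values are illustrative:

apiVersion: v1
kind: LimitRange
metadata:
  name: project-default-limits    # illustrative name
spec:
  limits:
  - type: Container
    default:                      # applied as the limit when a container does not specify one
      cpu: 500m
      memory: 512Mi

With defaults like these in place, the Operator derives the container requests from the resulting limits according to the percentages set in the ClusterResourceOverride custom resource shown below.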
After you deploy the Cluster Resource Override Operator, the Operator modifies all new pods in specific namespaces. The Operator does not edit pods that existed before you deployed the Operator. During the installation, you create a ClusterResourceOverride custom resource (CR), where you set the level of overcommit, as shown in the following example: apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4 # ... 1 The name must be cluster . 2 Optional. If a container memory limit has been specified or defaulted, the memory request is overridden to this percentage of the limit, between 1-100. The default is 50. 3 Optional. If a container CPU limit has been specified or defaulted, the CPU request is overridden to this percentage of the limit, between 1-100. The default is 25. 4 Optional. If a container memory limit has been specified or defaulted, the CPU limit is overridden to a percentage of the memory limit, if specified. Scaling 1Gi of RAM at 100 percent is equal to 1 CPU core. This is processed prior to overriding the CPU request (if configured). The default is 200. Note The Cluster Resource Override Operator overrides have no effect if limits have not been set on containers. Create a LimitRange object with default limits per individual project or configure limits in Pod specs for the overrides to apply. When configured, you can enable overrides on a per-project basis by applying the following label to the Namespace object for each project where you want the overrides to apply. For example, you can configure override so that infrastructure components are not subject to the overrides. apiVersion: v1 kind: Namespace metadata: # ... labels: clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: "true" # ... The Operator watches for the ClusterResourceOverride CR and ensures that the ClusterResourceOverride admission webhook is installed into the same namespace as the operator. For example, a pod has the following resources limits: apiVersion: v1 kind: Pod metadata: name: my-pod namespace: my-namespace # ... spec: containers: - name: hello-openshift image: openshift/hello-openshift resources: limits: memory: "512Mi" cpu: "2000m" # ... The Cluster Resource Override Operator intercepts the original pod request, then overrides the resources according to the configuration set in the ClusterResourceOverride object. apiVersion: v1 kind: Pod metadata: name: my-pod namespace: my-namespace # ... spec: containers: - image: openshift/hello-openshift name: hello-openshift resources: limits: cpu: "1" 1 memory: 512Mi requests: cpu: 250m 2 memory: 256Mi # ... 1 The CPU limit has been overridden to 1 because the limitCPUToMemoryPercent parameter is set to 200 in the ClusterResourceOverride object. As such, 200% of the memory limit, 512Mi in CPU terms, is 1 CPU core. 2 The CPU request is now 250m because the cpuRequestToLimit is set to 25 in the ClusterResourceOverride object. As such, 25% of the 1 CPU core is 250m. 8.5.2.1. Installing the Cluster Resource Override Operator using the web console You can use the OpenShift Container Platform web console to install the Cluster Resource Override Operator to help control overcommit in your cluster. Prerequisites The Cluster Resource Override Operator has no effect if limits have not been set on containers. 
You must specify default limits for a project using a LimitRange object or configure limits in Pod specs for the overrides to apply. Procedure To install the Cluster Resource Override Operator using the OpenShift Container Platform web console: In the OpenShift Container Platform web console, navigate to Home Projects Click Create Project . Specify clusterresourceoverride-operator as the name of the project. Click Create . Navigate to Operators OperatorHub . Choose ClusterResourceOverride Operator from the list of available Operators and click Install . On the Install Operator page, make sure A specific Namespace on the cluster is selected for Installation Mode . Make sure clusterresourceoverride-operator is selected for Installed Namespace . Select an Update Channel and Approval Strategy . Click Install . On the Installed Operators page, click ClusterResourceOverride . On the ClusterResourceOverride Operator details page, click Create ClusterResourceOverride . On the Create ClusterResourceOverride page, click YAML view and edit the YAML template to set the overcommit values as needed: apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4 # ... 1 The name must be cluster . 2 Optional. Specify the percentage to override the container memory limit, if used, between 1-100. The default is 50. 3 Optional. Specify the percentage to override the container CPU limit, if used, between 1-100. The default is 25. 4 Optional. Specify the percentage to override the container memory limit, if used. Scaling 1Gi of RAM at 100 percent is equal to 1 CPU core. This is processed prior to overriding the CPU request, if configured. The default is 200. Click Create . Check the current state of the admission webhook by checking the status of the cluster custom resource: On the ClusterResourceOverride Operator page, click cluster . On the ClusterResourceOverride Details page, click YAML . The mutatingWebhookConfigurationRef section appears when the webhook is called. apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"operator.autoscaling.openshift.io/v1","kind":"ClusterResourceOverride","metadata":{"annotations":{},"name":"cluster"},"spec":{"podResourceOverride":{"spec":{"cpuRequestToLimitPercent":25,"limitCPUToMemoryPercent":200,"memoryRequestToLimitPercent":50}}}} creationTimestamp: "2019-12-18T22:35:02Z" generation: 1 name: cluster resourceVersion: "127622" selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 status: # ... mutatingWebhookConfigurationRef: 1 apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration name: clusterresourceoverrides.admission.autoscaling.openshift.io resourceVersion: "127621" uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3 # ... 1 Reference to the ClusterResourceOverride admission webhook. 8.5.2.2. Installing the Cluster Resource Override Operator using the CLI You can use the OpenShift Container Platform CLI to install the Cluster Resource Override Operator to help control overcommit in your cluster. 
Prerequisites The Cluster Resource Override Operator has no effect if limits have not been set on containers. You must specify default limits for a project using a LimitRange object or configure limits in Pod specs for the overrides to apply. Procedure To install the Cluster Resource Override Operator using the CLI: Create a namespace for the Cluster Resource Override Operator: Create a Namespace object YAML file (for example, cro-namespace.yaml ) for the Cluster Resource Override Operator: apiVersion: v1 kind: Namespace metadata: name: clusterresourceoverride-operator Create the namespace: USD oc create -f <file-name>.yaml For example: USD oc create -f cro-namespace.yaml Create an Operator group: Create an OperatorGroup object YAML file (for example, cro-og.yaml) for the Cluster Resource Override Operator: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: clusterresourceoverride-operator namespace: clusterresourceoverride-operator spec: targetNamespaces: - clusterresourceoverride-operator Create the Operator Group: USD oc create -f <file-name>.yaml For example: USD oc create -f cro-og.yaml Create a subscription: Create a Subscription object YAML file (for example, cro-sub.yaml) for the Cluster Resource Override Operator: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: clusterresourceoverride namespace: clusterresourceoverride-operator spec: channel: "stable" name: clusterresourceoverride source: redhat-operators sourceNamespace: openshift-marketplace Create the subscription: USD oc create -f <file-name>.yaml For example: USD oc create -f cro-sub.yaml Create a ClusterResourceOverride custom resource (CR) object in the clusterresourceoverride-operator namespace: Change to the clusterresourceoverride-operator namespace. USD oc project clusterresourceoverride-operator Create a ClusterResourceOverride object YAML file (for example, cro-cr.yaml) for the Cluster Resource Override Operator: apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4 1 The name must be cluster . 2 Optional. Specify the percentage to override the container memory limit, if used, between 1-100. The default is 50. 3 Optional. Specify the percentage to override the container CPU limit, if used, between 1-100. The default is 25. 4 Optional. Specify the percentage to override the container memory limit, if used. Scaling 1Gi of RAM at 100 percent is equal to 1 CPU core. This is processed prior to overriding the CPU request, if configured. The default is 200. Create the ClusterResourceOverride object: USD oc create -f <file-name>.yaml For example: USD oc create -f cro-cr.yaml Verify the current state of the admission webhook by checking the status of the cluster custom resource. USD oc get clusterresourceoverride cluster -n clusterresourceoverride-operator -o yaml The mutatingWebhookConfigurationRef section appears when the webhook is called. 
Example output apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"operator.autoscaling.openshift.io/v1","kind":"ClusterResourceOverride","metadata":{"annotations":{},"name":"cluster"},"spec":{"podResourceOverride":{"spec":{"cpuRequestToLimitPercent":25,"limitCPUToMemoryPercent":200,"memoryRequestToLimitPercent":50}}}} creationTimestamp: "2019-12-18T22:35:02Z" generation: 1 name: cluster resourceVersion: "127622" selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 status: # ... mutatingWebhookConfigurationRef: 1 apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration name: clusterresourceoverrides.admission.autoscaling.openshift.io resourceVersion: "127621" uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3 # ... 1 Reference to the ClusterResourceOverride admission webhook. 8.5.2.3. Configuring cluster-level overcommit The Cluster Resource Override Operator requires a ClusterResourceOverride custom resource (CR) and a label for each project where you want the Operator to control overcommit. Prerequisites The Cluster Resource Override Operator has no effect if limits have not been set on containers. You must specify default limits for a project using a LimitRange object or configure limits in Pod specs for the overrides to apply. Procedure To modify cluster-level overcommit: Edit the ClusterResourceOverride CR: apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 1 cpuRequestToLimitPercent: 25 2 limitCPUToMemoryPercent: 200 3 # ... 1 Optional. Specify the percentage to override the container memory limit, if used, between 1-100. The default is 50. 2 Optional. Specify the percentage to override the container CPU limit, if used, between 1-100. The default is 25. 3 Optional. Specify the percentage to override the container memory limit, if used. Scaling 1Gi of RAM at 100 percent is equal to 1 CPU core. This is processed prior to overriding the CPU request, if configured. The default is 200. Ensure the following label has been added to the Namespace object for each project where you want the Cluster Resource Override Operator to control overcommit: apiVersion: v1 kind: Namespace metadata: # ... labels: clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: "true" 1 # ... 1 Add this label to each project. 8.5.3. Node-level overcommit You can use various ways to control overcommit on specific nodes, such as quality of service (QOS) guarantees, CPU limits, or reserve resources. You can also disable overcommit for specific nodes and specific projects. 8.5.3.1. Understanding compute resources and containers The node-enforced behavior for compute resources is specific to the resource type. 8.5.3.1.1. Understanding container CPU requests A container is guaranteed the amount of CPU it requests and is additionally able to consume excess CPU available on the node, up to any limit specified by the container. If multiple containers are attempting to use excess CPU, CPU time is distributed based on the amount of CPU requested by each container. 
For example, if one container requested 500m of CPU time and another container requested 250m of CPU time, then any extra CPU time available on the node is distributed among the containers in a 2:1 ratio. If a container specified a limit, it will be throttled not to use more CPU than the specified limit. CPU requests are enforced using the CFS shares support in the Linux kernel. By default, CPU limits are enforced using the CFS quota support in the Linux kernel over a 100ms measuring interval, though this can be disabled. 8.5.3.1.2. Understanding container memory requests A container is guaranteed the amount of memory it requests. A container can use more memory than requested, but once it exceeds its requested amount, it could be terminated in a low memory situation on the node. If a container uses less memory than requested, it will not be terminated unless system tasks or daemons need more memory than was accounted for in the node's resource reservation. If a container specifies a limit on memory, it is immediately terminated if it exceeds the limit amount. 8.5.3.2. Understanding overcommitment and quality of service classes A node is overcommitted when it has a pod scheduled that makes no request, or when the sum of limits across all pods on that node exceeds available machine capacity. In an overcommitted environment, it is possible that the pods on the node will attempt to use more compute resource than is available at any given point in time. When this occurs, the node must give priority to one pod over another. The facility used to make this decision is referred to as a Quality of Service (QoS) Class. A pod is designated as one of three QoS classes with decreasing order of priority: Table 8.19. Quality of Service Classes Priority Class Name Description 1 (highest) Guaranteed If limits and optionally requests are set (not equal to 0) for all resources and they are equal, then the pod is classified as Guaranteed . 2 Burstable If requests and optionally limits are set (not equal to 0) for all resources, and they are not equal, then the pod is classified as Burstable . 3 (lowest) BestEffort If requests and limits are not set for any of the resources, then the pod is classified as BestEffort . Memory is an incompressible resource, so in low memory situations, containers that have the lowest priority are terminated first: Guaranteed containers are considered top priority, and are guaranteed to only be terminated if they exceed their limits, or if the system is under memory pressure and there are no lower priority containers that can be evicted. Burstable containers under system memory pressure are more likely to be terminated once they exceed their requests and no other BestEffort containers exist. BestEffort containers are treated with the lowest priority. Processes in these containers are first to be terminated if the system runs out of memory. 8.5.3.2.1. Understanding how to reserve memory across quality of service tiers You can use the qos-reserved parameter to specify a percentage of memory to be reserved by a pod in a particular QoS level. This feature attempts to reserve requested resources to exclude pods from lower OoS classes from using resources requested by pods in higher QoS classes. OpenShift Container Platform uses the qos-reserved parameter as follows: A value of qos-reserved=memory=100% will prevent the Burstable and BestEffort QoS classes from consuming memory that was requested by a higher QoS class. 
This increases the risk of inducing OOM on BestEffort and Burstable workloads in favor of increasing memory resource guarantees for Guaranteed and Burstable workloads. A value of qos-reserved=memory=50% will allow the Burstable and BestEffort QoS classes to consume half of the memory requested by a higher QoS class. A value of qos-reserved=memory=0% will allow a Burstable and BestEffort QoS classes to consume up to the full node allocatable amount if available, but increases the risk that a Guaranteed workload will not have access to requested memory. This condition effectively disables this feature. 8.5.3.3. Understanding swap memory and QOS You can disable swap by default on your nodes to preserve quality of service (QOS) guarantees. Otherwise, physical resources on a node can oversubscribe, affecting the resource guarantees the Kubernetes scheduler makes during pod placement. For example, if two guaranteed pods have reached their memory limit, each container could start using swap memory. Eventually, if there is not enough swap space, processes in the pods can be terminated due to the system being oversubscribed. Failing to disable swap results in nodes not recognizing that they are experiencing MemoryPressure , resulting in pods not receiving the memory they made in their scheduling request. As a result, additional pods are placed on the node to further increase memory pressure, ultimately increasing your risk of experiencing a system out of memory (OOM) event. Important If swap is enabled, any out-of-resource handling eviction thresholds for available memory will not work as expected. Take advantage of out-of-resource handling to allow pods to be evicted from a node when it is under memory pressure, and rescheduled on an alternative node that has no such pressure. 8.5.3.4. Understanding nodes overcommitment In an overcommitted environment, it is important to properly configure your node to provide best system behavior. When the node starts, it ensures that the kernel tunable flags for memory management are set properly. The kernel should never fail memory allocations unless it runs out of physical memory. To ensure this behavior, OpenShift Container Platform configures the kernel to always overcommit memory by setting the vm.overcommit_memory parameter to 1 , overriding the default operating system setting. OpenShift Container Platform also configures the kernel not to panic when it runs out of memory by setting the vm.panic_on_oom parameter to 0 . A setting of 0 instructs the kernel to call oom_killer in an Out of Memory (OOM) condition, which kills processes based on priority You can view the current setting by running the following commands on your nodes: USD sysctl -a |grep commit Example output #... vm.overcommit_memory = 0 #... USD sysctl -a |grep panic Example output #... vm.panic_on_oom = 0 #... Note The above flags should already be set on nodes, and no further action is required. You can also perform the following configurations for each node: Disable or enforce CPU limits using CPU CFS quotas Reserve resources for system processes Reserve memory across quality of service tiers 8.5.3.5. Disabling or enforcing CPU limits using CPU CFS quotas Nodes by default enforce specified CPU limits using the Completely Fair Scheduler (CFS) quota support in the Linux kernel. If you disable CPU limit enforcement, it is important to understand the impact on your node: If a container has a CPU request, the request continues to be enforced by CFS shares in the Linux kernel. 
If a container does not have a CPU request, but does have a CPU limit, the CPU request defaults to the specified CPU limit, and is enforced by CFS shares in the Linux kernel. If a container has both a CPU request and limit, the CPU request is enforced by CFS shares in the Linux kernel, and the CPU limit has no impact on the node. Prerequisites Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure by entering the following command: USD oc edit machineconfigpool <name> For example: USD oc edit machineconfigpool worker Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: "2022-11-16T15:34:25Z" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: "" 1 name: worker 1 The label appears under Labels. Tip If the label is not present, add a key/value pair such as: USD oc label machineconfigpool worker custom-kubelet=small-pods Procedure Create a custom resource (CR) for your configuration change. Sample configuration for a disabling CPU limits apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: disable-cpu-units 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 2 kubeletConfig: cpuCfsQuota: false 3 1 Assign a name to CR. 2 Specify the label from the machine config pool. 3 Set the cpuCfsQuota parameter to false . Run the following command to create the CR: USD oc create -f <file_name>.yaml 8.5.3.6. Reserving resources for system processes To provide more reliable scheduling and minimize node resource overcommitment, each node can reserve a portion of its resources for use by system daemons that are required to run on your node for your cluster to function. In particular, it is recommended that you reserve resources for incompressible resources such as memory. Procedure To explicitly reserve resources for non-pod processes, allocate node resources by specifying resources available for scheduling. For more details, see Allocating Resources for Nodes. 8.5.3.7. Disabling overcommitment for a node When enabled, overcommitment can be disabled on each node. Procedure To disable overcommitment in a node run the following command on that node: USD sysctl -w vm.overcommit_memory=0 8.5.4. Project-level limits To help control overcommit, you can set per-project resource limit ranges, specifying memory and CPU limits and defaults for a project that overcommit cannot exceed. For information on project-level resource limits, see Additional resources. Alternatively, you can disable overcommitment for specific projects. 8.5.4.1. Disabling overcommitment for a project When enabled, overcommitment can be disabled per-project. For example, you can allow infrastructure components to be configured independently of overcommitment. Procedure To disable overcommitment in a project: Create or edit the namespace object file. Add the following annotation: apiVersion: v1 kind: Namespace metadata: annotations: quota.openshift.io/cluster-resource-override-enabled: "false" 1 # ... 1 Setting this annotation to false disables overcommit for this namespace. 8.5.5. Additional resources Setting deployment resources . Allocating resources for nodes . 8.6. Configuring the Linux cgroup version on your nodes By default, OpenShift Container Platform uses Linux control group version 1 (cgroup v1) in your cluster. 
You can switch to Linux control group version 2 (cgroup v2), if needed, by editing the node.config object. Enabling cgroup v2 in OpenShift Container Platform disables all cgroup version 1 controllers and hierarchies in your cluster. cgroup v2 is the current version of the Linux cgroup API. cgroup v2 offers several improvements over cgroup v1, including a unified hierarchy, safer sub-tree delegation, new features such as Pressure Stall Information , and enhanced resource management and isolation. However, cgroup v2 has different CPU, memory, and I/O management characteristics than cgroup v1. Therefore, some workloads might experience slight differences in memory or CPU usage on clusters that run cgroup v2. Note If you run third-party monitoring and security agents that depend on the cgroup file system, update the agents to a version that supports cgroup v2. If you have configured cgroup v2 and run cAdvisor as a stand-alone daemon set for monitoring pods and containers, update cAdvisor to v0.43.0 or later. If you deploy Java applications, use versions that fully support cgroup v2, such as the following packages: OpenJDK / HotSpot: jdk8u372, 11.0.16, 15 and later IBM Semeru Runtimes: jdk8u345-b01, 11.0.16.0, 17.0.4.0, 18.0.2.0 and later IBM SDK Java Technology Edition Version (IBM Java): 8.0.7.15 and later 8.6.1. Configuring Linux cgroup You can enable Linux control group version 1 (cgroup v1) or Linux control group version 2 (cgroup v2) by editing the node.config object. The default is cgroup v1. Note Currently, disabling CPU load balancing is not supported by cgroup v2. As a result, you might not get the desired behavior from performance profiles if you have cgroup v2 enabled. Enabling cgroup v2 is not recommended if you are using performance profiles. Prerequisites You have a running OpenShift Container Platform cluster that uses version 4.12 or later. You are logged in to the cluster as a user with administrative privileges. Procedure Enable cgroup v2 on nodes: Edit the node.config object: USD oc edit nodes.config/cluster Edit the spec.cgroupMode parameter: Example node.config object apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: "true" include.release.openshift.io/self-managed-high-availability: "true" include.release.openshift.io/single-node-developer: "true" release.openshift.io/create-only: "true" creationTimestamp: "2022-07-08T16:02:51Z" generation: 1 name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: "1865" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: cgroupMode: "v2" 1 ... 1 Specify v2 to enable cgroup v2 or v1 for cgroup v1. 
Verification Check the machine configs to see that the new machine configs were added: USD oc get mc Example output NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 97-master-generated-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-generated-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23d4317815a5f854bd3553d689cfe2e9 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 10s 1 rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-dcc7f1b92892d34db74d6832bcc9ccd4 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 10s 1 New machine configs are created, as expected. Check that the new kernelArguments were added to the new machine configs: USD oc describe mc <name> Example output for cgroup v1 apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 05-worker-kernelarg-selinuxpermissive spec: kernelArguments: systemd.unified_cgroup_hierarchy=0 1 systemd.legacy_systemd_cgroup_controller=1 2 1 Enables cgroup v1 in systemd. 2 Disables cgroup v2. Example output for cgroup v2 apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 05-worker-kernelarg-selinuxpermissive spec: kernelArguments: - systemd_unified_cgroup_hierarchy=1 1 - cgroup_no_v1="all" 2 - psi=1 3 1 Enables cgroup v2 in systemd. 2 Disables cgroup v1. 3 Enables the Linux Pressure Stall Information (PSI) feature. Check the nodes to see that scheduling on the nodes is disabled. This indicates that the change is being applied: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ci-ln-fm1qnwt-72292-99kt6-master-0 Ready,SchedulingDisabled master 58m v1.26.0 ci-ln-fm1qnwt-72292-99kt6-master-1 Ready master 58m v1.26.0 ci-ln-fm1qnwt-72292-99kt6-master-2 Ready master 58m v1.26.0 ci-ln-fm1qnwt-72292-99kt6-worker-a-h5gt4 Ready,SchedulingDisabled worker 48m v1.26.0 ci-ln-fm1qnwt-72292-99kt6-worker-b-7vtmd Ready worker 48m v1.26.0 ci-ln-fm1qnwt-72292-99kt6-worker-c-rhzkv Ready worker 48m v1.26.0 After a node returns to the Ready state, start a debug session for that node: USD oc debug node/<node_name> Set /host as the root directory within the debug shell: sh-4.4# chroot /host Check that the sys/fs/cgroup/cgroup2fs or sys/fs/cgroup/tmpfs file is present on your nodes: USD stat -c %T -f /sys/fs/cgroup Example output for cgroup v1 tmp2fs Example output for cgroup v2 cgroup2fs Additional resources OpenShift Container Platform installation overview 8.7. Enabling features using feature gates As an administrator, you can use feature gates to enable features that are not part of the default set of features. 8.7.1. 
Understanding feature gates You can use the FeatureGate custom resource (CR) to enable specific feature sets in your cluster. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. You can activate the following feature set by using the FeatureGate CR: TechPreviewNoUpgrade . This feature set is a subset of the current Technology Preview features. This feature set allows you to enable these Technology Preview features on test clusters, where you can fully test them, while leaving the features disabled on production clusters. Warning Enabling the TechPreviewNoUpgrade feature set on your cluster cannot be undone and prevents minor version updates. You should not enable this feature set on production clusters. The following Technology Preview features are enabled by this feature set: External cloud providers. Enables support for external cloud providers for clusters on vSphere, AWS, Azure, and GCP. Support for OpenStack is GA. This is an internal feature that most users do not need to interact with. ( ExternalCloudProvider ) Shared Resources CSI Driver and Build CSI Volumes in OpenShift Builds. Enables the Container Storage Interface (CSI). ( CSIDriverSharedResource ) CSI volumes. Enables CSI volume support for the OpenShift Container Platform build system. ( BuildCSIVolumes ) Swap memory on nodes. Enables swap memory use for OpenShift Container Platform workloads on a per-node basis. ( NodeSwap ) OpenStack Machine API Provider. This gate has no effect and is planned to be removed from this feature set in a future release. ( MachineAPIProviderOpenStack ) Insights Operator. Enables the InsightsDataGather CRD, which allows users to configure some Insights data gathering options. Pod topology spread constraints. Enables the matchLabelKeys parameter for pod topology constraints. The parameter is list of pod label keys to select the pods over which spreading will be calculated. ( MatchLabelKeysInPodTopologySpread ) Retroactive Default Storage Class. Enables OpenShift Container Platform to retroactively assign the default storage class to PVCs if there was no default storage class when the PVC was created.( RetroactiveDefaultStorageClass ) Pod disruption budget (PDB) unhealthy pod eviction policy. Enables support for specifying how unhealthy pods are considered for eviction when using PDBs. ( PDBUnhealthyPodEvictionPolicy ) Dynamic Resource Allocation API. Enables a new API for requesting and sharing resources between pods and containers. This is an internal feature that most users do not need to interact with. ( DynamicResourceAllocation ) Pod security admission enforcement. Enables the restricted enforcement mode for pod security admission. Instead of only logging a warning, pods are rejected if they violate pod security standards. ( OpenShiftPodSecurityAdmission ) For more information about the features activated by the TechPreviewNoUpgrade feature gate, see the following topics: Shared Resources CSI Driver and Build CSI Volumes in OpenShift Builds CSI inline ephemeral volumes Swap memory on nodes Disabling the Insights Operator gather operations Enabling the Insights Operator gather operations Managing machines with the Cluster API Controlling pod placement by using pod topology spread constraints Managing the default storage class Specifying the eviction policy for unhealthy pods Pod security admission enforcement . 8.7.2. 
Enabling feature sets at installation You can enable feature sets for all nodes in the cluster by editing the install-config.yaml file before you deploy the cluster. Prerequisites You have an install-config.yaml file. Procedure Use the featureSet parameter to specify the name of the feature set you want to enable, such as TechPreviewNoUpgrade : Warning Enabling the TechPreviewNoUpgrade feature set on your cluster cannot be undone and prevents minor version updates. You should not enable this feature set on production clusters. Sample install-config.yaml file with an enabled feature set compute: - hyperthreading: Enabled name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 metadataService: authentication: Optional type: c5.4xlarge zones: - us-west-2c replicas: 3 featureSet: TechPreviewNoUpgrade Save the file and reference it when using the installation program to deploy the cluster. Verification You can verify that the feature gates are enabled by looking at the kubelet.conf file on a node after the nodes return to the ready state. From the Administrator perspective in the web console, navigate to Compute Nodes . Select a node. In the Node details page, click Terminal . In the terminal window, change your root directory to /host : sh-4.2# chroot /host View the kubelet.conf file: sh-4.2# cat /etc/kubernetes/kubelet.conf Sample output # ... featureGates: InsightsOperatorPullingSCA: true, LegacyNodeRoleBehavior: false # ... The features that are listed as true are enabled on your cluster. Note The features listed vary depending upon the OpenShift Container Platform version. 8.7.3. Enabling feature sets using the web console You can use the OpenShift Container Platform web console to enable feature sets for all of the nodes in a cluster by editing the FeatureGate custom resource (CR). Procedure To enable feature sets: In the OpenShift Container Platform web console, switch to the Administration Custom Resource Definitions page. On the Custom Resource Definitions page, click FeatureGate . On the Custom Resource Definition Details page, click the Instances tab. Click the cluster feature gate, then click the YAML tab. Edit the cluster instance to add specific feature sets: Warning Enabling the TechPreviewNoUpgrade feature set on your cluster cannot be undone and prevents minor version updates. You should not enable this feature set on production clusters. Sample Feature Gate custom resource apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster 1 # ... spec: featureSet: TechPreviewNoUpgrade 2 1 The name of the FeatureGate CR must be cluster . 2 Add the feature set that you want to enable: TechPreviewNoUpgrade enables specific Technology Preview features. After you save the changes, new machine configs are created, the machine config pools are updated, and scheduling on each node is disabled while the change is being applied. Verification You can verify that the feature gates are enabled by looking at the kubelet.conf file on a node after the nodes return to the ready state. From the Administrator perspective in the web console, navigate to Compute Nodes . Select a node. In the Node details page, click Terminal . In the terminal window, change your root directory to /host : sh-4.2# chroot /host View the kubelet.conf file: sh-4.2# cat /etc/kubernetes/kubelet.conf Sample output # ... featureGates: InsightsOperatorPullingSCA: true, LegacyNodeRoleBehavior: false # ... The features that are listed as true are enabled on your cluster. 
Note The features listed vary depending upon the OpenShift Container Platform version. 8.7.4. Enabling feature sets using the CLI You can use the OpenShift CLI ( oc ) to enable feature sets for all of the nodes in a cluster by editing the FeatureGate custom resource (CR). Prerequisites You have installed the OpenShift CLI ( oc ). Procedure To enable feature sets: Edit the FeatureGate CR named cluster : USD oc edit featuregate cluster Warning Enabling the TechPreviewNoUpgrade feature set on your cluster cannot be undone and prevents minor version updates. You should not enable this feature set on production clusters. Sample FeatureGate custom resource apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster 1 # ... spec: featureSet: TechPreviewNoUpgrade 2 1 The name of the FeatureGate CR must be cluster . 2 Add the feature set that you want to enable: TechPreviewNoUpgrade enables specific Technology Preview features. After you save the changes, new machine configs are created, the machine config pools are updated, and scheduling on each node is disabled while the change is being applied. Verification You can verify that the feature gates are enabled by looking at the kubelet.conf file on a node after the nodes return to the ready state. From the Administrator perspective in the web console, navigate to Compute Nodes . Select a node. In the Node details page, click Terminal . In the terminal window, change your root directory to /host : sh-4.2# chroot /host View the kubelet.conf file: sh-4.2# cat /etc/kubernetes/kubelet.conf Sample output # ... featureGates: InsightsOperatorPullingSCA: true, LegacyNodeRoleBehavior: false # ... The features that are listed as true are enabled on your cluster. Note The features listed vary depending upon the OpenShift Container Platform version. 8.8. Improving cluster stability in high latency environments using worker latency profiles If the cluster administrator has performed latency tests for platform verification, they might find that they need to adjust the operation of the cluster to ensure stability in cases of high latency. The cluster administrator needs to change only one parameter, recorded in a file, which controls four parameters affecting how supervisory processes read status and interpret the health of the cluster. Changing only that one parameter provides cluster tuning in an easy, supportable manner. The Kubelet process provides the starting point for monitoring cluster health. The Kubelet sets status values for all nodes in the OpenShift Container Platform cluster. The Kubernetes Controller Manager ( kube controller ) reads the status values every 10 seconds, by default. If the kube controller cannot read a node status value, it loses contact with that node after a configured period. The default behavior is: The node controller on the control plane updates the node health to Unhealthy and marks the node Ready condition as Unknown . In response, the scheduler stops scheduling pods to that node. The Node Lifecycle Controller adds a node.kubernetes.io/unreachable taint with a NoExecute effect to the node and schedules any pods on the node for eviction after five minutes, by default. This behavior can cause problems if your network is prone to latency issues, especially if you have nodes at the network edge. In some cases, the Kubernetes Controller Manager might not receive an update from a healthy node due to network latency. Pods might then be evicted from the node even though the node is healthy.
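To observe this default behavior on a running cluster, you can inspect the conditions and taints of an affected node. A minimal sketch, where <node_name> is a placeholder for one of your worker nodes:
oc get node <node_name> -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
oc describe node/<node_name> | grep Taints
A node that the Kubernetes Controller Manager has lost contact with reports Ready=Unknown and, once the eviction logic engages, carries the node.kubernetes.io/unreachable taint described above.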
To avoid this problem, you can use worker latency profiles to adjust how frequently the Kubelet reports status and how long the Kubernetes Controller Manager waits for status updates before taking action. These adjustments help to ensure that your cluster runs properly if network latency between the control plane and the worker nodes is not optimal. The worker latency profiles are three pre-defined sets of parameters with carefully tuned values that control the reaction of the cluster to increased latency, so you do not need to find the best values experimentally. You can configure worker latency profiles when installing a cluster or at any time you notice increased latency in your cluster network. 8.8.1. Understanding worker latency profiles Each worker latency profile specifies values for four carefully tuned parameters: node-status-update-frequency , node-monitor-grace-period , default-not-ready-toleration-seconds and default-unreachable-toleration-seconds . These parameters use values that allow you to control the reaction of the cluster to latency issues without needing to determine the best values using manual methods. Important Setting these parameters manually is not supported. Incorrect parameter settings adversely affect cluster stability. All worker latency profiles configure the following parameters: node-status-update-frequency Specifies how often the kubelet posts node status to the API server. node-monitor-grace-period Specifies the amount of time in seconds that the Kubernetes Controller Manager waits for an update from a kubelet before marking the node unhealthy and adding the node.kubernetes.io/not-ready or node.kubernetes.io/unreachable taint to the node. default-not-ready-toleration-seconds Specifies the amount of time in seconds after marking a node unhealthy that the Kube API Server Operator waits before evicting pods from that node. default-unreachable-toleration-seconds Specifies the amount of time in seconds after marking a node unreachable that the Kube API Server Operator waits before evicting pods from that node. The following Operators monitor the changes to the worker latency profiles and respond accordingly: The Machine Config Operator (MCO) updates the node-status-update-frequency parameter on the worker nodes. The Kubernetes Controller Manager updates the node-monitor-grace-period parameter on the control plane nodes. The Kubernetes API Server Operator updates the default-not-ready-toleration-seconds and default-unreachable-toleration-seconds parameters on the control plane nodes. Although the default configuration works in most cases, OpenShift Container Platform offers two other worker latency profiles for situations where the network is experiencing higher latency than usual. The three worker latency profiles are described in the following sections: Default worker latency profile With the Default profile, each Kubelet updates its status every 10 seconds ( node-status-update-frequency ). The Kubernetes Controller Manager checks the Kubelet status every 5 seconds and waits 40 seconds ( node-monitor-grace-period ) for a status update from the Kubelet before considering the Kubelet unhealthy. If no status is made available to the Kubernetes Controller Manager, it then marks the node with the node.kubernetes.io/not-ready or node.kubernetes.io/unreachable taint and evicts the pods on that node.
If a pod on that node has a toleration for the NoExecute taint, the pod remains on the node for the period specified by tolerationSeconds . If the pod has no such toleration, it is evicted in 300 seconds (the default-not-ready-toleration-seconds and default-unreachable-toleration-seconds settings of the Kube API Server ). Profile Component Parameter Value Default kubelet node-status-update-frequency 10s Kubernetes Controller Manager node-monitor-grace-period 40s Kubernetes API Server Operator default-not-ready-toleration-seconds 300s Kubernetes API Server Operator default-unreachable-toleration-seconds 300s Medium worker latency profile Use the MediumUpdateAverageReaction profile if the network latency is slightly higher than usual. The MediumUpdateAverageReaction profile reduces the frequency of kubelet updates to 20 seconds and changes the period that the Kubernetes Controller Manager waits for those updates to 2 minutes. The pod eviction period for a pod on that node is reduced to 60 seconds. If the pod has the tolerationSeconds parameter, the eviction waits for the period specified by that parameter. The Kubernetes Controller Manager waits for 2 minutes to consider a node unhealthy. In another minute, the eviction process starts. Profile Component Parameter Value MediumUpdateAverageReaction kubelet node-status-update-frequency 20s Kubernetes Controller Manager node-monitor-grace-period 2m Kubernetes API Server Operator default-not-ready-toleration-seconds 60s Kubernetes API Server Operator default-unreachable-toleration-seconds 60s Low worker latency profile Use the LowUpdateSlowReaction profile if the network latency is extremely high. The LowUpdateSlowReaction profile reduces the frequency of kubelet updates to 1 minute and changes the period that the Kubernetes Controller Manager waits for those updates to 5 minutes. The pod eviction period for a pod on that node is reduced to 60 seconds. If the pod has the tolerationSeconds parameter, the eviction waits for the period specified by that parameter. The Kubernetes Controller Manager waits for 5 minutes to consider a node unhealthy. In another minute, the eviction process starts. Profile Component Parameter Value LowUpdateSlowReaction kubelet node-status-update-frequency 1m Kubernetes Controller Manager node-monitor-grace-period 5m Kubernetes API Server Operator default-not-ready-toleration-seconds 60s Kubernetes API Server Operator default-unreachable-toleration-seconds 60s 8.8.2. Using and changing worker latency profiles To change a worker latency profile to deal with network latency, edit the node.config object to add the name of the profile. You can change the profile at any time as latency increases or decreases. You must move one worker latency profile at a time. For example, you cannot move directly from the Default profile to the LowUpdateSlowReaction worker latency profile. You must move from the Default worker latency profile to the MediumUpdateAverageReaction profile first, then to LowUpdateSlowReaction . Similarly, when returning to the Default profile, you must move from the low profile to the medium profile first, then to Default . Note You can also configure worker latency profiles upon installing an OpenShift Container Platform cluster.
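Before or after following the procedure below, you can check which profile is currently in effect, and you can apply the change non-interactively instead of using oc edit . These commands are a minimal sketch; an empty result from the first command means that no profile is set explicitly, so the Default values shown above apply:
oc get nodes.config/cluster -o jsonpath='{.spec.workerLatencyProfile}{"\n"}'
oc patch nodes.config/cluster --type merge -p '{"spec":{"workerLatencyProfile":"MediumUpdateAverageReaction"}}'
The patch example moves from the Default profile to MediumUpdateAverageReaction , which respects the one-profile-at-a-time rule described above.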
Procedure To move from the default worker latency profile: Move to the medium worker latency profile: Edit the node.config object: USD oc edit nodes.config/cluster Add spec.workerLatencyProfile: MediumUpdateAverageReaction : Example node.config object apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: "true" include.release.openshift.io/self-managed-high-availability: "true" include.release.openshift.io/single-node-developer: "true" release.openshift.io/create-only: "true" creationTimestamp: "2022-07-08T16:02:51Z" generation: 1 name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: "1865" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: workerLatencyProfile: MediumUpdateAverageReaction 1 # ... 1 Specifies the medium worker latency policy. Scheduling on each worker node is disabled as the change is being applied. Optional: Move to the low worker latency profile: Edit the node.config object: USD oc edit nodes.config/cluster Change the spec.workerLatencyProfile value to LowUpdateSlowReaction : Example node.config object apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: "true" include.release.openshift.io/self-managed-high-availability: "true" include.release.openshift.io/single-node-developer: "true" release.openshift.io/create-only: "true" creationTimestamp: "2022-07-08T16:02:51Z" generation: 1 name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: "1865" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: workerLatencyProfile: LowUpdateSlowReaction 1 # ... 1 Specifies use of the low worker latency policy. Scheduling on each worker node is disabled as the change is being applied. Verification When all nodes return to the Ready condition, you can use the following command to look in the Kubernetes Controller Manager to ensure it was applied: USD oc get KubeControllerManager -o yaml | grep -i workerlatency -A 5 -B 5 Example output # ... - lastTransitionTime: "2022-07-11T19:47:10Z" reason: ProfileUpdated status: "False" type: WorkerLatencyProfileProgressing - lastTransitionTime: "2022-07-11T19:47:10Z" 1 message: all static pod revision(s) have updated latency profile reason: ProfileUpdated status: "True" type: WorkerLatencyProfileComplete - lastTransitionTime: "2022-07-11T19:20:11Z" reason: AsExpected status: "False" type: WorkerLatencyProfileDegraded - lastTransitionTime: "2022-07-11T19:20:36Z" status: "False" # ... 1 Specifies that the profile is applied and active. To change the medium profile to default or change the default to medium, edit the node.config object and set the spec.workerLatencyProfile parameter to the appropriate value. | [
"oc get events [-n <project>] 1",
"oc get events -n openshift-config",
"LAST SEEN TYPE REASON OBJECT MESSAGE 97m Normal Scheduled pod/dapi-env-test-pod Successfully assigned openshift-config/dapi-env-test-pod to ip-10-0-171-202.ec2.internal 97m Normal Pulling pod/dapi-env-test-pod pulling image \"gcr.io/google_containers/busybox\" 97m Normal Pulled pod/dapi-env-test-pod Successfully pulled image \"gcr.io/google_containers/busybox\" 97m Normal Created pod/dapi-env-test-pod Created container 9m5s Warning FailedCreatePodSandBox pod/dapi-volume-test-pod Failed create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dapi-volume-test-pod_openshift-config_6bc60c1f-452e-11e9-9140-0eec59c23068_0(748c7a40db3d08c07fb4f9eba774bd5effe5f0d5090a242432a73eee66ba9e22): Multus: Err adding pod to network \"openshift-sdn\": cannot set \"openshift-sdn\" ifname to \"eth0\": no netns: failed to Statfs \"/proc/33366/ns/net\": no such file or directory 8m31s Normal Scheduled pod/dapi-volume-test-pod Successfully assigned openshift-config/dapi-volume-test-pod to ip-10-0-171-202.ec2.internal",
"apiVersion: v1 kind: Pod metadata: name: small-pod labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v4 imagePullPolicy: Always resources: limits: cpu: 150m memory: 100Mi requests: cpu: 150m memory: 100Mi",
"oc create -f <file_name>.yaml",
"oc create -f pod-spec.yaml",
"podman login registry.redhat.io",
"podman pull registry.redhat.io/openshift4/ose-cluster-capacity",
"podman run -v USDHOME/.kube:/kube:Z -v USD(pwd):/cc:Z ose-cluster-capacity /bin/cluster-capacity --kubeconfig /kube/config --<pod_spec>.yaml /cc/<pod_spec>.yaml --verbose",
"small-pod pod requirements: - CPU: 150m - Memory: 100Mi The cluster can schedule 88 instance(s) of the pod small-pod. Termination reason: Unschedulable: 0/5 nodes are available: 2 Insufficient cpu, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate. Pod distribution among nodes: small-pod - 192.168.124.214: 45 instance(s) - 192.168.124.120: 43 instance(s)",
"kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: cluster-capacity-role rules: - apiGroups: [\"\"] resources: [\"pods\", \"nodes\", \"persistentvolumeclaims\", \"persistentvolumes\", \"services\", \"replicationcontrollers\"] verbs: [\"get\", \"watch\", \"list\"] - apiGroups: [\"apps\"] resources: [\"replicasets\", \"statefulsets\"] verbs: [\"get\", \"watch\", \"list\"] - apiGroups: [\"policy\"] resources: [\"poddisruptionbudgets\"] verbs: [\"get\", \"watch\", \"list\"] - apiGroups: [\"storage.k8s.io\"] resources: [\"storageclasses\"] verbs: [\"get\", \"watch\", \"list\"]",
"oc create -f <file_name>.yaml",
"oc create sa cluster-capacity-sa",
"oc create sa cluster-capacity-sa -n default",
"oc adm policy add-cluster-role-to-user cluster-capacity-role system:serviceaccount:<namespace>:cluster-capacity-sa",
"apiVersion: v1 kind: Pod metadata: name: small-pod labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v4 imagePullPolicy: Always resources: limits: cpu: 150m memory: 100Mi requests: cpu: 150m memory: 100Mi",
"oc create -f <file_name>.yaml",
"oc create -f pod.yaml",
"oc create configmap cluster-capacity-configmap --from-file=pod.yaml=pod.yaml",
"apiVersion: batch/v1 kind: Job metadata: name: cluster-capacity-job spec: parallelism: 1 completions: 1 template: metadata: name: cluster-capacity-pod spec: containers: - name: cluster-capacity image: openshift/origin-cluster-capacity imagePullPolicy: \"Always\" volumeMounts: - mountPath: /test-pod name: test-volume env: - name: CC_INCLUSTER 1 value: \"true\" command: - \"/bin/sh\" - \"-ec\" - | /bin/cluster-capacity --podspec=/test-pod/pod.yaml --verbose restartPolicy: \"Never\" serviceAccountName: cluster-capacity-sa volumes: - name: test-volume configMap: name: cluster-capacity-configmap",
"oc create -f cluster-capacity-job.yaml",
"oc logs jobs/cluster-capacity-job",
"small-pod pod requirements: - CPU: 150m - Memory: 100Mi The cluster can schedule 52 instance(s) of the pod small-pod. Termination reason: Unschedulable: No nodes are available that match all of the following predicates:: Insufficient cpu (2). Pod distribution among nodes: small-pod - 192.168.124.214: 26 instance(s) - 192.168.124.120: 26 instance(s)",
"apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" spec: limits: - type: \"Container\" max: cpu: \"2\" memory: \"1Gi\" min: cpu: \"100m\" memory: \"4Mi\" default: cpu: \"300m\" memory: \"200Mi\" defaultRequest: cpu: \"200m\" memory: \"100Mi\" maxLimitRequestRatio: cpu: \"10\"",
"apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: \"Container\" max: cpu: \"2\" 2 memory: \"1Gi\" 3 min: cpu: \"100m\" 4 memory: \"4Mi\" 5 default: cpu: \"300m\" 6 memory: \"200Mi\" 7 defaultRequest: cpu: \"200m\" 8 memory: \"100Mi\" 9 maxLimitRequestRatio: cpu: \"10\" 10",
"apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: \"Pod\" max: cpu: \"2\" 2 memory: \"1Gi\" 3 min: cpu: \"200m\" 4 memory: \"6Mi\" 5 maxLimitRequestRatio: cpu: \"10\" 6",
"apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: openshift.io/Image max: storage: 1Gi 2",
"apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: openshift.io/ImageStream max: openshift.io/image-tags: 20 2 openshift.io/images: 30 3",
"apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: \"PersistentVolumeClaim\" min: storage: \"2Gi\" 2 max: storage: \"50Gi\" 3",
"apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: \"Pod\" 2 max: cpu: \"2\" memory: \"1Gi\" min: cpu: \"200m\" memory: \"6Mi\" - type: \"Container\" 3 max: cpu: \"2\" memory: \"1Gi\" min: cpu: \"100m\" memory: \"4Mi\" default: 4 cpu: \"300m\" memory: \"200Mi\" defaultRequest: 5 cpu: \"200m\" memory: \"100Mi\" maxLimitRequestRatio: 6 cpu: \"10\" - type: openshift.io/Image 7 max: storage: 1Gi - type: openshift.io/ImageStream 8 max: openshift.io/image-tags: 20 openshift.io/images: 30 - type: \"PersistentVolumeClaim\" 9 min: storage: \"2Gi\" max: storage: \"50Gi\"",
"oc create -f <limit_range_file> -n <project> 1",
"oc get limits -n demoproject",
"NAME CREATED AT resource-limits 2020-07-15T17:14:23Z",
"oc describe limits resource-limits -n demoproject",
"Name: resource-limits Namespace: demoproject Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio ---- -------- --- --- --------------- ------------- ----------------------- Pod cpu 200m 2 - - - Pod memory 6Mi 1Gi - - - Container cpu 100m 2 200m 300m 10 Container memory 4Mi 1Gi 100Mi 200Mi - openshift.io/Image storage - 1Gi - - - openshift.io/ImageStream openshift.io/image - 12 - - - openshift.io/ImageStream openshift.io/image-tags - 10 - - - PersistentVolumeClaim storage - 50Gi - - -",
"oc delete limits <limit_name>",
"-XX:+UseParallelGC -XX:MinHeapFreeRatio=5 -XX:MaxHeapFreeRatio=10 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90.",
"JAVA_TOOL_OPTIONS=\"-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -Dsun.zip.disableMemoryMapping=true\"",
"apiVersion: v1 kind: Pod metadata: name: test spec: containers: - name: test image: fedora:latest command: - sleep - \"3600\" env: - name: MEMORY_REQUEST 1 valueFrom: resourceFieldRef: containerName: test resource: requests.memory - name: MEMORY_LIMIT 2 valueFrom: resourceFieldRef: containerName: test resource: limits.memory resources: requests: memory: 384Mi limits: memory: 512Mi",
"oc create -f <file-name>.yaml",
"oc rsh test",
"env | grep MEMORY | sort",
"MEMORY_LIMIT=536870912 MEMORY_REQUEST=402653184",
"oc rsh test",
"grep '^oom_kill ' /sys/fs/cgroup/memory/memory.oom_control",
"oom_kill 0",
"sed -e '' </dev/zero",
"Killed",
"echo USD?",
"137",
"grep '^oom_kill ' /sys/fs/cgroup/memory/memory.oom_control",
"oom_kill 1",
"oc get pod test",
"NAME READY STATUS RESTARTS AGE test 0/1 OOMKilled 0 1m",
"oc get pod test -o yaml",
"status: containerStatuses: - name: test ready: false restartCount: 0 state: terminated: exitCode: 137 reason: OOMKilled phase: Failed",
"oc get pod test -o yaml",
"status: containerStatuses: - name: test ready: true restartCount: 1 lastState: terminated: exitCode: 137 reason: OOMKilled state: running: phase: Running",
"oc get pod test",
"NAME READY STATUS RESTARTS AGE test 0/1 Evicted 0 1m",
"oc get pod test -o yaml",
"status: message: 'Pod The node was low on resource: [MemoryPressure].' phase: Failed reason: Evicted",
"apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4",
"apiVersion: v1 kind: Namespace metadata: labels: clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: \"true\"",
"apiVersion: v1 kind: Pod metadata: name: my-pod namespace: my-namespace spec: containers: - name: hello-openshift image: openshift/hello-openshift resources: limits: memory: \"512Mi\" cpu: \"2000m\"",
"apiVersion: v1 kind: Pod metadata: name: my-pod namespace: my-namespace spec: containers: - image: openshift/hello-openshift name: hello-openshift resources: limits: cpu: \"1\" 1 memory: 512Mi requests: cpu: 250m 2 memory: 256Mi",
"apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4",
"apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"operator.autoscaling.openshift.io/v1\",\"kind\":\"ClusterResourceOverride\",\"metadata\":{\"annotations\":{},\"name\":\"cluster\"},\"spec\":{\"podResourceOverride\":{\"spec\":{\"cpuRequestToLimitPercent\":25,\"limitCPUToMemoryPercent\":200,\"memoryRequestToLimitPercent\":50}}}} creationTimestamp: \"2019-12-18T22:35:02Z\" generation: 1 name: cluster resourceVersion: \"127622\" selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 status: mutatingWebhookConfigurationRef: 1 apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration name: clusterresourceoverrides.admission.autoscaling.openshift.io resourceVersion: \"127621\" uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3",
"apiVersion: v1 kind: Namespace metadata: name: clusterresourceoverride-operator",
"oc create -f <file-name>.yaml",
"oc create -f cro-namespace.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: clusterresourceoverride-operator namespace: clusterresourceoverride-operator spec: targetNamespaces: - clusterresourceoverride-operator",
"oc create -f <file-name>.yaml",
"oc create -f cro-og.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: clusterresourceoverride namespace: clusterresourceoverride-operator spec: channel: \"stable\" name: clusterresourceoverride source: redhat-operators sourceNamespace: openshift-marketplace",
"oc create -f <file-name>.yaml",
"oc create -f cro-sub.yaml",
"oc project clusterresourceoverride-operator",
"apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4",
"oc create -f <file-name>.yaml",
"oc create -f cro-cr.yaml",
"oc get clusterresourceoverride cluster -n clusterresourceoverride-operator -o yaml",
"apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"operator.autoscaling.openshift.io/v1\",\"kind\":\"ClusterResourceOverride\",\"metadata\":{\"annotations\":{},\"name\":\"cluster\"},\"spec\":{\"podResourceOverride\":{\"spec\":{\"cpuRequestToLimitPercent\":25,\"limitCPUToMemoryPercent\":200,\"memoryRequestToLimitPercent\":50}}}} creationTimestamp: \"2019-12-18T22:35:02Z\" generation: 1 name: cluster resourceVersion: \"127622\" selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 status: mutatingWebhookConfigurationRef: 1 apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration name: clusterresourceoverrides.admission.autoscaling.openshift.io resourceVersion: \"127621\" uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3",
"apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 1 cpuRequestToLimitPercent: 25 2 limitCPUToMemoryPercent: 200 3",
"apiVersion: v1 kind: Namespace metadata: labels: clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: \"true\" 1",
"sysctl -a |grep commit",
"# vm.overcommit_memory = 0 #",
"sysctl -a |grep panic",
"# vm.panic_on_oom = 0 #",
"oc edit machineconfigpool <name>",
"oc edit machineconfigpool worker",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker",
"oc label machineconfigpool worker custom-kubelet=small-pods",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: disable-cpu-units 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 2 kubeletConfig: cpuCfsQuota: false 3",
"oc create -f <file_name>.yaml",
"sysctl -w vm.overcommit_memory=0",
"apiVersion: v1 kind: Namespace metadata: annotations: quota.openshift.io/cluster-resource-override-enabled: \"false\" 1",
"oc edit nodes.config/cluster",
"apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: \"true\" include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" release.openshift.io/create-only: \"true\" creationTimestamp: \"2022-07-08T16:02:51Z\" generation: 1 name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: \"1865\" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: cgroupMode: \"v2\" 1",
"oc get mc",
"NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 97-master-generated-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-generated-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23d4317815a5f854bd3553d689cfe2e9 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 10s 1 rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-dcc7f1b92892d34db74d6832bcc9ccd4 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 10s",
"oc describe mc <name>",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 05-worker-kernelarg-selinuxpermissive spec: kernelArguments: systemd.unified_cgroup_hierarchy=0 1 systemd.legacy_systemd_cgroup_controller=1 2",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 05-worker-kernelarg-selinuxpermissive spec: kernelArguments: - systemd_unified_cgroup_hierarchy=1 1 - cgroup_no_v1=\"all\" 2 - psi=1 3",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ci-ln-fm1qnwt-72292-99kt6-master-0 Ready,SchedulingDisabled master 58m v1.26.0 ci-ln-fm1qnwt-72292-99kt6-master-1 Ready master 58m v1.26.0 ci-ln-fm1qnwt-72292-99kt6-master-2 Ready master 58m v1.26.0 ci-ln-fm1qnwt-72292-99kt6-worker-a-h5gt4 Ready,SchedulingDisabled worker 48m v1.26.0 ci-ln-fm1qnwt-72292-99kt6-worker-b-7vtmd Ready worker 48m v1.26.0 ci-ln-fm1qnwt-72292-99kt6-worker-c-rhzkv Ready worker 48m v1.26.0",
"oc debug node/<node_name>",
"sh-4.4# chroot /host",
"stat -c %T -f /sys/fs/cgroup",
"tmp2fs",
"cgroup2fs",
"compute: - hyperthreading: Enabled name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 metadataService: authentication: Optional type: c5.4xlarge zones: - us-west-2c replicas: 3 featureSet: TechPreviewNoUpgrade",
"sh-4.2# chroot /host",
"sh-4.2# cat /etc/kubernetes/kubelet.conf",
"featureGates: InsightsOperatorPullingSCA: true, LegacyNodeRoleBehavior: false",
"apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster 1 spec: featureSet: TechPreviewNoUpgrade 2",
"sh-4.2# chroot /host",
"sh-4.2# cat /etc/kubernetes/kubelet.conf",
"featureGates: InsightsOperatorPullingSCA: true, LegacyNodeRoleBehavior: false",
"oc edit featuregate cluster",
"apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster 1 spec: featureSet: TechPreviewNoUpgrade 2",
"sh-4.2# chroot /host",
"sh-4.2# cat /etc/kubernetes/kubelet.conf",
"featureGates: InsightsOperatorPullingSCA: true, LegacyNodeRoleBehavior: false",
"oc edit nodes.config/cluster",
"apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: \"true\" include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" release.openshift.io/create-only: \"true\" creationTimestamp: \"2022-07-08T16:02:51Z\" generation: 1 name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: \"1865\" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: workerLatencyProfile: MediumUpdateAverageReaction 1",
"oc edit nodes.config/cluster",
"apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: \"true\" include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" release.openshift.io/create-only: \"true\" creationTimestamp: \"2022-07-08T16:02:51Z\" generation: 1 name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: \"1865\" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: workerLatencyProfile: LowUpdateSlowReaction 1",
"oc get KubeControllerManager -o yaml | grep -i workerlatency -A 5 -B 5",
"- lastTransitionTime: \"2022-07-11T19:47:10Z\" reason: ProfileUpdated status: \"False\" type: WorkerLatencyProfileProgressing - lastTransitionTime: \"2022-07-11T19:47:10Z\" 1 message: all static pod revision(s) have updated latency profile reason: ProfileUpdated status: \"True\" type: WorkerLatencyProfileComplete - lastTransitionTime: \"2022-07-11T19:20:11Z\" reason: AsExpected status: \"False\" type: WorkerLatencyProfileDegraded - lastTransitionTime: \"2022-07-11T19:20:36Z\" status: \"False\""
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/nodes/working-with-clusters |
Project APIs | Project APIs OpenShift Container Platform 4.17 Reference guide for project APIs Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html-single/project_apis/index |
Chapter 4. Installing Hosts for Red Hat Virtualization | Chapter 4. Installing Hosts for Red Hat Virtualization Red Hat Virtualization supports two types of hosts: Red Hat Virtualization Hosts (RHVH) and Red Hat Enterprise Linux hosts . Depending on your environment, you may want to use one type only, or both. At least two hosts are required for features such as migration and high availability. See Section 4.3, "Recommended Practices for Configuring Host Networks" for networking information. Important SELinux is in enforcing mode upon installation. To verify, run getenforce . SELinux must be in enforcing mode on all hosts and Managers for your Red Hat Virtualization environment to be supported. Table 4.1. Host Types Host Type Other Names Description Red Hat Virtualization Host RHVH, thin host This is a minimal operating system based on Red Hat Enterprise Linux. It is distributed as an ISO file from the Customer Portal and contains only the packages required for the machine to act as a host. Red Hat Enterprise Linux host RHEL host, thick host Red Hat Enterprise Linux systems with the appropriate subscriptions attached can be used as hosts. Host Compatibility When you create a new data center, you can set the compatibility version. Select the compatibility version that suits all the hosts in the data center. Once set, version regression is not allowed. For a fresh Red Hat Virtualization installation, the latest compatibility version is set in the default data center and default cluster; to use an earlier compatibility version, you must create additional data centers and clusters. For more information about compatibility versions see Red Hat Virtualization Manager Compatibility in Red Hat Virtualization Life Cycle . 4.1. Red Hat Virtualization Hosts 4.1.1. Installing Red Hat Virtualization Hosts Red Hat Virtualization Host (RHVH) is a minimal operating system based on Red Hat Enterprise Linux that is designed to provide a simple method for setting up a physical machine to act as a hypervisor in a Red Hat Virtualization environment. The minimal operating system contains only the packages required for the machine to act as a hypervisor, and features a Cockpit web interface for monitoring the host and performing administrative tasks. See http://cockpit-project.org/running.html for the minimum browser requirements. RHVH supports NIST 800-53 partitioning requirements to improve security. RHVH uses a NIST 800-53 partition layout by default. The host must meet the minimum host requirements . Procedure Download the RHVH ISO image from the Customer Portal: Log in to the Customer Portal at https://access.redhat.com . Click Downloads in the menu bar. Click Red Hat Virtualization . Scroll up and click Download Latest to access the product download page. Go to Hypervisor Image for RHV 4.3 and and click Download Now . Create a bootable media device. See Making Media in the Red Hat Enterprise Linux Installation Guide for more information. Start the machine on which you are installing RHVH, booting from the prepared installation media. From the boot menu, select Install RHVH 4.3 and press Enter . Note You can also press the Tab key to edit the kernel parameters. Kernel parameters must be separated by a space, and you can boot the system using the specified kernel parameters by pressing the Enter key. Press the Esc key to clear any changes to the kernel parameters and return to the boot menu. Select a language, and click Continue . Select a time zone from the Date & Time screen and click Done . 
Select a keyboard layout from the Keyboard screen and click Done . Select the device on which to install RHVH from the Installation Destination screen. Optionally, enable encryption. Click Done . Important Red Hat strongly recommends using the Automatically configure partitioning option. Select a network from the Network & Host Name screen and click Configure... to configure the connection details. Note To use the connection every time the system boots, select the Automatically connect to this network when it is available check box. For more information, see Edit Network Connections in the Red Hat Enterprise Linux 7 Installation Guide . Enter a host name in the Host name field, and click Done . Optionally configure Language Support , Security Policy , and Kdump . See Installing Using Anaconda in the Red Hat Enterprise Linux 7 Installation Guide for more information on each of the sections in the Installation Summary screen. Click Begin Installation . Set a root password and, optionally, create an additional user while RHVH installs. Warning Red Hat strongly recommends not creating untrusted users on RHVH, as this can lead to exploitation of local security vulnerabilities. Click Reboot to complete the installation. Note When RHVH restarts, nodectl check performs a health check on the host and displays the result when you log in on the command line. The message node status: OK or node status: DEGRADED indicates the health status. Run nodectl check to get more information. The service is enabled by default. 4.1.2. Enabling the Red Hat Virtualization Host Repository Register the system to receive updates. Red Hat Virtualization Host only requires one repository. This section provides instructions for registering RHVH with the Content Delivery Network , or with Red Hat Satellite 6 . Registering RHVH with the Content Delivery Network Log in to the Cockpit web interface at https:// HostFQDNorIP :9090 . Navigate to Subscriptions , click Register System , and enter your Customer Portal user name and password. The Red Hat Virtualization Host subscription is automatically attached to the system. Click Terminal . Enable the Red Hat Virtualization Host 7 repository to allow later updates to the Red Hat Virtualization Host: Registering RHVH with Red Hat Satellite 6 Log in to the Cockpit web interface at https:// HostFQDNorIP :9090 . Click Terminal . Register RHVH with Red Hat Satellite 6: 4.1.3. Advanced Installation 4.1.3.1. Custom Partitioning Custom partitioning on Red Hat Virtualization Host (RHVH) is not recommended. Red Hat strongly recommends using the Automatically configure partitioning option in the Installation Destination window. If your installation requires custom partitioning, select the I will configure partitioning option during the installation, and note that the following restrictions apply: Ensure the default LVM Thin Provisioning option is selected in the Manual Partitioning window. The following directories are required and must be on thin provisioned logical volumes: root ( / ) /home /tmp /var /var/crash /var/log /var/log/audit Important Do not create a separate partition for /usr . Doing so will cause the installation to fail. /usr must be on a logical volume that is able to change versions along with RHVH, and therefore should be left on root ( / ). For information about the required storage sizes for each partition, see Section 2.2.3, "Storage Requirements" . The /boot directory should be defined as a standard partition. The /var directory must be on a separate volume or disk. 
Only XFS or Ext4 file systems are supported. Configuring Manual Partitioning in a Kickstart File The following example demonstrates how to configure manual partitioning in a Kickstart file. Note If you use logvol --thinpool --grow , you must also include volgroup --reserved-space or volgroup --reserved-percent to reserve space in the volume group for the thin pool to grow. 4.1.3.2. Automating Red Hat Virtualization Host Deployment You can install Red Hat Virtualization Host (RHVH) without a physical media device by booting from a PXE server over the network with a Kickstart file that contains the answers to the installation questions. General instructions for installing from a PXE server with a Kickstart file are available in the Red Hat Enterprise Linux Installation Guide , as RHVH is installed in much the same way as Red Hat Enterprise Linux. RHVH-specific instructions, with examples for deploying RHVH with Red Hat Satellite, are described below. The automated RHVH deployment has 3 stages: Section 4.1.3.2.1, "Preparing the Installation Environment" Section 4.1.3.2.2, "Configuring the PXE Server and the Boot Loader" Section 4.1.3.2.3, "Creating and Running a Kickstart File" 4.1.3.2.1. Preparing the Installation Environment Log in to the Customer Portal . Click Downloads in the menu bar. Click Red Hat Virtualization . Scroll up and click Download Latest to access the product download page. Go to Hypervisor Image for RHV 4.3 and and click Download Now . Make the RHVH ISO image available over the network. See Installation Source on a Network in the Red Hat Enterprise Linux Installation Guide . Extract the squashfs.img hypervisor image file from the RHVH ISO: Note This squashfs.img file, located in the /tmp/usr/share/redhat-virtualization-host/image/ directory, is called redhat-virtualization-host- version_number _version.squashfs.img . It contains the hypervisor image for installation on the physical machine. It should not be confused with the /LiveOS/squashfs.img file, which is used by the Anaconda inst.stage2 option. 4.1.3.2.2. Configuring the PXE Server and the Boot Loader Configure the PXE server. See Preparing for a Network Installation in the Red Hat Enterprise Linux Installation Guide . Copy the RHVH boot images to the /tftpboot directory: Create a rhvh label specifying the RHVH boot images in the boot loader configuration: RHVH Boot Loader Configuration Example for Red Hat Satellite If you are using information from Red Hat Satellite to provision the host, you must create a global or host group level parameter called rhvh_image and populate it with the directory URL where the ISO is mounted or extracted: Make the content of the RHVH ISO locally available and export it to the network, for example, using an HTTPD server: 4.1.3.2.3. Creating and Running a Kickstart File Create a Kickstart file and make it available over the network. See Kickstart Installations in the Red Hat Enterprise Linux Installation Guide . Ensure that the Kickstart file meets the following RHV-specific requirements: The %packages section is not required for RHVH. Instead, use the liveimg option and specify the redhat-virtualization-host- version_number _version.squashfs.img file from the RHVH ISO image: Autopartitioning is highly recommended: Note Thin provisioning must be used with autopartitioning. The --no-home option does not work in RHVH because /home is a required directory. 
If your installation requires manual partitioning, see Section 4.1.3.1, "Custom Partitioning" for a list of limitations that apply to partitions and an example of manual partitioning in a Kickstart file. A %post section that calls the nodectl init command is required: Kickstart Example for Deploying RHVH on Its Own This Kickstart example shows you how to deploy RHVH. You can include additional commands and options as required. Kickstart Example for Deploying RHVH with Registration and Network Configuration from Satellite This Kickstart example uses information from Red Hat Satellite to configure the host network and register the host to the Satellite server. You must create a global or host group level parameter called rhvh_image and populate it with the directory URL to the squashfs.img file. ntp_server1 is also a global or host group level variable. Add the Kickstart file location to the boot loader configuration file on the PXE server: Install RHVH following the instructions in Booting from the Network Using PXE in the Red Hat Enterprise Linux Installation Guide . 4.2. Red Hat Enterprise Linux hosts 4.2.1. Installing Red Hat Enterprise Linux hosts A Red Hat Enterprise Linux host is based on a standard basic installation of Red Hat Enterprise Linux 7 on a physical server, with the Red Hat Enterprise Linux Server and Red Hat Virtualization subscriptions attached. For detailed installation instructions, see Performing a standard Red Hat Enterprise Linux installation . The host must meet the minimum host requirements . Important Virtualization must be enabled in your host's BIOS settings. For information on changing your host's BIOS settings, refer to your host's hardware documentation. Important Third-party watchdogs should not be installed on Red Hat Enterprise Linux hosts, as they can interfere with the watchdog daemon provided by VDSM. 4.2.2. Enabling the Red Hat Enterprise Linux host Repositories To use a Red Hat Enterprise Linux machine as a host, you must register the system with the Content Delivery Network, attach the Red Hat Enterprise Linux Server and Red Hat Virtualization subscriptions, and enable the host repositories. Procedure Register your system with the Content Delivery Network, entering your Customer Portal user name and password when prompted: Find the Red Hat Enterprise Linux Server and Red Hat Virtualization subscription pools and record the pool IDs: Use the pool IDs to attach the subscriptions to the system: Note To view currently attached subscriptions: To list all enabled repositories: Configure the repositories: For Red Hat Enterprise Linux 7 hosts, little endian, on IBM POWER8 hardware: For Red Hat Enterprise Linux 7 hosts, little endian, on IBM POWER9 hardware: Ensure that all packages currently installed are up to date: Reboot the machine. 4.2.3. Installing Cockpit on Red Hat Enterprise Linux hosts You can install Cockpit for monitoring the host's resources and performing administrative tasks. Procedure Install the dashboard packages: Enable and start the cockpit.socket service: Check if Cockpit is an active service in the firewall: You should see cockpit listed. If it is not, enter the following with root permissions to add cockpit as a service to your firewall: The --permanent option keeps the cockpit service active after rebooting. You can log in to the Cockpit web interface at https:// HostFQDNorIP :9090 . 4.3.
Recommended Practices for Configuring Host Networks If your network environment is complex, you may need to configure a host network manually before adding the host to the Red Hat Virtualization Manager. Red Hat recommends the following practices for configuring a host network: Configure the network with Cockpit. Alternatively, you can use nmtui or nmcli . If a network is not required for a self-hosted engine deployment or for adding a host to the Manager, configure the network in the Administration Portal after adding the host to the Manager. See Creating a New Logical Network in a Data Center or Cluster . Use the following naming conventions: VLAN devices: VLAN_NAME_TYPE_RAW_PLUS_VID_NO_PAD VLAN interfaces: physical_device . VLAN_ID (for example, eth0.23 , eth1.128 , enp3s0.50 ) Bond interfaces: bond number (for example, bond0 , bond1 ) VLANs on bond interfaces: bond number . VLAN_ID (for example, bond0.50 , bond1.128 ) Use network bonding . Networking teaming is not supported in Red Hat Virtualization and will cause errors if the host is used to deploy a self-hosted engine or added to the Manager. Use recommended bonding modes: If the ovirtmgmt network is not used by virtual machines, the network may use any supported bonding mode. If the ovirtmgmt network is used by virtual machines, see Which bonding modes work when used with a bridge that virtual machine guests or containers connect to? . Red Hat Virtualization's default bonding mode is (Mode 4) Dynamic Link Aggregation . If your switch does not support Link Aggregation Control Protocol (LACP), use (Mode 1) Active-Backup . See Bonding Modes for details. Configure a VLAN on a physical NIC as in the following example (although nmcli is used, you can use any tool): Configure a VLAN on a bond as in the following example (although nmcli is used, you can use any tool): Do not disable firewalld . Customize the firewall rules in the Administration Portal after adding the host to the Manager. See Configuring Host Firewall Rules . Important When creating a management bridge that uses a static IPv6 address, disable network manager control in its interface configuration (ifcfg) file before adding a host. See https://access.redhat.com/solutions/3981311 for more information. 4.4. Adding Standard Hosts to the Red Hat Virtualization Manager Adding a host to your Red Hat Virtualization environment can take some time, as the following steps are completed by the platform: virtualization checks, installation of packages, and creation of a bridge. Important When creating a management bridge that uses a static IPv6 address, disable network manager control in its interface configuration (ifcfg) file before adding a host. See https://access.redhat.com/solutions/3981311 for more information. Procedure From the Administration Portal, click Compute Hosts . Click New . Use the drop-down list to select the Data Center and Host Cluster for the new host. Enter the Name and the Address of the new host. The standard SSH port, port 22, is auto-filled in the SSH Port field. Select an authentication method to use for the Manager to access the host. Enter the root user's password to use password authentication. Alternatively, copy the key displayed in the SSH PublicKey field to /root/.ssh/authorized_keys on the host to use public key authentication. Optionally, click the Advanced Parameters button to change the following advanced host settings: Disable automatic firewall configuration. Add a host SSH fingerprint to increase security. 
You can add it manually, or fetch it automatically. Optionally configure power management, where the host has a supported power management card. For information on power management configuration, see Host Power Management Settings Explained in the Administration Guide . Click OK . The new host displays in the list of hosts with a status of Installing , and you can view the progress of the installation in the Events section of the Notification Drawer ( ). After a brief delay the host status changes to Up . | [
"subscription-manager repos --enable=rhel-7-server-rhvh-4-rpms",
"rpm -Uvh http://satellite.example.com/pub/katello-ca-consumer-latest.noarch.rpm # subscription-manager register --org=\" org_id \" # subscription-manager list --available # subscription-manager attach --pool= pool_id # subscription-manager repos --disable='*' --enable=rhel-7-server-rhvh-4-rpms",
"clearpart --all part /boot --fstype xfs --size=1000 --ondisk=sda part pv.01 --size=42000 --grow volgroup HostVG pv.01 --reserved-percent=20 logvol swap --vgname=HostVG --name=swap --fstype=swap --recommended logvol none --vgname=HostVG --name=HostPool --thinpool --size=40000 --grow logvol / --vgname=HostVG --name=root --thin --fstype=ext4 --poolname=HostPool --fsoptions=\"defaults,discard\" --size=6000 --grow logvol /var --vgname=HostVG --name=var --thin --fstype=ext4 --poolname=HostPool --fsoptions=\"defaults,discard\" --size=15000 logvol /var/crash --vgname=HostVG --name=var_crash --thin --fstype=ext4 --poolname=HostPool --fsoptions=\"defaults,discard\" --size=10000 logvol /var/log --vgname=HostVG --name=var_log --thin --fstype=ext4 --poolname=HostPool --fsoptions=\"defaults,discard\" --size=8000 logvol /var/log/audit --vgname=HostVG --name=var_audit --thin --fstype=ext4 --poolname=HostPool --fsoptions=\"defaults,discard\" --size=2000 logvol /home --vgname=HostVG --name=home --thin --fstype=ext4 --poolname=HostPool --fsoptions=\"defaults,discard\" --size=1000 logvol /tmp --vgname=HostVG --name=tmp --thin --fstype=ext4 --poolname=HostPool --fsoptions=\"defaults,discard\" --size=1000",
"mount -o loop /path/to/RHVH-ISO /mnt/rhvh cp /mnt/rhvh/Packages/redhat-virtualization-host-image-update* /tmp cd /tmp rpm2cpio redhat-virtualization-host-image-update* | cpio -idmv",
"cp mnt/rhvh/images/pxeboot/{vmlinuz,initrd.img} /var/lib/tftpboot/pxelinux/",
"LABEL rhvh MENU LABEL Install Red Hat Virtualization Host KERNEL /var/lib/tftpboot/pxelinux/vmlinuz APPEND initrd=/var/lib/tftpboot/pxelinux/initrd.img inst.stage2= URL/to/RHVH-ISO",
"<%# kind: PXELinux name: RHVH PXELinux %> Created for booting new hosts # DEFAULT rhvh LABEL rhvh KERNEL <%= @kernel %> APPEND initrd=<%= @initrd %> inst.ks=<%= foreman_url(\"provision\") %> inst.stage2=<%= @host.params[\"rhvh_image\"] %> intel_iommu=on console=tty0 console=ttyS1,115200n8 ssh_pwauth=1 local_boot_trigger=<%= foreman_url(\"built\") %> IPAPPEND 2",
"cp -a /mnt/rhvh/ /var/www/html/rhvh-install curl URL/to/RHVH-ISO /rhvh-install",
"liveimg --url= example.com /tmp/usr/share/redhat-virtualization-host/image/redhat-virtualization-host- version_number _version.squashfs.img",
"autopart --type=thinp",
"%post nodectl init %end",
"liveimg --url=http:// FQDN /tmp/usr/share/redhat-virtualization-host/image/redhat-virtualization-host- version_number _version.squashfs.img clearpart --all autopart --type=thinp rootpw --plaintext ovirt timezone --utc America/Phoenix zerombr text reboot %post --erroronfail nodectl init %end",
"<%# kind: provision name: RHVH Kickstart default oses: - RHVH %> install liveimg --url=<%= @host.params['rhvh_image'] %>squashfs.img network --bootproto static --ip=<%= @host.ip %> --netmask=<%= @host.subnet.mask %> --gateway=<%= @host.subnet.gateway %> --nameserver=<%= @host.subnet.dns_primary %> --hostname <%= @host.name %> zerombr clearpart --all autopart --type=thinp rootpw --iscrypted <%= root_pass %> installation answers lang en_US.UTF-8 timezone <%= @host.params['time-zone'] || 'UTC' %> keyboard us firewall --service=ssh services --enabled=sshd text reboot %post --log=/root/ks.post.log --erroronfail nodectl init <%= snippet 'subscription_manager_registration' %> <%= snippet 'kickstart_networking_setup' %> /usr/sbin/ntpdate -sub <%= @host.params['ntp_server1'] || '0.fedora.pool.ntp.org' %> /usr/sbin/hwclock --systohc /usr/bin/curl <%= foreman_url('built') %> sync systemctl reboot %end",
"APPEND initrd=/var/tftpboot/pxelinux/initrd.img inst.stage2= URL/to/RHVH-ISO inst.ks= URL/to/RHVH-ks .cfg",
"subscription-manager register",
"subscription-manager list --available",
"subscription-manager attach --pool= poolid",
"subscription-manager list --consumed",
"yum repolist",
"subscription-manager repos --disable='*' --enable=rhel-7-server-rpms --enable=rhel-7-server-rhv-4-mgmt-agent-rpms --enable=rhel-7-server-ansible-2.9-rpms",
"subscription-manager repos --disable='*' --enable=rhel-7-server-rhv-4-mgmt-agent-for-power-le-rpms --enable=rhel-7-for-power-le-rpms",
"subscription-manager repos --disable='*' --enable=rhel-7-server-rhv-4-mgmt-agent-for-power-9-rpms --enable=rhel-7-for-power-9-rpms",
"yum update",
"yum install cockpit-ovirt-dashboard",
"systemctl enable cockpit.socket systemctl start cockpit.socket",
"firewall-cmd --list-services",
"firewall-cmd --permanent --add-service=cockpit",
"nmcli connection add type vlan con-name vlan50 ifname eth0.50 dev eth0 id 50 nmcli con mod vlan50 +ipv4.dns 8.8.8.8 +ipv4.addresses 123.123 .0.1/24 +ivp4.gateway 123.123 .0.254",
"nmcli connection add type bond con-name bond0 ifname bond0 bond.options \"mode=active-backup,miimon=100\" ipv4.method disabled ipv6.method ignore nmcli connection add type ethernet con-name eth0 ifname eth0 master bond0 slave-type bond nmcli connection add type ethernet con-name eth1 ifname eth1 master bond0 slave-type bond nmcli connection add type vlan con-name vlan50 ifname bond0.50 dev bond0 id 50 nmcli con mod vlan50 +ipv4.dns 8.8.8.8 +ipv4.addresses 123.123 .0.1/24 +ivp4.gateway 123.123 .0.254"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/installing_red_hat_virtualization_as_a_standalone_manager_with_remote_databases/installing_hosts_for_rhv_sm_remotedb_deploy |
Chapter 10. Managing Ceph OSDs on the dashboard | Chapter 10. Managing Ceph OSDs on the dashboard As a storage administrator, you can monitor and manage OSDs on the Red Hat Ceph Storage Dashboard. Some of the capabilities of the Red Hat Ceph Storage Dashboard are: List OSDs, their status, statistics, information such as attributes, metadata, device health, performance counters and performance details. Mark OSDs down, in, out, lost, purge, reweight, scrub, deep-scrub, destroy, delete, and select profiles to adjust backfilling activity. List all drives associated with an OSD. Set and change the device class of an OSD. Deploy OSDs on new drives and hosts. Prerequisites A running Red Hat Ceph Storage cluster cluster-manager level of access on the Red Hat Ceph Storage dashboard 10.1. Managing the OSDs on the Ceph dashboard You can carry out the following actions on a Ceph OSD on the Red Hat Ceph Storage Dashboard: Create a new OSD. Edit the device class of the OSD. Mark the Flags as No Up , No Down , No In , or No Out . Scrub and deep-scrub the OSDs. Reweight the OSDs. Mark the OSDs Out , In , Down , or Lost . Purge the OSDs. Destroy the OSDs. Delete the OSDs. Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. Hosts, Monitors and Manager Daemons are added to the storage cluster. Procedure Log in to the Dashboard. From the Cluster drop-down menu, select OSDs . Creating an OSD To create the OSD, click Create . Figure 10.1. Add device for OSDs Note Ensure you have an available host and a few available devices. You can check for available devices in Physical Disks under the Cluster drop-down menu. In the Create OSDs window, from Deployment Options, select one of the following options: Cost/Capacity-optimized : The cluster gets deployed with all available HDDs. Throughput-optimized : Slower devices are used to store data and faster devices are used to store journals/WALs. IOPS-optimized : All the available NVMEs are used to deploy OSDs. From the Advanced Mode, you can add primary, WAL and DB devices by clicking +Add . Primary devices : Primary storage devices contain all OSD data. WAL devices : Write-Ahead-Log devices are used for BlueStore's internal journal and are used only if the WAL device is faster than the primary device. For example, NVMEs or SSDs. DB devices : DB devices are used to store BlueStore's internal metadata and are used only if the DB device is faster than the primary device. For example, NVMEs or SSDs. If you want to encrypt your data for security purposes, under Features , select encryption . Click the Preview button and in the OSD Creation Preview dialog box, click Create . You get a notification that the OSD was created successfully. The OSD status changes from in and down to in and up . Editing an OSD To edit an OSD, select the row. From Edit drop-down menu, select Edit . Edit the device class. Click Edit OSD . Figure 10.2. Edit an OSD You get a notification that the OSD was updated successfully. Marking the Flags of OSDs To mark the flag of the OSD, select the row. From Edit drop-down menu, select Flags . Mark the Flags with No Up , No Down , No In , or No Out . Click Update . Figure 10.3. Marking Flags of an OSD You get a notification that the flags of the OSD were updated successfully. Scrubbing the OSDs To scrub the OSD, select the row. From Edit drop-down menu, select Scrub . In the OSDs Scrub dialog box, click Update . Figure 10.4. 
Scrubbing an OSD You get a notification that the scrubbing of the OSD was initiated successfully. Deep-scrubbing the OSDs To deep-scrub the OSD, select the row. From Edit drop-down menu, select Deep scrub . In the OSDs Deep Scrub dialog box, click Update . Figure 10.5. Deep-scrubbing an OSD You get a notification that the deep scrubbing of the OSD was initiated successfully. Reweighting the OSDs To reweight the OSD, select the row. From Edit drop-down menu, select Reweight . In the Reweight OSD dialog box, enter a value between zero and one. Click Reweight . Figure 10.6. Reweighting an OSD Marking OSDs Out To mark the OSD out, select the row. From Edit drop-down menu, select Mark Out . In the Mark OSD out dialog box, click Mark Out . Figure 10.7. Marking OSDs out The status of the OSD will change to out . Marking OSDs In To mark the OSD in, select the OSD row that is in out status. From Edit drop-down menu, select Mark In . In the Mark OSD in dialog box, click Mark In . Figure 10.8. Marking OSDs in The status of the OSD will change to in . Marking OSDs Down To mark the OSD down, select the row. From Edit drop-down menu, select Mark Down . In the Mark OSD down dialog box, click Mark Down . Figure 10.9. Marking OSDs down The status of the OSD will change to down . Marking OSDs Lost To mark the OSD lost, select the OSD in out and down status. From Edit drop-down menu, select Mark Lost . In the Mark OSD Lost dialog box, check Yes, I am sure option, and click Mark Lost . Figure 10.10. Marking OSDs Lost Purging OSDs To purge the OSD, select the OSD in down status. From Edit drop-down menu, select Purge . In the Purge OSDs dialog box, check Yes, I am sure option, and click Purge OSD . Figure 10.11. Purging OSDs All the flags are reset and the OSD is back in in and up status. Destroying OSDs To destroy the OSD, select the OSD in down status. From Edit drop-down menu, select Destroy . In the Destroy OSDs dialog box, check Yes, I am sure option, and click Destroy OSD . Figure 10.12. Destroying OSDs The status of the OSD changes to destroyed . Deleting OSDs To delete the OSD, select the OSD in down status. From Edit drop-down menu, select Delete . In the Destroy OSDs dialog box, check Yes, I am sure option, and click Delete OSD . Note You can preserve the OSD_ID when you have to replace the failed OSD. Figure 10.13. Deleting OSDs 10.2. Replacing the failed OSDs on the Ceph dashboard You can replace the failed OSDs in a Red Hat Ceph Storage cluster with the cluster-manager level of access on the dashboard. One of the highlights of this feature on the dashboard is that the OSD IDs can be preserved while replacing the failed OSDs. Prerequisites A running Red Hat Ceph Storage cluster. At least cluster-manager level of access to the Ceph Dashboard. At least one of the OSDs is down. Procedure On the dashboard, you can identify the failed OSDs in the following ways: Dashboard AlertManager pop-up notifications. Dashboard landing page showing HEALTH_WARN status. Dashboard landing page showing failed OSDs. Dashboard OSD page showing failed OSDs. In this example, you can see that one of the OSDs is down on the landing page of the dashboard. Apart from this, on the physical drive, you can view the LED lights blinking if one of the OSDs is down. Click OSDs . Select the out and down OSD: From the Edit drop-down menu, select Flags and select No Up and click Update . From the Edit drop-down menu, select Delete . 
In the Delete OSD dialog box, select the Preserve OSD ID(s) for replacement and Yes, I am sure check boxes. Click Delete OSD . Wait till the status of the OSD changes to out and destroyed status. Optional: If you want to change the No Up Flag for the entire cluster, in the Cluster-wide configuration drop-down menu, select Flags . In Cluster-wide OSDs Flags dialog box, select No Up and click Update. Optional: If the OSDs are down due to a hard disk failure, replace the physical drive: If the drive is hot-swappable, replace the failed drive with a new one. If the drive is not hot-swappable and the host contains multiple OSDs, you might have to shut down the whole host and replace the physical drive. Consider preventing the cluster from backfilling. See the Stopping and Starting Rebalancing chapter in the Red Hat Ceph Storage Troubleshooting Guide for details. When the drive appears under the /dev/ directory, make a note of the drive path. If you want to add the OSD manually, find the OSD drive and format the disk. If the new disk has data, zap the disk: Syntax Example From the Create drop-down menu, select Create . In the Create OSDs window, click +Add for Primary devices. In the Primary devices dialog box, from the Hostname drop-down list, select any one filter. From Any drop-down list, select the respective option. Note You have to select the Hostname first and then at least one filter to add the devices. For example, from Hostname list, select Type and from Any list select hdd . Select Vendor and from Any list, select ATA Click Add . In the Create OSDs window , click the Preview button. In the OSD Creation Preview dialog box, Click Create . You will get a notification that the OSD is created. The OSD will be in out and down status. Select the newly created OSD that has out and down status. In the Edit drop-down menu, select Mark-in . In the Mark OSD in window, select Mark in . In the Edit drop-down menu, select Flags . Uncheck No Up and click Update . Optional: If you have changed the No Up Flag before for cluster-wide configuration, in the Cluster-wide configuration menu, select Flags . In Cluster-wide OSDs Flags dialog box, uncheck No Up and click Update . Verification Verify that the OSD that was destroyed is created on the device and the OSD ID is preserved. Additional Resources For more information on Down OSDs, see the Down OSDs section in the Red Hat Ceph Storage Troubleshooting Guide . For additional assistance see the Red Hat Support for service section in the Red Hat Ceph Storage Troubleshooting Guide . For more information on system roles, see the Managing roles on the Ceph dashboard section in the Red Hat Ceph Storage Dashboard Guide . | [
"ceph orch device zap HOST_NAME PATH --force",
"ceph orch device zap ceph-adm2 /dev/sdc --force"
] | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/dashboard_guide/management-of-ceph-osds-on-the-dashboard |
8.55. glusterfs | 8.55. glusterfs 8.55.1. RHBA-2013:1641 - glusterfs bug fix and enhancement update Updated glusterfs packages that fix several bugs and add one enhancement are now available for Red Hat Enterprise Linux 6. Red Hat Storage is software only, scale-out storage that provides flexible and affordable unstructured data storage for the enterprise. GlusterFS, a key building block of Red Hat Storage, is based on a stackable user-space design and can deliver exceptional performance for diverse workloads. GlusterFS aggregates various storage servers over network interconnects into one large, parallel network file system. Bug Fixes BZ# 998778 Previously, the "errno" value was not set correctly during an API failure. Consequently, applications using API could behave unpredictably. With this update, the value is set properly during API failures and the applications work as expected. BZ# 998832 Previously, the glusterfs-api library handled all signals that were sent to applications using glusterfs-api. As a consequence, glusterfs-api interpreted incorrectly all the the signals that were not used by this library. With this update, glusterfs-api no longer handles the signals that it does not use so that such signals are now interpreted properly. BZ# 1017014 Previously, the glfs_fini() function did not return NULL, even if the libgfapi library successfully cleaned up all resources. Consequently, an attempt to use the "qemu-img create" command, which used libgfapi, failed. The underlying source code has been modified so that the function returns NULL when the libgfapi cleanup is successful, and the command now works as expected. Enhancement BZ# 916645 Native Support for GlusterFS in QEMU has been included to glusterfs packages. This support allows native access to GlusterFS volumes using the libgfapi library instead of through a locally mounted FUSE file system. This native approach offers considerable performance improvements. Users of glusterfs are advised to upgrade to these updated packages, which fix these bugs and add this enhancement. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/glusterfs |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/data_grid_cross-site_replication/making-open-source-more-inclusive_datagrid |
Chapter 6. Managing applications with MTA | Chapter 6. Managing applications with MTA You can use the Migration Toolkit for Applications (MTA) user interface to perform the following tasks: Add applications Assign application credentials Import a list of applications Download a CSV template for importing application lists Create application migration waves Create Jira issues for migration waves MTA user interface applications have the following attributes: Name (free text) Description (optional, free text) Business service (optional, chosen from a list) Tags (optional, chosen from a list) Owner (optional, chosen from a list) Contributors (optional, chosen from a list) Source code (a path entered by the user) Binary (a path entered by the user) 6.1. Adding a new application You can add a new application to the Application Inventory for subsequent assessment and analysis. Tip Before creating an application, set up business services, check tags and tag categories, and create additions as needed. Prerequisites You are logged in to an MTA server. Procedure In the Migration view, click Application Inventory . Click Create new . Under Basic information , enter the following fields: Name : A unique name for the new application. Description : A short description of the application (optional). Business service : A purpose of the application (optional). Manual tags : Software tags that characterize the application (optional, one or more). Owner : A registered software owner from the drop-down list (optional). Contributors : Contributors from the drop-down list (optional, one or more). Comments : Relevant comments on the application (optional). Click Source Code and enter the following fields: Repository type : Git or Subversion . Source repository : A URL of the repository where the software code is saved. For Subversion: this must be either the URL to the root of the repository or a fully qualified URL which (optionally) includes the branch and nested directory. When fully qualified, the Branch and Root path must be blank. Branch : An application code branch in the repository (optional). For Git: this may be any reference; commit-hash , branch or tag . For Subversion: this may be a fully qualified path to a branch or tag, for example, branches/stable or tags/stable . This must be blank when the Source repository URL includes the branch. Root path : A root path inside the repository for the target application (optional). For Subversion: this must be blank when the Source Repository URL includes the root path. NOTE: If you enter any value in either the Branch or Root path fields, the Source repository field becomes mandatory. Optional: Click Binary and enter the following fields: Group : The Maven group for the application artifact. Artifact : The Maven artifact for the application. Version : A software version of the application. Packaging : The packaging for the application artifact, for example, JAR , WAR , or EAR . NOTE: If you enter any value in any of the Binary section fields, all fields automatically become mandatory. Click Create . The new application appears in the list of defined applications. Automated Tasks After adding a new application to the Application Inventory , you can set your cursor to hover over the application name to see the automated tasks spawned by adding the application. The language discovery task identifies the programming languages in the application. The technology discovery task identifies specific technologies in the application. 
The tasks automatically add appropriate tags to the application, reducing the effort involved in manually assigning tags to the application. After these tasks are complete, the number of tags added to the application will appear under the Tags column. To view the tags: Click on the application's row entry. A side pane opens. Click the Tags tab. The tags attached to the application are displayed. You can add additional tags manually as needed. When MTA analyzes the application, it can add additional tags to the application automatically. 6.2. Editing an application You can edit an existing application in the Application Inventory and re-run an assessment or analysis for this application. Prerequisites You are logged in to an MTA server. Procedure In the Migration view, click Application Inventory . Select the Migration working mode. Click Application Inventory in the left menu bar. A list of available applications appears in the main pane. Click Edit ( ) to open the application settings. Review the application settings. For a list of application settings, see Adding an application . If you changed any application settings, click Save . Note After editing an application, MTA re-spawns the language discovery and technology discovery tasks. 6.3. Assigning credentials to an application You can assign credentials to one or more applications. Procedure In the Migration view, click Application inventory . Click the Options menu ( ) to the right of Analyze and select Manage credentials . Select one credential from the Source credentials list and from the Maven settings list. Click Save . 6.4. Importing a list of applications You can import a .csv file that contains a list of applications and their attributes to the Migration Toolkit for Applications (MTA) user interface. Note Importing a list of applications does not overwrite any of the existing applications. Procedure Review the import file to ensure it contains all the required information in the required format. In the Migration view, click Application Inventory . Click the Options menu ( ). Click Import . Select the desired file and click Open . Optional: Select Enable automatic creation of missing entities . This option is selected by default. Verify that the import has completed and check the number of accepted or rejected rows. Review the imported applications by clicking the arrow to the left of the checkbox. Important Accepted rows might not match the number of applications in the Application inventory list because some rows are dependencies. To verify, check the Record Type column of the CSV file for applications defined as 1 and dependencies defined as 2 . 6.5. Downloading a CSV template You can download a CSV template for importing application lists by using the Migration Toolkit for Applications (MTA) user interface. Procedure In the Migration view, click Application inventory . Click the Options menu ( ) to the right of Review . Click Manage imports to open the Application imports page. Click the Options menu ( ) to the right of Import . Click Download CSV template . 6.6. Creating a migration wave A migration wave is a group applications that you can migrate on a given schedule. You can track each migration by exporting a list of the wave's applications to the Jira issue management system. This automatically creates a separate Jira issue for each application of the migration wave. Procedure In the Migration view, click Migration waves . Click Create new . The New migration wave window opens. 
Enter the following information: Name (optional). If the name is not given, you can use the start and end dates to identify migration waves. Potential start date . This date must be later than the current date. Potential end date . This date must be later than the start date. Stakeholders (optional) Stakeholder groups (optional) Click Create . The new migration wave appears in the list of existing migration waves. To assign applications to the migration wave, click the Options menu ( ) to the right of the migration wave and select Manage applications . The Manage applications window opens that displays the list of applications that are not assigned to any other migration wave. Select the checkboxes of the applications that you want to assign to the migration wave. Click Save . Note The owner and the contributors of each application associated with the migration wave are automatically added to the migration wave's list of stakeholders. Optional: To update a migration wave, select Update from the migration wave's Options menu ( ). The Update migration wave window opens. 6.7. Creating Jira issues for a migration wave You can use a migration wave to create Jira issues automatically for each application assigned to the migration wave. A separate Jira issue is created for each application associated with the migration wave. The following fields of each issue are filled in automatically: Title: Migrate <application name> Reporter: Username of the token owner. Description: Created by Konveyor Note You cannot delete an application if it is linked to a Jira ticket or is associated with a migration wave. To unlink the application from the Jira ticket, click the Unlink from Jira icon in the details view of the application or in the details view of a migration wave. Prerequisites You configured Jira connection. For more information, see Creating and configuring a Jira connection . Procedure In the Migration view, click Migration waves . Click the Options menu ( ) to the right of the migration wave for which you want to create Jira issues and select Export to Issue Manager . The Export to Issue Manager window opens. Select the Jira Cloud or Jira Server/Datacenter instance type. Select the instance, project, and issue type from the lists. Click Export . The status of the migration wave on the Migration waves page changes to Issues Created . Optional: To see the status of each individual application of a migration wave, click the Status column. Optional: To see if any particular application is associated with a migration wave, open the application's Details tab on the Application inventory page. | null | https://docs.redhat.com/en/documentation/migration_toolkit_for_applications/7.2/html/user_interface_guide/working-with-applications-in-the-ui |
Chapter 4. Controlling pod placement onto nodes (scheduling) | Chapter 4. Controlling pod placement onto nodes (scheduling) 4.1. Controlling pod placement using the scheduler Pod scheduling is an internal process that determines placement of new pods onto nodes within the cluster. The scheduler code has a clean separation that watches new pods as they get created and identifies the most suitable node to host them. It then creates bindings (pod to node bindings) for the pods using the master API. Default pod scheduling OpenShift Container Platform comes with a default scheduler that serves the needs of most users. The default scheduler uses both inherent and customization tools to determine the best fit for a pod. Advanced pod scheduling In situations where you might want more control over where new pods are placed, the OpenShift Container Platform advanced scheduling features allow you to configure a pod so that the pod is required or has a preference to run on a particular node or alongside a specific pod. You can control pod placement by using the following scheduling features: Scheduler profiles Pod affinity and anti-affinity rules Node affinity Node selectors Taints and tolerations Node overcommitment 4.1.1. About the default scheduler The default OpenShift Container Platform pod scheduler is responsible for determining the placement of new pods onto nodes within the cluster. It reads data from the pod and finds a node that is a good fit based on configured profiles. It is completely independent and exists as a standalone solution. It does not modify the pod; it creates a binding for the pod that ties the pod to the particular node. 4.1.1.1. Understanding default scheduling The existing generic scheduler is the default platform-provided scheduler engine that selects a node to host the pod in a three-step operation: Filters the nodes The available nodes are filtered based on the constraints or requirements specified. This is done by running each node through the list of filter functions called predicates , or filters . Prioritizes the filtered list of nodes This is achieved by passing each node through a series of priority , or scoring , functions that assign it a score between 0 - 10, with 0 indicating a bad fit and 10 indicating a good fit to host the pod. The scheduler configuration can also take in a simple weight (positive numeric value) for each scoring function. The node score provided by each scoring function is multiplied by the weight (default weight for most scores is 1) and then combined by adding the scores for each node provided by all the scores. This weight attribute can be used by administrators to give higher importance to some scores. Selects the best fit node The nodes are sorted based on their scores and the node with the highest score is selected to host the pod. If multiple nodes have the same high score, then one of them is selected at random. 4.1.2. Scheduler use cases One of the important use cases for scheduling within OpenShift Container Platform is to support flexible affinity and anti-affinity policies. 4.1.2.1. Infrastructure topological levels Administrators can define multiple topological levels for their infrastructure (nodes) by specifying labels on nodes. For example: region=r1 , zone=z1 , rack=s1 . These label names have no particular meaning and administrators are free to name their infrastructure levels anything, such as city/building/room. 
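For instance, a minimal sketch of applying such topology labels from the CLI (the node names and label values here are placeholders, not taken from the original text):
USD oc label node node1 region=r1 zone=z1 rack=s1
USD oc label node node2 region=r1 zone=z2 rack=s7
Affinity and anti-affinity rules can then refer to the region , zone , or rack keys at whichever level is appropriate.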
Also, administrators can define any number of levels for their infrastructure topology, with three levels usually being adequate (such as: regions zones racks ). Administrators can specify affinity and anti-affinity rules at each of these levels in any combination. 4.1.2.2. Affinity Administrators should be able to configure the scheduler to specify affinity at any topological level, or even at multiple levels. Affinity at a particular level indicates that all pods that belong to the same service are scheduled onto nodes that belong to the same level. This handles any latency requirements of applications by allowing administrators to ensure that peer pods do not end up being too geographically separated. If no node is available within the same affinity group to host the pod, then the pod is not scheduled. If you need greater control over where the pods are scheduled, see Controlling pod placement on nodes using node affinity rules and Placing pods relative to other pods using affinity and anti-affinity rules . These advanced scheduling features allow administrators to specify which node a pod can be scheduled on and to force or reject scheduling relative to other pods. 4.1.2.3. Anti-affinity Administrators should be able to configure the scheduler to specify anti-affinity at any topological level, or even at multiple levels. Anti-affinity (or 'spread') at a particular level indicates that all pods that belong to the same service are spread across nodes that belong to that level. This ensures that the application is well spread for high availability purposes. The scheduler tries to balance the service pods across all applicable nodes as evenly as possible. If you need greater control over where the pods are scheduled, see Controlling pod placement on nodes using node affinity rules and Placing pods relative to other pods using affinity and anti-affinity rules . These advanced scheduling features allow administrators to specify which node a pod can be scheduled on and to force or reject scheduling relative to other pods. 4.2. Scheduling pods using a scheduler profile You can configure OpenShift Container Platform to use a scheduling profile to schedule pods onto nodes within the cluster. 4.2.1. About scheduler profiles You can specify a scheduler profile to control how pods are scheduled onto nodes. The following scheduler profiles are available: LowNodeUtilization This profile attempts to spread pods evenly across nodes to get low resource usage per node. This profile provides the default scheduler behavior. HighNodeUtilization This profile attempts to place as many pods as possible on to as few nodes as possible. This minimizes node count and has high resource usage per node. Note Switching to the HighNodeUtilization scheduler profile will result in all pods of a ReplicaSet object being scheduled on the same node. This will add an increased risk for pod failure if the node fails. NoScoring This is a low-latency profile that strives for the quickest scheduling cycle by disabling all score plugins. This might sacrifice better scheduling decisions for faster ones. 4.2.2. Configuring a scheduler profile You can configure the scheduler to use a scheduler profile. Prerequisites Access to the cluster as a user with the cluster-admin role. Procedure Edit the Scheduler object: USD oc edit scheduler cluster Specify the profile to use in the spec.profile field: apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster #... 
spec: mastersSchedulable: false profile: HighNodeUtilization 1 #... 1 Set to LowNodeUtilization , HighNodeUtilization , or NoScoring . Save the file to apply the changes. 4.3. Placing pods relative to other pods using affinity and anti-affinity rules Affinity is a property of pods that controls the nodes on which they prefer to be scheduled. Anti-affinity is a property of pods that prevents a pod from being scheduled on a node. In OpenShift Container Platform, pod affinity and pod anti-affinity allow you to constrain which nodes your pod is eligible to be scheduled on based on the key-value labels on other pods. 4.3.1. Understanding pod affinity Pod affinity and pod anti-affinity allow you to constrain which nodes your pod is eligible to be scheduled on based on the key/value labels on other pods. Pod affinity can tell the scheduler to locate a new pod on the same node as other pods if the label selector on the new pod matches the label on the current pod. Pod anti-affinity can prevent the scheduler from locating a new pod on the same node as pods with the same labels if the label selector on the new pod matches the label on the current pod. For example, using affinity rules, you could spread or pack pods within a service or relative to pods in other services. Anti-affinity rules allow you to prevent pods of a particular service from scheduling on the same nodes as pods of another service that are known to interfere with the performance of the pods of the first service. Or, you could spread the pods of a service across nodes, availability zones, or availability sets to reduce correlated failures. Note A label selector might match pods with multiple pod deployments. Use unique combinations of labels when configuring anti-affinity rules to avoid matching pods. There are two types of pod affinity rules: required and preferred . Required rules must be met before a pod can be scheduled on a node. Preferred rules specify that, if the rule is met, the scheduler tries to enforce the rules, but does not guarantee enforcement. Note Depending on your pod priority and preemption settings, the scheduler might not be able to find an appropriate node for a pod without violating affinity requirements. If so, a pod might not be scheduled. To prevent this situation, carefully configure pod affinity with equal-priority pods. You configure pod affinity/anti-affinity through the Pod spec files. You can specify a required rule, a preferred rule, or both. If you specify both, the node must first meet the required rule, then attempts to meet the preferred rule. The following example shows a Pod spec configured for pod affinity and anti-affinity. In this example, the pod affinity rule indicates that the pod can schedule onto a node only if that node has at least one already-running pod with a label that has the key security and value S1 . The pod anti-affinity rule says that the pod prefers to not schedule onto a node if that node is already running a pod with label having key security and value S2 . 
Sample Pod config file with pod affinity apiVersion: v1 kind: Pod metadata: name: with-pod-affinity spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault affinity: podAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: 2 - labelSelector: matchExpressions: - key: security 3 operator: In 4 values: - S1 5 topologyKey: topology.kubernetes.io/zone containers: - name: with-pod-affinity image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] 1 Stanza to configure pod affinity. 2 Defines a required rule. 3 5 The key and value (label) that must be matched to apply the rule. 4 The operator represents the relationship between the label on the existing pod and the set of values in the matchExpression parameters in the specification for the new pod. Can be In , NotIn , Exists , or DoesNotExist . Sample Pod config file with pod anti-affinity apiVersion: v1 kind: Pod metadata: name: with-pod-antiaffinity spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault affinity: podAntiAffinity: 1 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 3 podAffinityTerm: labelSelector: matchExpressions: - key: security 4 operator: In 5 values: - S2 topologyKey: kubernetes.io/hostname containers: - name: with-pod-affinity image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] 1 Stanza to configure pod anti-affinity. 2 Defines a preferred rule. 3 Specifies a weight for a preferred rule. The node with the highest weight is preferred. 4 Description of the pod label that determines when the anti-affinity rule applies. Specify a key and value for the label. 5 The operator represents the relationship between the label on the existing pod and the set of values in the matchExpression parameters in the specification for the new pod. Can be In , NotIn , Exists , or DoesNotExist . Note If labels on a node change at runtime such that the affinity rules on a pod are no longer met, the pod continues to run on the node. 4.3.2. Configuring a pod affinity rule The following steps demonstrate a simple two-pod configuration that creates pod with a label and a pod that uses affinity to allow scheduling with that pod. Note You cannot add an affinity directly to a scheduled pod. Procedure Create a pod with a specific label in the pod spec: Create a YAML file with the following content: apiVersion: v1 kind: Pod metadata: name: security-s1 labels: security: S1 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: security-s1 image: docker.io/ocpqe/hello-pod securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault Create the pod. USD oc create -f <pod-spec>.yaml When creating other pods, configure the following parameters to add the affinity: Create a YAML file with the following content: apiVersion: v1 kind: Pod metadata: name: security-s1-east # ... spec: affinity: 1 podAffinity: requiredDuringSchedulingIgnoredDuringExecution: 2 - labelSelector: matchExpressions: - key: security 3 values: - S1 operator: In 4 topologyKey: topology.kubernetes.io/zone 5 # ... 1 Adds a pod affinity. 2 Configures the requiredDuringSchedulingIgnoredDuringExecution parameter or the preferredDuringSchedulingIgnoredDuringExecution parameter. 3 Specifies the key and values that must be met. If you want the new pod to be scheduled with the other pod, use the same key and values parameters as the label on the first pod. 
4 Specifies an operator . The operator can be In , NotIn , Exists , or DoesNotExist . For example, use the operator In to require the label to be in the node. 5 Specify a topologyKey , which is a prepopulated Kubernetes label that the system uses to denote such a topology domain. Create the pod. USD oc create -f <pod-spec>.yaml 4.3.3. Configuring a pod anti-affinity rule The following steps demonstrate a simple two-pod configuration that creates pod with a label and a pod that uses an anti-affinity preferred rule to attempt to prevent scheduling with that pod. Note You cannot add an affinity directly to a scheduled pod. Procedure Create a pod with a specific label in the pod spec: Create a YAML file with the following content: apiVersion: v1 kind: Pod metadata: name: security-s1 labels: security: S1 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: security-s1 image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] Create the pod. USD oc create -f <pod-spec>.yaml When creating other pods, configure the following parameters: Create a YAML file with the following content: apiVersion: v1 kind: Pod metadata: name: security-s2-east # ... spec: # ... affinity: 1 podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 3 podAffinityTerm: labelSelector: matchExpressions: - key: security 4 values: - S1 operator: In 5 topologyKey: kubernetes.io/hostname 6 # ... 1 Adds a pod anti-affinity. 2 Configures the requiredDuringSchedulingIgnoredDuringExecution parameter or the preferredDuringSchedulingIgnoredDuringExecution parameter. 3 For a preferred rule, specifies a weight for the node, 1-100. The node that with highest weight is preferred. 4 Specifies the key and values that must be met. If you want the new pod to not be scheduled with the other pod, use the same key and values parameters as the label on the first pod. 5 Specifies an operator . The operator can be In , NotIn , Exists , or DoesNotExist . For example, use the operator In to require the label to be in the node. 6 Specifies a topologyKey , which is a prepopulated Kubernetes label that the system uses to denote such a topology domain. Create the pod. USD oc create -f <pod-spec>.yaml 4.3.4. Sample pod affinity and anti-affinity rules The following examples demonstrate pod affinity and pod anti-affinity. 4.3.4.1. Pod Affinity The following example demonstrates pod affinity for pods with matching labels and label selectors. The pod team4 has the label team:4 . apiVersion: v1 kind: Pod metadata: name: team4 labels: team: "4" # ... spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: ocp image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] # ... The pod team4a has the label selector team:4 under podAffinity . apiVersion: v1 kind: Pod metadata: name: team4a # ... spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: team operator: In values: - "4" topologyKey: kubernetes.io/hostname containers: - name: pod-affinity image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] # ... The team4a pod is scheduled on the same node as the team4 pod. 4.3.4.2. 
Pod Anti-affinity The following example demonstrates pod anti-affinity for pods with matching labels and label selectors. The pod pod-s1 has the label security:s1 . apiVersion: v1 kind: Pod metadata: name: pod-s1 labels: security: s1 # ... spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: ocp image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] # ... The pod pod-s2 has the label selector security:s1 under podAntiAffinity . apiVersion: v1 kind: Pod metadata: name: pod-s2 # ... spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: security operator: In values: - s1 topologyKey: kubernetes.io/hostname containers: - name: pod-antiaffinity image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] # ... The pod pod-s2 cannot be scheduled on the same node as pod-s1 . 4.3.4.3. Pod Affinity with no Matching Labels The following example demonstrates pod affinity for pods without matching labels and label selectors. The pod pod-s1 has the label security:s1 . apiVersion: v1 kind: Pod metadata: name: pod-s1 labels: security: s1 # ... spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: ocp image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] # ... The pod pod-s2 has the label selector security:s2 . apiVersion: v1 kind: Pod metadata: name: pod-s2 # ... spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: security operator: In values: - s2 topologyKey: kubernetes.io/hostname containers: - name: pod-affinity image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] # ... The pod pod-s2 is not scheduled unless there is a node with a pod that has the security:s2 label. If there is no other pod with that label, the new pod remains in a pending state: Example output NAME READY STATUS RESTARTS AGE IP NODE pod-s2 0/1 Pending 0 32s <none> 4.3.5. Using pod affinity and anti-affinity to control where an Operator is installed By default, when you install an Operator, OpenShift Container Platform installs the Operator pod to one of your worker nodes randomly. However, there might be situations where you want that pod scheduled on a specific node or set of nodes. The following examples describe situations where you might want to schedule an Operator pod to a specific node or set of nodes: If an Operator requires a particular platform, such as amd64 or arm64 If an Operator requires a particular operating system, such as Linux or Windows If you want Operators that work together scheduled on the same host or on hosts located on the same rack If you want Operators dispersed throughout the infrastructure to avoid downtime due to network or hardware issues You can control where an Operator pod is installed by adding a pod affinity or anti-affinity to the Operator's Subscription object. 
The following example shows how to use pod anti-affinity to prevent the installation the Custom Metrics Autoscaler Operator from any node that has pods with a specific label: Pod affinity example that places the Operator pod on one or more specific nodes apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: podAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: app operator: In values: - test topologyKey: kubernetes.io/hostname #... 1 A pod affinity that places the Operator's pod on a node that has pods with the app=test label. Pod anti-affinity example that prevents the Operator pod from one or more specific nodes apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: podAntiAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: cpu operator: In values: - high topologyKey: kubernetes.io/hostname #... 1 A pod anti-affinity that prevents the Operator's pod from being scheduled on a node that has pods with the cpu=high label. Procedure To control the placement of an Operator pod, complete the following steps: Install the Operator as usual. If needed, ensure that your nodes are labeled to properly respond to the affinity. Edit the Operator Subscription object to add an affinity: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: podAntiAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: podAffinityTerm: labelSelector: matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-185-229.ec2.internal topologyKey: topology.kubernetes.io/zone #... 1 Add a podAffinity or podAntiAffinity . Verification To ensure that the pod is deployed on the specific node, run the following command: USD oc get pods -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES custom-metrics-autoscaler-operator-5dcc45d656-bhshg 1/1 Running 0 50s 10.131.0.20 ip-10-0-185-229.ec2.internal <none> <none> 4.4. Controlling pod placement on nodes using node affinity rules Affinity is a property of pods that controls the nodes on which they prefer to be scheduled. In OpenShift Container Platform node affinity is a set of rules used by the scheduler to determine where a pod can be placed. The rules are defined using custom labels on the nodes and label selectors specified in pods. 4.4.1. Understanding node affinity Node affinity allows a pod to specify an affinity towards a group of nodes it can be placed on. The node does not have control over the placement. For example, you could configure a pod to only run on a node with a specific CPU or in a specific availability zone. There are two types of node affinity rules: required and preferred . Required rules must be met before a pod can be scheduled on a node. Preferred rules specify that, if the rule is met, the scheduler tries to enforce the rules, but does not guarantee enforcement. 
Note If labels on a node change at runtime that results in an node affinity rule on a pod no longer being met, the pod continues to run on the node. You configure node affinity through the Pod spec file. You can specify a required rule, a preferred rule, or both. If you specify both, the node must first meet the required rule, then attempts to meet the preferred rule. The following example is a Pod spec with a rule that requires the pod be placed on a node with a label whose key is e2e-az-NorthSouth and whose value is either e2e-az-North or e2e-az-South : Example pod configuration file with a node affinity required rule apiVersion: v1 kind: Pod metadata: name: with-node-affinity spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault affinity: nodeAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: 2 nodeSelectorTerms: - matchExpressions: - key: e2e-az-NorthSouth 3 operator: In 4 values: - e2e-az-North 5 - e2e-az-South 6 containers: - name: with-node-affinity image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] # ... 1 The stanza to configure node affinity. 2 Defines a required rule. 3 5 6 The key/value pair (label) that must be matched to apply the rule. 4 The operator represents the relationship between the label on the node and the set of values in the matchExpression parameters in the Pod spec. This value can be In , NotIn , Exists , or DoesNotExist , Lt , or Gt . The following example is a node specification with a preferred rule that a node with a label whose key is e2e-az-EastWest and whose value is either e2e-az-East or e2e-az-West is preferred for the pod: Example pod configuration file with a node affinity preferred rule apiVersion: v1 kind: Pod metadata: name: with-node-affinity spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault affinity: nodeAffinity: 1 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 1 3 preference: matchExpressions: - key: e2e-az-EastWest 4 operator: In 5 values: - e2e-az-East 6 - e2e-az-West 7 containers: - name: with-node-affinity image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] # ... 1 The stanza to configure node affinity. 2 Defines a preferred rule. 3 Specifies a weight for a preferred rule. The node with highest weight is preferred. 4 6 7 The key/value pair (label) that must be matched to apply the rule. 5 The operator represents the relationship between the label on the node and the set of values in the matchExpression parameters in the Pod spec. This value can be In , NotIn , Exists , or DoesNotExist , Lt , or Gt . There is no explicit node anti-affinity concept, but using the NotIn or DoesNotExist operator replicates that behavior. Note If you are using node affinity and node selectors in the same pod configuration, note the following: If you configure both nodeSelector and nodeAffinity , both conditions must be satisfied for the pod to be scheduled onto a candidate node. If you specify multiple nodeSelectorTerms associated with nodeAffinity types, then the pod can be scheduled onto a node if one of the nodeSelectorTerms is satisfied. If you specify multiple matchExpressions associated with nodeSelectorTerms , then the pod can be scheduled onto a node only if all matchExpressions are satisfied. 4.4.2. Configuring a required node affinity rule Required rules must be met before a pod can be scheduled on a node. 
Procedure The following steps demonstrate a simple configuration that creates a node and a pod that the scheduler is required to place on the node. Add a label to a node using the oc label node command: USD oc label node node1 e2e-az-name=e2e-az1 Tip You can alternatively apply the following YAML to add the label: kind: Node apiVersion: v1 metadata: name: <node_name> labels: e2e-az-name: e2e-az1 #... Create a pod with a specific label in the pod spec: Create a YAML file with the following content: Note You cannot add an affinity directly to a scheduled pod. Example output apiVersion: v1 kind: Pod metadata: name: s1 spec: affinity: 1 nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: 2 nodeSelectorTerms: - matchExpressions: - key: e2e-az-name 3 values: - e2e-az1 - e2e-az2 operator: In 4 #... 1 Adds a pod affinity. 2 Configures the requiredDuringSchedulingIgnoredDuringExecution parameter. 3 Specifies the key and values that must be met. If you want the new pod to be scheduled on the node you edited, use the same key and values parameters as the label in the node. 4 Specifies an operator . The operator can be In , NotIn , Exists , or DoesNotExist . For example, use the operator In to require the label to be in the node. Create the pod: USD oc create -f <file-name>.yaml 4.4.3. Configuring a preferred node affinity rule Preferred rules specify that, if the rule is met, the scheduler tries to enforce the rules, but does not guarantee enforcement. Procedure The following steps demonstrate a simple configuration that creates a node and a pod that the scheduler tries to place on the node. Add a label to a node using the oc label node command: USD oc label node node1 e2e-az-name=e2e-az3 Create a pod with a specific label: Create a YAML file with the following content: Note You cannot add an affinity directly to a scheduled pod. apiVersion: v1 kind: Pod metadata: name: s1 spec: affinity: 1 nodeAffinity: preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 3 preference: matchExpressions: - key: e2e-az-name 4 values: - e2e-az3 operator: In 5 #... 1 Adds a pod affinity. 2 Configures the preferredDuringSchedulingIgnoredDuringExecution parameter. 3 Specifies a weight for the node, as a number 1-100. The node with highest weight is preferred. 4 Specifies the key and values that must be met. If you want the new pod to be scheduled on the node you edited, use the same key and values parameters as the label in the node. 5 Specifies an operator . The operator can be In , NotIn , Exists , or DoesNotExist . For example, use the operator In to require the label to be in the node. Create the pod. USD oc create -f <file-name>.yaml 4.4.4. Sample node affinity rules The following examples demonstrate node affinity. 4.4.4.1. Node affinity with matching labels The following example demonstrates node affinity for a node and pod with matching labels: The Node1 node has the label zone:us : USD oc label node node1 zone=us Tip You can alternatively apply the following YAML to add the label: kind: Node apiVersion: v1 metadata: name: <node_name> labels: zone: us #... 
The pod-s1 pod has the zone and us key/value pair under a required node affinity rule: USD cat pod-s1.yaml Example output apiVersion: v1 kind: Pod metadata: name: pod-s1 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - image: "docker.io/ocpqe/hello-pod" name: hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: "zone" operator: In values: - us #... The pod-s1 pod can be scheduled on Node1: USD oc get pod -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE pod-s1 1/1 Running 0 4m IP1 node1 4.4.4.2. Node affinity with no matching labels The following example demonstrates node affinity for a node and pod without matching labels: The Node1 node has the label zone:emea : USD oc label node node1 zone=emea Tip You can alternatively apply the following YAML to add the label: kind: Node apiVersion: v1 metadata: name: <node_name> labels: zone: emea #... The pod-s1 pod has the zone and us key/value pair under a required node affinity rule: USD cat pod-s1.yaml Example output apiVersion: v1 kind: Pod metadata: name: pod-s1 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - image: "docker.io/ocpqe/hello-pod" name: hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: "zone" operator: In values: - us #... The pod-s1 pod cannot be scheduled on Node1: USD oc describe pod pod-s1 Example output ... Events: FirstSeen LastSeen Count From SubObjectPath Type Reason --------- -------- ----- ---- ------------- -------- ------ 1m 33s 8 default-scheduler Warning FailedScheduling No nodes are available that match all of the following predicates:: MatchNodeSelector (1). 4.4.5. Using node affinity to control where an Operator is installed By default, when you install an Operator, OpenShift Container Platform installs the Operator pod to one of your worker nodes randomly. However, there might be situations where you want that pod scheduled on a specific node or set of nodes. The following examples describe situations where you might want to schedule an Operator pod to a specific node or set of nodes: If an Operator requires a particular platform, such as amd64 or arm64 If an Operator requires a particular operating system, such as Linux or Windows If you want Operators that work together scheduled on the same host or on hosts located on the same rack If you want Operators dispersed throughout the infrastructure to avoid downtime due to network or hardware issues You can control where an Operator pod is installed by adding a node affinity constraints to the Operator's Subscription object. 
The following examples show how to use node affinity to install an instance of the Custom Metrics Autoscaler Operator to a specific node in the cluster: Node affinity example that places the Operator pod on a specific node apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: nodeAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-163-94.us-west-2.compute.internal #... 1 A node affinity that requires the Operator's pod to be scheduled on a node named ip-10-0-163-94.us-west-2.compute.internal . Node affinity example that places the Operator pod on a node with a specific platform apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: nodeAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/arch operator: In values: - arm64 - key: kubernetes.io/os operator: In values: - linux #... 1 A node affinity that requires the Operator's pod to be scheduled on a node with the kubernetes.io/arch=arm64 and kubernetes.io/os=linux labels. Procedure To control the placement of an Operator pod, complete the following steps: Install the Operator as usual. If needed, ensure that your nodes are labeled to properly respond to the affinity. Edit the Operator Subscription object to add an affinity: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: 1 nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-185-229.ec2.internal #... 1 Add a nodeAffinity . Verification To ensure that the pod is deployed on the specific node, run the following command: USD oc get pods -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES custom-metrics-autoscaler-operator-5dcc45d656-bhshg 1/1 Running 0 50s 10.131.0.20 ip-10-0-185-229.ec2.internal <none> <none> 4.4.6. Additional resources Understanding how to update labels on nodes 4.5. Placing pods onto overcommited nodes In an overcommited state, the sum of the container compute resource requests and limits exceeds the resources available on the system. Overcommitment might be desirable in development environments where a trade-off of guaranteed performance for capacity is acceptable. Requests and limits enable administrators to allow and manage the overcommitment of resources on a node. The scheduler uses requests for scheduling your container and providing a minimum service guarantee. Limits constrain the amount of compute resource that may be consumed on your node. 4.5.1. Understanding overcommitment Requests and limits enable administrators to allow and manage the overcommitment of resources on a node. The scheduler uses requests for scheduling your container and providing a minimum service guarantee. Limits constrain the amount of compute resource that may be consumed on your node. 
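As an illustration only (the pod name, image tag placement, and resource values below are assumptions for the sketch, not content from the original), a container whose requests are lower than its limits is exactly what makes overcommitment possible: the scheduler reserves only the requested amount, while the container may consume up to its limit.
apiVersion: v1
kind: Pod
metadata:
  name: overcommit-example
spec:
  containers:
  - name: app
    image: docker.io/ocpqe/hello-pod
    resources:
      requests:        # the scheduler places the pod based on these values
        cpu: 250m
        memory: 256Mi
      limits:          # the node enforces these ceilings at runtime
        cpu: "1"
        memory: 1Gi
If many such pods on a node burst toward their limits at the same time, the node is overcommitted even though every pod was schedulable by its requests.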
OpenShift Container Platform administrators can control the level of overcommit and manage container density on nodes by configuring masters to override the ratio between request and limit set on developer containers. In conjunction with a per-project LimitRange object specifying limits and defaults, this adjusts the container limit and request to achieve the desired level of overcommit. Note That these overrides have no effect if no limits have been set on containers. Create a LimitRange object with default limits, per individual project, or in the project template, to ensure that the overrides apply. After these overrides, the container limits and requests must still be validated by any LimitRange object in the project. It is possible, for example, for developers to specify a limit close to the minimum limit, and have the request then be overridden below the minimum limit, causing the pod to be forbidden. This unfortunate user experience should be addressed with future work, but for now, configure this capability and LimitRange objects with caution. 4.5.2. Understanding nodes overcommitment In an overcommitted environment, it is important to properly configure your node to provide best system behavior. When the node starts, it ensures that the kernel tunable flags for memory management are set properly. The kernel should never fail memory allocations unless it runs out of physical memory. To ensure this behavior, OpenShift Container Platform configures the kernel to always overcommit memory by setting the vm.overcommit_memory parameter to 1 , overriding the default operating system setting. OpenShift Container Platform also configures the kernel not to panic when it runs out of memory by setting the vm.panic_on_oom parameter to 0 . A setting of 0 instructs the kernel to call oom_killer in an Out of Memory (OOM) condition, which kills processes based on priority. You can view the current setting by running the following commands on your nodes: USD sysctl -a |grep commit Example output #... vm.overcommit_memory = 0 #... USD sysctl -a |grep panic Example output #... vm.panic_on_oom = 0 #... Note The above flags should already be set on nodes, and no further action is required. You can also perform the following configurations for each node: Disable or enforce CPU limits using CPU CFS quotas Reserve resources for system processes Reserve memory across quality of service tiers 4.6. Controlling pod placement using node taints Taints and tolerations allow the node to control which pods should (or should not) be scheduled on them. 4.6.1. Understanding taints and tolerations A taint allows a node to refuse a pod to be scheduled unless that pod has a matching toleration . You apply taints to a node through the Node specification ( NodeSpec ) and apply tolerations to a pod through the Pod specification ( PodSpec ). When you apply a taint to a node, the scheduler cannot place a pod on that node unless the pod can tolerate the taint. Example taint in a node specification apiVersion: v1 kind: Node metadata: name: my-node #... spec: taints: - effect: NoExecute key: key1 value: value1 #... Example toleration in a Pod spec apiVersion: v1 kind: Pod metadata: name: my-pod #... spec: tolerations: - key: "key1" operator: "Equal" value: "value1" effect: "NoExecute" tolerationSeconds: 3600 #... Taints and tolerations consist of a key, value, and effect. Table 4.1. Taint and toleration components Parameter Description key The key is any string, up to 253 characters. 
The key must begin with a letter or number, and may contain letters, numbers, hyphens, dots, and underscores. value The value is any string, up to 63 characters. The value must begin with a letter or number, and may contain letters, numbers, hyphens, dots, and underscores. effect The effect is one of the following: NoSchedule [1] New pods that do not match the taint are not scheduled onto that node. Existing pods on the node remain. PreferNoSchedule New pods that do not match the taint might be scheduled onto that node, but the scheduler tries not to. Existing pods on the node remain. NoExecute New pods that do not match the taint cannot be scheduled onto that node. Existing pods on the node that do not have a matching toleration are removed. operator Equal The key / value / effect parameters must match. This is the default. Exists The key / effect parameters must match. You must leave a blank value parameter, which matches any. If you add a NoSchedule taint to a control plane node, the node must have the node-role.kubernetes.io/master=:NoSchedule taint, which is added by default. For example: apiVersion: v1 kind: Node metadata: annotations: machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c name: my-node #... spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/master #... A toleration matches a taint: If the operator parameter is set to Equal : the key parameters are the same; the value parameters are the same; the effect parameters are the same. If the operator parameter is set to Exists : the key parameters are the same; the effect parameters are the same. The following taints are built into OpenShift Container Platform: node.kubernetes.io/not-ready : The node is not ready. This corresponds to the node condition Ready=False . node.kubernetes.io/unreachable : The node is unreachable from the node controller. This corresponds to the node condition Ready=Unknown . node.kubernetes.io/memory-pressure : The node has memory pressure issues. This corresponds to the node condition MemoryPressure=True . node.kubernetes.io/disk-pressure : The node has disk pressure issues. This corresponds to the node condition DiskPressure=True . node.kubernetes.io/network-unavailable : The node network is unavailable. node.kubernetes.io/unschedulable : The node is unschedulable. node.cloudprovider.kubernetes.io/uninitialized : When the node controller is started with an external cloud provider, this taint is set on a node to mark it as unusable. After a controller from the cloud-controller-manager initializes this node, the kubelet removes this taint. node.kubernetes.io/pid-pressure : The node has pid pressure. This corresponds to the node condition PIDPressure=True . Important OpenShift Container Platform does not set a default pid.available evictionHard . 4.6.1.1. Understanding how to use toleration seconds to delay pod evictions You can specify how long a pod can remain bound to a node before being evicted by specifying the tolerationSeconds parameter in the Pod specification or MachineSet object. If a taint with the NoExecute effect is added to a node, a pod that tolerates the taint and that has the tolerationSeconds parameter set is not evicted until that time period expires. Example output apiVersion: v1 kind: Pod metadata: name: my-pod #... spec: tolerations: - key: "key1" operator: "Equal" value: "value1" effect: "NoExecute" tolerationSeconds: 3600 #...
Here, if this pod is running and a matching taint is added to the node, the pod stays bound to the node for 3,600 seconds and is then evicted. If the taint is removed before that time, the pod is not evicted. 4.6.1.2. Understanding how to use multiple taints You can put multiple taints on the same node and multiple tolerations on the same pod. OpenShift Container Platform processes multiple taints and tolerations as follows: Process the taints for which the pod has a matching toleration. The remaining unmatched taints have the indicated effects on the pod: If there is at least one unmatched taint with effect NoSchedule , OpenShift Container Platform cannot schedule a pod onto that node. If there is no unmatched taint with effect NoSchedule but there is at least one unmatched taint with effect PreferNoSchedule , OpenShift Container Platform tries to not schedule the pod onto the node. If there is at least one unmatched taint with effect NoExecute , OpenShift Container Platform evicts the pod from the node if it is already running on the node, or the pod is not scheduled onto the node if it is not yet running on the node. Pods that do not tolerate the taint are evicted immediately. Pods that tolerate the taint without specifying tolerationSeconds in their Pod specification remain bound forever. Pods that tolerate the taint with a specified tolerationSeconds remain bound for the specified amount of time. For example: Add the following taints to the node: USD oc adm taint nodes node1 key1=value1:NoSchedule USD oc adm taint nodes node1 key1=value1:NoExecute USD oc adm taint nodes node1 key2=value2:NoSchedule The pod has the following tolerations: apiVersion: v1 kind: Pod metadata: name: my-pod #... spec: tolerations: - key: "key1" operator: "Equal" value: "value1" effect: "NoSchedule" - key: "key1" operator: "Equal" value: "value1" effect: "NoExecute" #... In this case, the pod cannot be scheduled onto the node, because there is no toleration matching the third taint. The pod continues running if it is already running on the node when the taint is added, because the third taint is the only one of the three that is not tolerated by the pod. 4.6.1.3. Understanding pod scheduling and node conditions (taint node by condition) The Taint Nodes By Condition feature, which is enabled by default, automatically taints nodes that report conditions such as memory pressure and disk pressure. If a node reports a condition, a taint is added until the condition clears. The taints have the NoSchedule effect, which means no pod can be scheduled on the node unless the pod has a matching toleration. The scheduler checks for these taints on nodes before scheduling pods. If the taint is present, the pod is scheduled on a different node. Because the scheduler checks for taints and not the actual node conditions, you configure the scheduler to ignore some of these node conditions by adding appropriate pod tolerations. To ensure backward compatibility, the daemon set controller automatically adds the following tolerations to all daemons: node.kubernetes.io/memory-pressure node.kubernetes.io/disk-pressure node.kubernetes.io/unschedulable (1.10 or later) node.kubernetes.io/network-unavailable (host network only) You can also add arbitrary tolerations to daemon sets. Note The control plane also adds the node.kubernetes.io/memory-pressure toleration on pods that have a QoS class. This is because Kubernetes manages pods in the Guaranteed or Burstable QoS classes.
The new BestEffort pods do not get scheduled onto the affected node. 4.6.1.4. Understanding evicting pods by condition (taint-based evictions) The Taint-Based Evictions feature, which is enabled by default, evicts pods from a node that experiences specific conditions, such as not-ready and unreachable . When a node experiences one of these conditions, OpenShift Container Platform automatically adds taints to the node, and starts evicting and rescheduling the pods on different nodes. Taint Based Evictions have a NoExecute effect, where any pod that does not tolerate the taint is evicted immediately and any pod that does tolerate the taint will never be evicted, unless the pod uses the tolerationSeconds parameter. The tolerationSeconds parameter allows you to specify how long a pod stays bound to a node that has a node condition. If the condition still exists after the tolerationSeconds period, the taint remains on the node and the pods with a matching toleration are evicted. If the condition clears before the tolerationSeconds period, pods with matching tolerations are not removed. If you use the tolerationSeconds parameter with no value, pods are never evicted because of the not ready and unreachable node conditions. Note OpenShift Container Platform evicts pods in a rate-limited way to prevent massive pod evictions in scenarios such as the master becoming partitioned from the nodes. By default, if more than 55% of nodes in a given zone are unhealthy, the node lifecycle controller changes that zone's state to PartialDisruption and the rate of pod evictions is reduced. For small clusters (by default, 50 nodes or less) in this state, nodes in this zone are not tainted and evictions are stopped. For more information, see Rate limits on eviction in the Kubernetes documentation. OpenShift Container Platform automatically adds a toleration for node.kubernetes.io/not-ready and node.kubernetes.io/unreachable with tolerationSeconds=300 , unless the Pod configuration specifies either toleration. apiVersion: v1 kind: Pod metadata: name: my-pod #... spec: tolerations: - key: node.kubernetes.io/not-ready operator: Exists effect: NoExecute tolerationSeconds: 300 1 - key: node.kubernetes.io/unreachable operator: Exists effect: NoExecute tolerationSeconds: 300 #... 1 These tolerations ensure that the default pod behavior is to remain bound for five minutes after one of these node conditions problems is detected. You can configure these tolerations as needed. For example, if you have an application with a lot of local state, you might want to keep the pods bound to node for a longer time in the event of network partition, allowing for the partition to recover and avoiding pod eviction. Pods spawned by a daemon set are created with NoExecute tolerations for the following taints with no tolerationSeconds : node.kubernetes.io/unreachable node.kubernetes.io/not-ready As a result, daemon set pods are never evicted because of these node conditions. 4.6.1.5. Tolerating all taints You can configure a pod to tolerate all taints by adding an operator: "Exists" toleration with no key and values parameters. Pods with this toleration are not removed from a node that has taints. Pod spec for tolerating all taints apiVersion: v1 kind: Pod metadata: name: my-pod #... spec: tolerations: - operator: "Exists" #... 4.6.2. Adding taints and tolerations You add tolerations to pods and taints to nodes to allow the node to control which pods should or should not be scheduled on them. 
For existing pods and nodes, you should add the toleration to the pod first, then add the taint to the node to avoid pods being removed from the node before you can add the toleration. Procedure Add a toleration to a pod by editing the Pod spec to include a tolerations stanza: Sample pod configuration file with an Equal operator apiVersion: v1 kind: Pod metadata: name: my-pod #... spec: tolerations: - key: "key1" 1 value: "value1" operator: "Equal" effect: "NoExecute" tolerationSeconds: 3600 2 #... 1 The toleration parameters, as described in the Taint and toleration components table. 2 The tolerationSeconds parameter specifies how long a pod can remain bound to a node before being evicted. For example: Sample pod configuration file with an Exists operator apiVersion: v1 kind: Pod metadata: name: my-pod #... spec: tolerations: - key: "key1" operator: "Exists" 1 effect: "NoExecute" tolerationSeconds: 3600 #... 1 The Exists operator does not take a value . This example places a taint on node1 that has key key1 , value value1 , and taint effect NoExecute . Add a taint to a node by using the following command with the parameters described in the Taint and toleration components table: USD oc adm taint nodes <node_name> <key>=<value>:<effect> For example: USD oc adm taint nodes node1 key1=value1:NoExecute This command places a taint on node1 that has key key1 , value value1 , and effect NoExecute . Note If you add a NoSchedule taint to a control plane node, the node must have the node-role.kubernetes.io/master=:NoSchedule taint, which is added by default. For example: apiVersion: v1 kind: Node metadata: annotations: machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c name: my-node #... spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/master #... The tolerations on the pod match the taint on the node. A pod with either toleration can be scheduled onto node1 . 4.6.2.1. Adding taints and tolerations using a compute machine set You can add taints to nodes using a compute machine set. All nodes associated with the MachineSet object are updated with the taint. Tolerations respond to taints added by a compute machine set in the same manner as taints added directly to the nodes. Procedure Add a toleration to a pod by editing the Pod spec to include a tolerations stanza: Sample pod configuration file with Equal operator apiVersion: v1 kind: Pod metadata: name: my-pod #... spec: tolerations: - key: "key1" 1 value: "value1" operator: "Equal" effect: "NoExecute" tolerationSeconds: 3600 2 #... 1 The toleration parameters, as described in the Taint and toleration components table. 2 The tolerationSeconds parameter specifies how long a pod is bound to a node before being evicted. For example: Sample pod configuration file with Exists operator apiVersion: v1 kind: Pod metadata: name: my-pod #... spec: tolerations: - key: "key1" operator: "Exists" effect: "NoExecute" tolerationSeconds: 3600 #... Add the taint to the MachineSet object: Edit the MachineSet YAML for the nodes you want to taint or you can create a new MachineSet object: USD oc edit machineset <machineset> Add the taint to the spec.template.spec section: Example taint in a compute machine set specification apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: my-machineset #... spec: #... template: #... spec: taints: - effect: NoExecute key: key1 value: value1 #... 
This example places a taint that has the key key1 , value value1 , and taint effect NoExecute on the nodes. Scale down the compute machine set to 0: USD oc scale --replicas=0 machineset <machineset> -n openshift-machine-api Tip You can alternatively apply the following YAML to scale the compute machine set: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 0 Wait for the machines to be removed. Scale up the compute machine set as needed: USD oc scale --replicas=2 machineset <machineset> -n openshift-machine-api Or: USD oc edit machineset <machineset> -n openshift-machine-api Wait for the machines to start. The taint is added to the nodes associated with the MachineSet object. 4.6.2.2. Binding a user to a node using taints and tolerations If you want to dedicate a set of nodes for exclusive use by a particular set of users, add a toleration to their pods. Then, add a corresponding taint to those nodes. The pods with the tolerations are allowed to use the tainted nodes or any other nodes in the cluster. If you want to ensure that the pods are scheduled only to those tainted nodes, also add a label to the same set of nodes and add a node affinity to the pods so that the pods can only be scheduled onto nodes with that label. Procedure To configure a node so that users can use only that node: Add a corresponding taint to those nodes: For example: USD oc adm taint nodes node1 dedicated=groupName:NoSchedule Tip You can alternatively apply the following YAML to add the taint: kind: Node apiVersion: v1 metadata: name: my-node #... spec: taints: - key: dedicated value: groupName effect: NoSchedule #... Add a toleration to the pods by writing a custom admission controller. 4.6.2.3. Creating a project with a node selector and toleration You can create a project that uses a node selector and toleration, which are set as annotations, to control the placement of pods onto specific nodes. Any subsequent resources created in the project are then scheduled on nodes that have a taint matching the toleration. Prerequisites A label for node selection has been added to one or more nodes by using a compute machine set or editing the node directly. A taint has been added to one or more nodes by using a compute machine set or editing the node directly. Procedure Create a Project resource definition, specifying a node selector and toleration in the metadata.annotations section: Example project.yaml file kind: Project apiVersion: project.openshift.io/v1 metadata: name: <project_name> 1 annotations: openshift.io/node-selector: '<label>' 2 scheduler.alpha.kubernetes.io/defaultTolerations: >- [{"operator": "Exists", "effect": "NoSchedule", "key": "<key_name>"} 3 ] 1 The project name. 2 The default node selector label. 3 The toleration parameters, as described in the Taint and toleration components table. This example uses the NoSchedule effect, which allows existing pods on the node to remain, and the Exists operator, which does not take a value. Use the oc apply command to create the project: USD oc apply -f project.yaml Any subsequent resources created in the <project_name> namespace should now be scheduled on the specified nodes. Additional resources Adding taints and tolerations manually to nodes or with compute machine sets Creating project-wide node selectors Pod placement of Operator workloads 4.6.2.4.
Controlling nodes with special hardware using taints and tolerations In a cluster where a small subset of nodes have specialized hardware, you can use taints and tolerations to keep pods that do not need the specialized hardware off of those nodes, leaving the nodes for pods that do need the specialized hardware. You can also require pods that need specialized hardware to use specific nodes. You can achieve this by adding a toleration to pods that need the special hardware and tainting the nodes that have the specialized hardware. Procedure To ensure nodes with specialized hardware are reserved for specific pods: Add a toleration to pods that need the special hardware. For example: apiVersion: v1 kind: Pod metadata: name: my-pod #... spec: tolerations: - key: "disktype" value: "ssd" operator: "Equal" effect: "NoSchedule" tolerationSeconds: 3600 #... Taint the nodes that have the specialized hardware using one of the following commands: USD oc adm taint nodes <node-name> disktype=ssd:NoSchedule Or: USD oc adm taint nodes <node-name> disktype=ssd:PreferNoSchedule Tip You can alternatively apply the following YAML to add the taint: kind: Node apiVersion: v1 metadata: name: my_node #... spec: taints: - key: disktype value: ssd effect: PreferNoSchedule #... 4.6.3. Removing taints and tolerations You can remove taints from nodes and tolerations from pods as needed. You should add the toleration to the pod first, then add the taint to the node to avoid pods being removed from the node before you can add the toleration. Procedure To remove taints and tolerations: To remove a taint from a node: USD oc adm taint nodes <node-name> <key>- For example: USD oc adm taint nodes ip-10-0-132-248.ec2.internal key1- Example output node/ip-10-0-132-248.ec2.internal untainted To remove a toleration from a pod, edit the Pod spec to remove the toleration: apiVersion: v1 kind: Pod metadata: name: my-pod #... spec: tolerations: - key: "key2" operator: "Exists" effect: "NoExecute" tolerationSeconds: 3600 #... 4.7. Placing pods on specific nodes using node selectors A node selector specifies a map of key/value pairs that are defined using custom labels on nodes and selectors specified in pods. For the pod to be eligible to run on a node, the pod must have the same key/value node selector as the label on the node. 4.7.1. About node selectors You can use node selectors on pods and labels on nodes to control where the pod is scheduled. With node selectors, OpenShift Container Platform schedules the pods on nodes that contain matching labels. You can use a node selector to place specific pods on specific nodes, cluster-wide node selectors to place new pods on specific nodes anywhere in the cluster, and project node selectors to place new pods in a project on specific nodes. For example, as a cluster administrator, you can create an infrastructure where application developers can deploy pods only onto the nodes closest to their geographical location by including a node selector in every pod they create. In this example, the cluster consists of five data centers spread across two regions. In the U.S., label the nodes as us-east , us-central , or us-west . In the Asia-Pacific region (APAC), label the nodes as apac-east or apac-west . The developers can add a node selector to the pods they create to ensure the pods get scheduled on those nodes. A pod is not scheduled if the Pod object contains a node selector, but no node has a matching label. 
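For example, the data center scenario above could be set up by labeling a node and adding the matching node selector to a pod (a minimal sketch; the node name, the datacenter label key, and the pod name are illustrative):
USD oc label node <node_name> datacenter=us-east
apiVersion: v1
kind: Pod
metadata:
  name: geo-pinned-pod
spec:
  nodeSelector:
    datacenter: us-east   # the pod is only scheduled on nodes carrying this label
  containers:
  - name: hello-pod
    image: "docker.io/ocpqe/hello-pod"
If no node carries the datacenter=us-east label, the pod remains in the Pending state.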
Important If you are using node selectors and node affinity in the same pod configuration, the following rules control pod placement onto nodes: If you configure both nodeSelector and nodeAffinity , both conditions must be satisfied for the pod to be scheduled onto a candidate node. If you specify multiple nodeSelectorTerms associated with nodeAffinity types, then the pod can be scheduled onto a node if one of the nodeSelectorTerms is satisfied. If you specify multiple matchExpressions associated with nodeSelectorTerms , then the pod can be scheduled onto a node only if all matchExpressions are satisfied. Node selectors on specific pods and nodes You can control which node a specific pod is scheduled on by using node selectors and labels. To use node selectors and labels, first label the node to avoid pods being descheduled, then add the node selector to the pod. Note You cannot add a node selector directly to an existing scheduled pod. You must label the object that controls the pod, such as deployment config. For example, the following Node object has the region: east label: Sample Node object with a label kind: Node apiVersion: v1 metadata: name: ip-10-0-131-14.ec2.internal selfLink: /api/v1/nodes/ip-10-0-131-14.ec2.internal uid: 7bc2580a-8b8e-11e9-8e01-021ab4174c74 resourceVersion: '478704' creationTimestamp: '2019-06-10T14:46:08Z' labels: kubernetes.io/os: linux topology.kubernetes.io/zone: us-east-1a node.openshift.io/os_version: '4.5' node-role.kubernetes.io/worker: '' topology.kubernetes.io/region: us-east-1 node.openshift.io/os_id: rhcos node.kubernetes.io/instance-type: m4.large kubernetes.io/hostname: ip-10-0-131-14 kubernetes.io/arch: amd64 region: east 1 type: user-node #... 1 Labels to match the pod node selector. A pod has the type: user-node,region: east node selector: Sample Pod object with node selectors apiVersion: v1 kind: Pod metadata: name: s1 #... spec: nodeSelector: 1 region: east type: user-node #... 1 Node selectors to match the node label. The node must have a label for each node selector. When you create the pod using the example pod spec, it can be scheduled on the example node. Default cluster-wide node selectors With default cluster-wide node selectors, when you create a pod in that cluster, OpenShift Container Platform adds the default node selectors to the pod and schedules the pod on nodes with matching labels. For example, the following Scheduler object has the default cluster-wide region=east and type=user-node node selectors: Example Scheduler Operator Custom Resource apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster #... spec: defaultNodeSelector: type=user-node,region=east #... A node in that cluster has the type=user-node,region=east labels: Example Node object apiVersion: v1 kind: Node metadata: name: ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 #... labels: region: east type: user-node #... Example Pod object with a node selector apiVersion: v1 kind: Pod metadata: name: s1 #... spec: nodeSelector: region: east #... When you create the pod using the example pod spec in the example cluster, the pod is created with the cluster-wide node selector and is scheduled on the labeled node: Example pod list with the pod on the labeled node NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod-s1 1/1 Running 0 20s 10.131.2.6 ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 <none> <none> Note If the project where you create the pod has a project node selector, that selector takes preference over a cluster-wide node selector. 
Your pod is not created or scheduled if the pod does not have the project node selector. Project node selectors With project node selectors, when you create a pod in this project, OpenShift Container Platform adds the node selectors to the pod and schedules the pods on a node with matching labels. If there is a cluster-wide default node selector, a project node selector takes preference. For example, the following project has the region=east node selector: Example Namespace object apiVersion: v1 kind: Namespace metadata: name: east-region annotations: openshift.io/node-selector: "region=east" #... The following node has the type=user-node,region=east labels: Example Node object apiVersion: v1 kind: Node metadata: name: ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 #... labels: region: east type: user-node #... When you create the pod using the example pod spec in this example project, the pod is created with the project node selectors and is scheduled on the labeled node: Example Pod object apiVersion: v1 kind: Pod metadata: namespace: east-region #... spec: nodeSelector: region: east type: user-node #... Example pod list with the pod on the labeled node NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod-s1 1/1 Running 0 20s 10.131.2.6 ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 <none> <none> A pod in the project is not created or scheduled if the pod contains different node selectors. For example, if you deploy the following pod into the example project, it is not created: Example Pod object with an invalid node selector apiVersion: v1 kind: Pod metadata: name: west-region #... spec: nodeSelector: region: west #... 4.7.2. Using node selectors to control pod placement You can use node selectors on pods and labels on nodes to control where the pod is scheduled. With node selectors, OpenShift Container Platform schedules the pods on nodes that contain matching labels. You add labels to a node, a compute machine set, or a machine config. Adding the label to the compute machine set ensures that if the node or machine goes down, new nodes have the label. Labels added to a node or machine config do not persist if the node or machine goes down. To add node selectors to an existing pod, add a node selector to the controlling object for that pod, such as a ReplicaSet object, DaemonSet object, StatefulSet object, Deployment object, or DeploymentConfig object. Any existing pods under that controlling object are recreated on a node with a matching label. If you are creating a new pod, you can add the node selector directly to the pod spec. If the pod does not have a controlling object, you must delete the pod, edit the pod spec, and recreate the pod. Note You cannot add a node selector directly to an existing scheduled pod. Prerequisites To add a node selector to existing pods, determine the controlling object for that pod. For example, the router-default-66d5cf9464-7pwkc pod is controlled by the router-default-66d5cf9464 replica set: USD oc describe pod router-default-66d5cf9464-7pwkc Example output kind: Pod apiVersion: v1 metadata: # ... Name: router-default-66d5cf9464-7pwkc Namespace: openshift-ingress # ... Controlled By: ReplicaSet/router-default-66d5cf9464 # ... The web console lists the controlling object under ownerReferences in the pod YAML: apiVersion: v1 kind: Pod metadata: name: router-default-66d5cf9464-7pwkc # ...
ownerReferences: - apiVersion: apps/v1 kind: ReplicaSet name: router-default-66d5cf9464 uid: d81dd094-da26-11e9-a48a-128e7edf0312 controller: true blockOwnerDeletion: true # ... Procedure Add labels to a node by using a compute machine set or editing the node directly: Use a MachineSet object to add labels to nodes managed by the compute machine set when a node is created: Run the following command to add labels to a MachineSet object: USD oc patch MachineSet <name> --type='json' -p='[{"op":"add","path":"/spec/template/spec/metadata/labels", "value":{"<key>"="<value>","<key>"="<value>"}}]' -n openshift-machine-api For example: USD oc patch MachineSet abc612-msrtw-worker-us-east-1c --type='json' -p='[{"op":"add","path":"/spec/template/spec/metadata/labels", "value":{"type":"user-node","region":"east"}}]' -n openshift-machine-api Tip You can alternatively apply the following YAML to add labels to a compute machine set: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: xf2bd-infra-us-east-2a namespace: openshift-machine-api spec: template: spec: metadata: labels: region: "east" type: "user-node" # ... Verify that the labels are added to the MachineSet object by using the oc edit command: For example: USD oc edit MachineSet abc612-msrtw-worker-us-east-1c -n openshift-machine-api Example MachineSet object apiVersion: machine.openshift.io/v1beta1 kind: MachineSet # ... spec: # ... template: metadata: # ... spec: metadata: labels: region: east type: user-node # ... Add labels directly to a node: Edit the Node object for the node: USD oc label nodes <name> <key>=<value> For example, to label a node: USD oc label nodes ip-10-0-142-25.ec2.internal type=user-node region=east Tip You can alternatively apply the following YAML to add labels to a node: kind: Node apiVersion: v1 metadata: name: hello-node-6fbccf8d9 labels: type: "user-node" region: "east" # ... Verify that the labels are added to the node: USD oc get nodes -l type=user-node,region=east Example output NAME STATUS ROLES AGE VERSION ip-10-0-142-25.ec2.internal Ready worker 17m v1.30.3 Add the matching node selector to a pod: To add a node selector to existing and future pods, add a node selector to the controlling object for the pods: Example ReplicaSet object with labels kind: ReplicaSet apiVersion: apps/v1 metadata: name: hello-node-6fbccf8d9 # ... spec: # ... template: metadata: creationTimestamp: null labels: ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default pod-template-hash: 66d5cf9464 spec: nodeSelector: kubernetes.io/os: linux node-role.kubernetes.io/worker: '' type: user-node 1 # ... 1 Add the node selector. To add a node selector to a specific, new pod, add the selector to the Pod object directly: Example Pod object with a node selector apiVersion: v1 kind: Pod metadata: name: hello-node-6fbccf8d9 # ... spec: nodeSelector: region: east type: user-node # ... Note You cannot add a node selector directly to an existing scheduled pod. 4.7.3. Creating default cluster-wide node selectors You can use default cluster-wide node selectors on pods together with labels on nodes to constrain all pods created in a cluster to specific nodes. With cluster-wide node selectors, when you create a pod in that cluster, OpenShift Container Platform adds the default node selectors to the pod and schedules the pod on nodes with matching labels. You configure cluster-wide node selectors by editing the Scheduler Operator custom resource (CR). 
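Before editing the CR, you can check whether a default node selector is already defined (an illustrative check):
USD oc get scheduler cluster -o yaml
If the spec.defaultNodeSelector field is present in the output, a cluster-wide selector is already in effect.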
You add labels to a node, a compute machine set, or a machine config. Adding the label to the compute machine set ensures that if the node or machine goes down, new nodes have the label. Labels added to a node or machine config do not persist if the node or machine goes down. Note You can add additional key/value pairs to a pod. But you cannot add a different value for a default key. Procedure To add a default cluster-wide node selector: Edit the Scheduler Operator CR to add the default cluster-wide node selectors: USD oc edit scheduler cluster Example Scheduler Operator CR with a node selector apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster ... spec: defaultNodeSelector: type=user-node,region=east 1 mastersSchedulable: false 1 Add a node selector with the appropriate <key>:<value> pairs. After making this change, wait for the pods in the openshift-kube-apiserver project to redeploy. This can take several minutes. The default cluster-wide node selector does not take effect until the pods redeploy. Add labels to a node by using a compute machine set or editing the node directly: Use a compute machine set to add labels to nodes managed by the compute machine set when a node is created: Run the following command to add labels to a MachineSet object: USD oc patch MachineSet <name> --type='json' -p='[{"op":"add","path":"/spec/template/spec/metadata/labels", "value":{"<key>"="<value>","<key>"="<value>"}}]' -n openshift-machine-api 1 1 Add a <key>/<value> pair for each label. For example: USD oc patch MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c --type='json' -p='[{"op":"add","path":"/spec/template/spec/metadata/labels", "value":{"type":"user-node","region":"east"}}]' -n openshift-machine-api Tip You can alternatively apply the following YAML to add labels to a compute machine set: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: template: spec: metadata: labels: region: "east" type: "user-node" Verify that the labels are added to the MachineSet object by using the oc edit command: For example: USD oc edit MachineSet abc612-msrtw-worker-us-east-1c -n openshift-machine-api Example MachineSet object apiVersion: machine.openshift.io/v1beta1 kind: MachineSet ... spec: ... template: metadata: ... spec: metadata: labels: region: east type: user-node ... 
Redeploy the nodes associated with that compute machine set by scaling down to 0 and scaling up the nodes: For example: USD oc scale --replicas=0 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api USD oc scale --replicas=1 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api When the nodes are ready and available, verify that the label is added to the nodes by using the oc get command: USD oc get nodes -l <key>=<value> For example: USD oc get nodes -l type=user-node Example output NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-c-vmqzp Ready worker 61s v1.30.3 Add labels directly to a node: Edit the Node object for the node: USD oc label nodes <name> <key>=<value> For example, to label a node: USD oc label nodes ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 type=user-node region=east Tip You can alternatively apply the following YAML to add labels to a node: kind: Node apiVersion: v1 metadata: name: <node_name> labels: type: "user-node" region: "east" Verify that the labels are added to the node using the oc get command: USD oc get nodes -l <key>=<value>,<key>=<value> For example: USD oc get nodes -l type=user-node,region=east Example output NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 Ready worker 17m v1.30.3 4.7.4. Creating project-wide node selectors You can use node selectors in a project together with labels on nodes to constrain all pods created in that project to the labeled nodes. When you create a pod in this project, OpenShift Container Platform adds the node selectors to the pods in the project and schedules the pods on a node with matching labels in the project. If there is a cluster-wide default node selector, a project node selector takes preference. You add node selectors to a project by editing the Namespace object to add the openshift.io/node-selector parameter. You add labels to a node, a compute machine set, or a machine config. Adding the label to the compute machine set ensures that if the node or machine goes down, new nodes have the label. Labels added to a node or machine config do not persist if the node or machine goes down. A pod is not scheduled if the Pod object contains a node selector, but no project has a matching node selector. When you create a pod from that spec, you receive an error similar to the following message: Example error message Error from server (Forbidden): error when creating "pod.yaml": pods "pod-4" is forbidden: pod node label selector conflicts with its project node label selector Note You can add additional key/value pairs to a pod. But you cannot add a different value for a project key. Procedure To add a default project node selector: Create a namespace or edit an existing namespace to add the openshift.io/node-selector parameter: USD oc edit namespace <name> Example output apiVersion: v1 kind: Namespace metadata: annotations: openshift.io/node-selector: "type=user-node,region=east" 1 openshift.io/description: "" openshift.io/display-name: "" openshift.io/requester: kube:admin openshift.io/sa.scc.mcs: s0:c30,c5 openshift.io/sa.scc.supplemental-groups: 1000880000/10000 openshift.io/sa.scc.uid-range: 1000880000/10000 creationTimestamp: "2021-05-10T12:35:04Z" labels: kubernetes.io/metadata.name: demo name: demo resourceVersion: "145537" uid: 3f8786e3-1fcb-42e3-a0e3-e2ac54d15001 spec: finalizers: - kubernetes 1 Add the openshift.io/node-selector with the appropriate <key>:<value> pairs. 
Add labels to a node by using a compute machine set or editing the node directly: Use a MachineSet object to add labels to nodes managed by the compute machine set when a node is created: Run the following command to add labels to a MachineSet object: USD oc patch MachineSet <name> --type='json' -p='[{"op":"add","path":"/spec/template/spec/metadata/labels", "value":{"<key>"="<value>","<key>"="<value>"}}]' -n openshift-machine-api For example: USD oc patch MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c --type='json' -p='[{"op":"add","path":"/spec/template/spec/metadata/labels", "value":{"type":"user-node","region":"east"}}]' -n openshift-machine-api Tip You can alternatively apply the following YAML to add labels to a compute machine set: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: template: spec: metadata: labels: region: "east" type: "user-node" Verify that the labels are added to the MachineSet object by using the oc edit command: For example: USD oc edit MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: ... spec: ... template: metadata: ... spec: metadata: labels: region: east type: user-node Redeploy the nodes associated with that compute machine set: For example: USD oc scale --replicas=0 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api USD oc scale --replicas=1 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api When the nodes are ready and available, verify that the label is added to the nodes by using the oc get command: USD oc get nodes -l <key>=<value> For example: USD oc get nodes -l type=user-node,region=east Example output NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-c-vmqzp Ready worker 61s v1.30.3 Add labels directly to a node: Edit the Node object to add labels: USD oc label <resource> <name> <key>=<value> For example, to label a node: USD oc label nodes ci-ln-l8nry52-f76d1-hl7m7-worker-c-tgq49 type=user-node region=east Tip You can alternatively apply the following YAML to add labels to a node: kind: Node apiVersion: v1 metadata: name: <node_name> labels: type: "user-node" region: "east" Verify that the labels are added to the Node object using the oc get command: USD oc get nodes -l <key>=<value> For example: USD oc get nodes -l type=user-node,region=east Example output NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 Ready worker 17m v1.30.3 Additional resources Creating a project with a node selector and toleration 4.8. Controlling pod placement by using pod topology spread constraints You can use pod topology spread constraints to provide fine-grained control over the placement of your pods across nodes, zones, regions, or other user-defined topology domains. Distributing pods across failure domains can help to achieve high availability and more efficient resource utilization. 4.8.1. Example use cases As an administrator, I want my workload to automatically scale between two to fifteen pods. I want to ensure that when there are only two pods, they are not placed on the same node, to avoid a single point of failure. As an administrator, I want to distribute my pods evenly across multiple infrastructure zones to reduce latency and network costs. I want to ensure that my cluster can self-heal if issues arise. 4.8.2. 
Important considerations Pods in an OpenShift Container Platform cluster are managed by workload controllers such as deployments, stateful sets, or daemon sets. These controllers define the desired state for a group of pods, including how they are distributed and scaled across the nodes in the cluster. You should set the same pod topology spread constraints on all pods in a group to avoid confusion. When using a workload controller, such as a deployment, the pod template typically handles this for you. Mixing different pod topology spread constraints can make OpenShift Container Platform behavior confusing and troubleshooting more difficult. You can avoid this by ensuring that all nodes in a topology domain are consistently labeled. OpenShift Container Platform automatically populates well-known labels, such as kubernetes.io/hostname . This helps avoid the need for manual labeling of nodes. These labels provide essential topology information, ensuring consistent node labeling across the cluster. Only pods within the same namespace are matched and grouped together when spreading due to a constraint. You can specify multiple pod topology spread constraints, but you must ensure that they do not conflict with each other. All pod topology spread constraints must be satisfied for a pod to be placed. 4.8.3. Understanding skew and maxSkew Skew refers to the difference in the number of pods that match a specified label selector across different topology domains, such as zones or nodes. The skew is calculated for each domain by taking the absolute difference between the number of pods in that domain and the number of pods in the domain with the lowest amount of pods scheduled. Setting a maxSkew value guides the scheduler to maintain a balanced pod distribution. 4.8.3.1. Example skew calculation You have three zones (A, B, and C), and you want to distribute your pods evenly across these zones. If zone A has 5 pods, zone B has 3 pods, and zone C has 2 pods, to find the skew, you can subtract the number of pods in the domain with the lowest amount of pods scheduled from the number of pods currently in each zone. This means that the skew for zone A is 3, the skew for zone B is 1, and the skew for zone C is 0. 4.8.3.2. The maxSkew parameter The maxSkew parameter defines the maximum allowable difference, or skew, in the number of pods between any two topology domains. If maxSkew is set to 1 , the number of pods in any topology domain should not differ by more than 1 from any other domain. If the skew exceeds maxSkew , the scheduler attempts to place new pods in a way that reduces the skew, adhering to the constraints. Using the example skew calculation, the skew values exceed the default maxSkew value of 1 . The scheduler places new pods in zone B and zone C to reduce the skew and achieve a more balanced distribution, ensuring that no topology domain exceeds the skew of 1. 4.8.4. Example configurations for pod topology spread constraints You can specify which pods to group together, which topology domains they are spread among, and the acceptable skew. The following examples demonstrate pod topology spread constraint configurations. 
Example to distribute pods that match the specified labels based on their zone apiVersion: v1 kind: Pod metadata: name: my-pod labels: region: us-east spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault topologySpreadConstraints: - maxSkew: 1 1 topologyKey: topology.kubernetes.io/zone 2 whenUnsatisfiable: DoNotSchedule 3 labelSelector: 4 matchLabels: region: us-east 5 matchLabelKeys: - my-pod-label 6 containers: - image: "docker.io/ocpqe/hello-pod" name: hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] 1 The maximum difference in number of pods between any two topology domains. The default is 1 , and you cannot specify a value of 0 . 2 The key of a node label. Nodes with this key and identical value are considered to be in the same topology. 3 How to handle a pod if it does not satisfy the spread constraint. The default is DoNotSchedule , which tells the scheduler not to schedule the pod. Set to ScheduleAnyway to still schedule the pod, but the scheduler prioritizes honoring the skew to not make the cluster more imbalanced. 4 Pods that match this label selector are counted and recognized as a group when spreading to satisfy the constraint. Be sure to specify a label selector, otherwise no pods can be matched. 5 Be sure that this Pod spec also sets its labels to match this label selector if you want it to be counted properly in the future. 6 A list of pod label keys to select which pods to calculate spreading over. Example demonstrating a single pod topology spread constraint kind: Pod apiVersion: v1 metadata: name: my-pod labels: region: us-east spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault topologySpreadConstraints: - maxSkew: 1 topologyKey: topology.kubernetes.io/zone whenUnsatisfiable: DoNotSchedule labelSelector: matchLabels: region: us-east containers: - image: "docker.io/ocpqe/hello-pod" name: hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] The example defines a Pod spec with a one pod topology spread constraint. It matches on pods labeled region: us-east , distributes among zones, specifies a skew of 1 , and does not schedule the pod if it does not meet these requirements. Example demonstrating multiple pod topology spread constraints kind: Pod apiVersion: v1 metadata: name: my-pod-2 labels: region: us-east spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault topologySpreadConstraints: - maxSkew: 1 topologyKey: node whenUnsatisfiable: DoNotSchedule labelSelector: matchLabels: region: us-east - maxSkew: 1 topologyKey: rack whenUnsatisfiable: DoNotSchedule labelSelector: matchLabels: region: us-east containers: - image: "docker.io/ocpqe/hello-pod" name: hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] The example defines a Pod spec with two pod topology spread constraints. Both match on pods labeled region: us-east , specify a skew of 1 , and do not schedule the pod if it does not meet these requirements. The first constraint distributes pods based on a user-defined label node , and the second constraint distributes pods based on a user-defined label rack . Both constraints must be met for the pod to be scheduled. 4.8.5. Additional resources Understanding how to update labels on nodes 4.9. Descheduler 4.9.1. 
Descheduler overview While the scheduler is used to determine the most suitable node to host a new pod, the descheduler can be used to evict a running pod so that the pod can be rescheduled onto a more suitable node. 4.9.1.1. About the descheduler You can use the descheduler to evict pods based on specific strategies so that the pods can be rescheduled onto more appropriate nodes. You can benefit from descheduling running pods in situations such as the following: Nodes are underutilized or overutilized. Pod and node affinity requirements, such as taints or labels, have changed and the original scheduling decisions are no longer appropriate for certain nodes. Node failure requires pods to be moved. New nodes are added to clusters. Pods have been restarted too many times. Important The descheduler does not schedule replacement of evicted pods. The scheduler automatically performs this task for the evicted pods. When the descheduler decides to evict pods from a node, it employs the following general mechanism: Pods in the openshift-* and kube-system namespaces are never evicted. Critical pods with priorityClassName set to system-cluster-critical or system-node-critical are never evicted. Static, mirrored, or stand-alone pods that are not part of a replication controller, replica set, deployment, or job are never evicted because these pods will not be recreated. Pods associated with daemon sets are never evicted. Pods with local storage are never evicted. Best effort pods are evicted before burstable and guaranteed pods. All types of pods with the descheduler.alpha.kubernetes.io/evict annotation are eligible for eviction. This annotation is used to override checks that prevent eviction, and the user can select which pod is evicted. Users should know how and if the pod will be recreated. Pods subject to pod disruption budget (PDB) are not evicted if descheduling violates its pod disruption budget (PDB). The pods are evicted by using eviction subresource to handle PDB. 4.9.1.2. Descheduler profiles The following descheduler profiles are available: AffinityAndTaints This profile evicts pods that violate inter-pod anti-affinity, node affinity, and node taints. It enables the following strategies: RemovePodsViolatingInterPodAntiAffinity : removes pods that are violating inter-pod anti-affinity. RemovePodsViolatingNodeAffinity : removes pods that are violating node affinity. RemovePodsViolatingNodeTaints : removes pods that are violating NoSchedule taints on nodes. Pods with a node affinity type of requiredDuringSchedulingIgnoredDuringExecution are removed. TopologyAndDuplicates This profile evicts pods in an effort to evenly spread similar pods, or pods of the same topology domain, among nodes. It enables the following strategies: RemovePodsViolatingTopologySpreadConstraint : finds unbalanced topology domains and tries to evict pods from larger ones when DoNotSchedule constraints are violated. RemoveDuplicates : ensures that there is only one pod associated with a replica set, replication controller, deployment, or job running on same node. If there are more, those duplicate pods are evicted for better pod distribution in a cluster. Warning Do not enable TopologyAndDuplicates with any of the following profiles: SoftTopologyAndDuplicates or CompactAndScale . Enabling these profiles together results in a conflict. LifecycleAndUtilization This profile evicts long-running pods and balances resource usage between nodes. 
It enables the following strategies: RemovePodsHavingTooManyRestarts : removes pods whose containers have been restarted too many times. Pods where the sum of restarts over all containers (including Init Containers) is more than 100. LowNodeUtilization : finds nodes that are underutilized and evicts pods, if possible, from overutilized nodes in the hope that recreation of evicted pods will be scheduled on these underutilized nodes. A node is considered underutilized if its usage is below 20% for all thresholds (CPU, memory, and number of pods). A node is considered overutilized if its usage is above 50% for any of the thresholds (CPU, memory, and number of pods). Optionally, you can adjust these underutilized/overutilized threshold percentages by setting the Technology Preview field devLowNodeUtilizationThresholds to one the following values: Low for 10%/30%, Medium for 20%/50%, or High for 40%/70%. The default value is Medium . PodLifeTime : evicts pods that are too old. By default, pods that are older than 24 hours are removed. You can customize the pod lifetime value. Warning Do not enable LifecycleAndUtilization with any of the following profiles: LongLifecycle or CompactAndScale . Enabling these profiles together results in a conflict. SoftTopologyAndDuplicates This profile is the same as TopologyAndDuplicates , except that pods with soft topology constraints, such as whenUnsatisfiable: ScheduleAnyway , are also considered for eviction. Warning Do not enable both SoftTopologyAndDuplicates and TopologyAndDuplicates . Enabling both results in a conflict. EvictPodsWithLocalStorage This profile allows pods with local storage to be eligible for eviction. EvictPodsWithPVC This profile allows pods with persistent volume claims to be eligible for eviction. If you are using Kubernetes NFS Subdir External Provisioner , you must add an excluded namespace for the namespace where the provisioner is installed. CompactAndScale This profile enables the HighNodeUtilization strategy, which attempts to evict pods from underutilized nodes to allow a workload to run on a smaller set of nodes. A node is considered underutilized if its usage is below 20% for all thresholds (CPU, memory, and number of pods). Optionally, you can adjust the underutilized percentage by setting the Technology Preview field devHighNodeUtilizationThresholds to one the following values: Minimal for 10%, Modest for 20%, or Moderate for 30%. The default value is Modest . Warning Do not enable CompactAndScale with any of the following profiles: LifecycleAndUtilization , LongLifecycle , or TopologyAndDuplicates . Enabling these profiles together results in a conflict. LongLifecycle This profile balances resource usage between nodes and enables the following strategies: RemovePodsHavingTooManyRestarts : removes pods whose containers have been restarted too many times and pods where the sum of restarts over all containers (including Init Containers) is more than 100. Restarting the VM guest operating system does not increase this count. LowNodeUtilization : evicts pods from overutilized nodes when there are any underutilized nodes. The destination node for the evicted pod will be determined by the scheduler. A node is considered underutilized if its usage is below 20% for all thresholds (CPU, memory, and number of pods). A node is considered overutilized if its usage is above 50% for any of the thresholds (CPU, memory, and number of pods). 
Warning Do not enable LongLifecycle with any of the following profiles: LifecycleAndUtilization or CompactAndScale . Enabling these profiles together results in a conflict. 4.9.2. Kube Descheduler Operator release notes The Kube Descheduler Operator allows you to evict pods so that they can be rescheduled on more appropriate nodes. These release notes track the development of the Kube Descheduler Operator. For more information, see About the descheduler . 4.9.2.1. Release notes for Kube Descheduler Operator 5.1.1 Issued: 2 December 2024 The following advisory is available for the Kube Descheduler Operator 5.1.1: RHEA-2024:10118 4.9.2.1.1. New features and enhancements This release of the Kube Descheduler Operator updates the Kubernetes version to 1.31. 4.9.2.1.2. Bug fixes This release of the Kube Descheduler Operator addresses several Common Vulnerabilities and Exposures (CVEs). 4.9.2.2. Release notes for Kube Descheduler Operator 5.1.0 Issued: 23 October 2024 The following advisory is available for the Kube Descheduler Operator 5.1.0: RHSA-2024:6341 4.9.2.2.1. New features and enhancements Two new descheduler profiles are now available: CompactAndScale : This profile attempts to evict pods from underutilized nodes to allow a workload to run on a smaller set of nodes. LongLifecycle : This profile balances resource usage between nodes and enables the RemovePodsHavingTooManyRestarts and LowNodeUtilization strategies. For the CompactAndScale profile, you can use the Technology Preview field devHighNodeUtilizationThresholds to adjust the underutilized threshold value. 4.9.2.2.2. Bug fixes This release of the Kube Descheduler Operator addresses several Common Vulnerabilities and Exposures (CVEs). 4.9.3. Evicting pods using the descheduler You can run the descheduler in OpenShift Container Platform by installing the Kube Descheduler Operator and setting the desired profiles and other customizations. 4.9.3.1. Installing the descheduler The descheduler is not available by default. To enable the descheduler, you must install the Kube Descheduler Operator from OperatorHub and enable one or more descheduler profiles. By default, the descheduler runs in predictive mode, which means that it only simulates pod evictions. You must change the mode to automatic for the descheduler to perform the pod evictions. Important If you have enabled hosted control planes in your cluster, set a custom priority threshold to lower the chance that pods in the hosted control plane namespaces are evicted. Set the priority threshold class name to hypershift-control-plane , because it has the lowest priority value ( 100000000 ) of the hosted control plane priority classes. Prerequisites You are logged in to OpenShift Container Platform as a user with the cluster-admin role. Access to the OpenShift Container Platform web console. Procedure Log in to the OpenShift Container Platform web console. Create the required namespace for the Kube Descheduler Operator. Navigate to Administration Namespaces and click Create Namespace . Enter openshift-kube-descheduler-operator in the Name field, enter openshift.io/cluster-monitoring=true in the Labels field to enable descheduler metrics, and click Create . Install the Kube Descheduler Operator. Navigate to Operators OperatorHub . Type Kube Descheduler Operator into the filter box. Select the Kube Descheduler Operator and click Install . On the Install Operator page, select A specific namespace on the cluster . Select openshift-kube-descheduler-operator from the drop-down menu. 
Adjust the values for the Update Channel and Approval Strategy to the desired values. Click Install . Create a descheduler instance. From the Operators Installed Operators page, click the Kube Descheduler Operator . Select the Kube Descheduler tab and click Create KubeDescheduler . Edit the settings as necessary. To evict pods instead of simulating the evictions, change the Mode field to Automatic . 4.9.3.2. Configuring descheduler profiles You can configure which profiles the descheduler uses to evict pods. Prerequisites You are logged in to OpenShift Container Platform as a user with the cluster-admin role. Procedure Edit the KubeDescheduler object: USD oc edit kubedeschedulers.operator.openshift.io cluster -n openshift-kube-descheduler-operator Specify one or more profiles in the spec.profiles section. apiVersion: operator.openshift.io/v1 kind: KubeDescheduler metadata: name: cluster namespace: openshift-kube-descheduler-operator spec: deschedulingIntervalSeconds: 3600 logLevel: Normal managementState: Managed operatorLogLevel: Normal mode: Predictive 1 profileCustomizations: namespaces: 2 excluded: - my-namespace podLifetime: 48h 3 thresholdPriorityClassName: my-priority-class-name 4 evictionLimits: total: 20 5 profiles: 6 - AffinityAndTaints - TopologyAndDuplicates - LifecycleAndUtilization - EvictPodsWithLocalStorage - EvictPodsWithPVC 1 Optional: By default, the descheduler does not evict pods. To evict pods, set mode to Automatic . 2 Optional: Set a list of user-created namespaces to include or exclude from descheduler operations. Use excluded to set a list of namespaces to exclude or use included to set a list of namespaces to include. Note that protected namespaces ( openshift-* , kube-system , hypershift ) are excluded by default. 3 Optional: Enable a custom pod lifetime value for the LifecycleAndUtilization profile. Valid units are s , m , or h . The default pod lifetime is 24 hours. 4 Optional: Specify a priority threshold to consider pods for eviction only if their priority is lower than the specified level. Use the thresholdPriority field to set a numerical priority threshold (for example, 10000 ) or use the thresholdPriorityClassName field to specify a certain priority class name (for example, my-priority-class-name ). If you specify a priority class name, it must already exist or the descheduler will throw an error. Do not set both thresholdPriority and thresholdPriorityClassName . 5 Optional: Set the maximum number of pods to evict during each descheduler run. 6 Add one or more profiles to enable. Available profiles: AffinityAndTaints , TopologyAndDuplicates , LifecycleAndUtilization , SoftTopologyAndDuplicates , EvictPodsWithLocalStorage , EvictPodsWithPVC , CompactAndScale , and LongLifecycle . Ensure that you do not enable profiles that conflict with each other. You can enable multiple profiles; the order that the profiles are specified in is not important. Save the file to apply the changes. 4.9.3.3. Configuring the descheduler interval You can configure the amount of time between descheduler runs. The default is 3600 seconds (one hour). Prerequisites You are logged in to OpenShift Container Platform as a user with the cluster-admin role. 
Procedure Edit the KubeDescheduler object: USD oc edit kubedeschedulers.operator.openshift.io cluster -n openshift-kube-descheduler-operator Update the deschedulingIntervalSeconds field to the desired value: apiVersion: operator.openshift.io/v1 kind: KubeDescheduler metadata: name: cluster namespace: openshift-kube-descheduler-operator spec: deschedulingIntervalSeconds: 3600 1 ... 1 Set the number of seconds between descheduler runs. A value of 0 in this field runs the descheduler once and exits. Save the file to apply the changes. 4.9.4. Uninstalling the Kube Descheduler Operator You can remove the Kube Descheduler Operator from OpenShift Container Platform by uninstalling the Operator and removing its related resources. 4.9.4.1. Uninstalling the descheduler You can remove the descheduler from your cluster by removing the descheduler instance and uninstalling the Kube Descheduler Operator. This procedure also cleans up the KubeDescheduler CRD and openshift-kube-descheduler-operator namespace. Prerequisites You are logged in to OpenShift Container Platform as a user with the cluster-admin role. You have access to the OpenShift Container Platform web console. Procedure Log in to the OpenShift Container Platform web console. Delete the descheduler instance. From the Operators Installed Operators page, click Kube Descheduler Operator . Select the Kube Descheduler tab. Click the Options menu next to the cluster entry and select Delete KubeDescheduler . In the confirmation dialog, click Delete . Uninstall the Kube Descheduler Operator. Navigate to Operators Installed Operators . Click the Options menu next to the Kube Descheduler Operator entry and select Uninstall Operator . In the confirmation dialog, click Uninstall . Delete the openshift-kube-descheduler-operator namespace. Navigate to Administration Namespaces . Enter openshift-kube-descheduler-operator into the filter box. Click the Options menu next to the openshift-kube-descheduler-operator entry and select Delete Namespace . In the confirmation dialog, enter openshift-kube-descheduler-operator and click Delete . Delete the KubeDescheduler CRD. Navigate to Administration Custom Resource Definitions . Enter KubeDescheduler into the filter box. Click the Options menu next to the KubeDescheduler entry and select Delete CustomResourceDefinition . In the confirmation dialog, click Delete . 4.10. Secondary scheduler 4.10.1. Secondary scheduler overview You can install the Secondary Scheduler Operator to run a custom secondary scheduler alongside the default scheduler to schedule pods. 4.10.1.1. About the Secondary Scheduler Operator The Secondary Scheduler Operator for Red Hat OpenShift provides a way to deploy a custom secondary scheduler in OpenShift Container Platform. The secondary scheduler runs alongside the default scheduler to schedule pods. Pod configurations can specify which scheduler to use. The custom scheduler must have the /bin/kube-scheduler binary and be based on the Kubernetes scheduling framework . Important You can use the Secondary Scheduler Operator to deploy a custom secondary scheduler in OpenShift Container Platform, but Red Hat does not directly support the functionality of the custom secondary scheduler. The Secondary Scheduler Operator creates the default roles and role bindings required by the secondary scheduler. You can specify which scheduling plugins to enable or disable by configuring the KubeSchedulerConfiguration resource for the secondary scheduler. 4.10.2.
Secondary Scheduler Operator for Red Hat OpenShift release notes The Secondary Scheduler Operator for Red Hat OpenShift allows you to deploy a custom secondary scheduler in your OpenShift Container Platform cluster. These release notes track the development of the Secondary Scheduler Operator for Red Hat OpenShift. For more information, see About the Secondary Scheduler Operator . 4.10.2.1. Release notes for Secondary Scheduler Operator for Red Hat OpenShift 1.3.1 Issued: 3 October 2024 The following advisory is available for the Secondary Scheduler Operator for Red Hat OpenShift 1.3.1: RHEA-2024:7073 4.10.2.1.1. New features and enhancements This release of the Secondary Scheduler Operator adds support for IBM Z(R) and IBM Power(R). 4.10.2.1.2. Bug fixes This release of the Secondary Scheduler Operator addresses several Common Vulnerabilities and Exposures (CVEs). 4.10.2.1.3. Known issues Currently, you cannot deploy additional resources, such as config maps, CRDs, or RBAC policies through the Secondary Scheduler Operator. Any resources other than roles and role bindings that are required by your custom secondary scheduler must be applied externally. ( WRKLDS-645 ) 4.10.2.2. Release notes for Secondary Scheduler Operator for Red Hat OpenShift 1.3.0 Issued: 1 July 2024 The following advisory is available for the Secondary Scheduler Operator for Red Hat OpenShift 1.3.0: RHSA-2024:3637 4.10.2.2.1. New features and enhancements You can now install and use the Secondary Scheduler Operator in an OpenShift Container Platform cluster running in FIPS mode. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 4.10.2.2.2. Bug fixes This release of the Secondary Scheduler Operator addresses several Common Vulnerabilities and Exposures (CVEs). 4.10.2.2.3. Known issues Currently, you cannot deploy additional resources, such as config maps, CRDs, or RBAC policies through the Secondary Scheduler Operator. Any resources other than roles and role bindings that are required by your custom secondary scheduler must be applied externally. ( WRKLDS-645 ) 4.10.3. Scheduling pods using a secondary scheduler You can run a custom secondary scheduler in OpenShift Container Platform by installing the Secondary Scheduler Operator, deploying the secondary scheduler, and setting the secondary scheduler in the pod definition. 4.10.3.1. Installing the Secondary Scheduler Operator You can use the web console to install the Secondary Scheduler Operator for Red Hat OpenShift. Prerequisites You are logged in to OpenShift Container Platform as a user with the cluster-admin role. You have access to the OpenShift Container Platform web console. Procedure Log in to the OpenShift Container Platform web console. Create the required namespace for the Secondary Scheduler Operator for Red Hat OpenShift. Navigate to Administration Namespaces and click Create Namespace . Enter openshift-secondary-scheduler-operator in the Name field and click Create . Install the Secondary Scheduler Operator for Red Hat OpenShift. 
Navigate to Operators OperatorHub . Enter Secondary Scheduler Operator for Red Hat OpenShift into the filter box. Select the Secondary Scheduler Operator for Red Hat OpenShift and click Install . On the Install Operator page: The Update channel is set to stable , which installs the latest stable release of the Secondary Scheduler Operator for Red Hat OpenShift. Select A specific namespace on the cluster and select openshift-secondary-scheduler-operator from the drop-down menu. Select an Update approval strategy. The Automatic strategy allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available. The Manual strategy requires a user with appropriate credentials to approve the Operator update. Click Install . Verification Navigate to Operators Installed Operators . Verify that Secondary Scheduler Operator for Red Hat OpenShift is listed with a Status of Succeeded . 4.10.3.2. Deploying a secondary scheduler After you have installed the Secondary Scheduler Operator, you can deploy a secondary scheduler. Prerequisites You are logged in to OpenShift Container Platform as a user with the cluster-admin role. You have access to the OpenShift Container Platform web console. The Secondary Scheduler Operator for Red Hat OpenShift is installed. Procedure Log in to the OpenShift Container Platform web console. Create a config map to hold the configuration for the secondary scheduler. Navigate to Workloads ConfigMaps . Click Create ConfigMap . In the YAML editor, enter the config map definition that contains the necessary KubeSchedulerConfiguration configuration. For example: apiVersion: v1 kind: ConfigMap metadata: name: "secondary-scheduler-config" 1 namespace: "openshift-secondary-scheduler-operator" 2 data: "config.yaml": | apiVersion: kubescheduler.config.k8s.io/v1 kind: KubeSchedulerConfiguration 3 leaderElection: leaderElect: false profiles: - schedulerName: secondary-scheduler 4 plugins: 5 score: disabled: - name: NodeResourcesBalancedAllocation - name: NodeResourcesLeastAllocated 1 The name of the config map. This is used in the Scheduler Config field when creating the SecondaryScheduler CR. 2 The config map must be created in the openshift-secondary-scheduler-operator namespace. 3 The KubeSchedulerConfiguration resource for the secondary scheduler. For more information, see KubeSchedulerConfiguration in the Kubernetes API documentation. 4 The name of the secondary scheduler. Pods that set their spec.schedulerName field to this value are scheduled with this secondary scheduler. 5 The plugins to enable or disable for the secondary scheduler. For a list of default scheduling plugins, see Scheduling plugins in the Kubernetes documentation. Click Create . Create the SecondaryScheduler CR: Navigate to Operators Installed Operators . Select Secondary Scheduler Operator for Red Hat OpenShift . Select the Secondary Scheduler tab and click Create SecondaryScheduler . The Name field defaults to cluster ; do not change this name. The Scheduler Config field defaults to secondary-scheduler-config . Ensure that this value matches the name of the config map created earlier in this procedure. In the Scheduler Image field, enter the image name for your custom scheduler. Important Red Hat does not directly support the functionality of your custom secondary scheduler. Click Create . 4.10.3.3. Scheduling a pod using the secondary scheduler To schedule a pod using the secondary scheduler, set the schedulerName field in the pod definition.
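Any workload whose pod template sets the spec.schedulerName field is placed by the secondary scheduler, so the same approach works for controllers such as deployments. The following is a minimal, hypothetical sketch that assumes a secondary scheduler named secondary-scheduler , as configured in the preceding procedure; the names and image are placeholders, and the procedure below shows the equivalent for a single pod:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-secondary            # hypothetical name for illustration
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-secondary
  template:
    metadata:
      labels:
        app: hello-secondary
    spec:
      schedulerName: secondary-scheduler   # must match the schedulerName set in the config map
      containers:
      - name: hello
        image: registry.access.redhat.com/ubi9/ubi-minimal   # placeholder image
        command: ["sleep", "infinity"]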
Prerequisites You are logged in to OpenShift Container Platform as a user with the cluster-admin role. You have access to the OpenShift Container Platform web console. The Secondary Scheduler Operator for Red Hat OpenShift is installed. A secondary scheduler is configured. Procedure Log in to the OpenShift Container Platform web console. Navigate to Workloads Pods . Click Create Pod . In the YAML editor, enter the desired pod configuration and add the schedulerName field: apiVersion: v1 kind: Pod metadata: name: nginx namespace: default spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: nginx image: nginx:1.14.2 ports: - containerPort: 80 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] schedulerName: secondary-scheduler 1 1 The schedulerName field must match the name that is defined in the config map when you configured the secondary scheduler. Click Create . Verification Log in to the OpenShift CLI. Describe the pod using the following command: USD oc describe pod nginx -n default Example output Name: nginx Namespace: default Priority: 0 Node: ci-ln-t0w4r1k-72292-xkqs4-worker-b-xqkxp/10.0.128.3 ... Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 12s secondary-scheduler Successfully assigned default/nginx to ci-ln-t0w4r1k-72292-xkqs4-worker-b-xqkxp ... In the events table, find the event with a message similar to Successfully assigned <namespace>/<pod_name> to <node_name> . In the "From" column, verify that the event was generated by the secondary scheduler and not the default scheduler. Note You can also check the secondary-scheduler-* pod logs in the openshift-secondary-scheduler-operator namespace to verify that the pod was scheduled by the secondary scheduler. 4.10.4. Uninstalling the Secondary Scheduler Operator You can remove the Secondary Scheduler Operator for Red Hat OpenShift from OpenShift Container Platform by uninstalling the Operator and removing its related resources. 4.10.4.1. Uninstalling the Secondary Scheduler Operator You can uninstall the Secondary Scheduler Operator for Red Hat OpenShift by using the web console. Prerequisites You are logged in to OpenShift Container Platform as a user with the cluster-admin role. You have access to the OpenShift Container Platform web console. The Secondary Scheduler Operator for Red Hat OpenShift is installed. Procedure Log in to the OpenShift Container Platform web console. Uninstall the Secondary Scheduler Operator for Red Hat OpenShift. Navigate to Operators Installed Operators . Click the Options menu next to the Secondary Scheduler Operator entry and click Uninstall Operator . In the confirmation dialog, click Uninstall . 4.10.4.2. Removing Secondary Scheduler Operator resources Optionally, after uninstalling the Secondary Scheduler Operator for Red Hat OpenShift, you can remove its related resources from your cluster. Prerequisites You are logged in to OpenShift Container Platform as a user with the cluster-admin role. You have access to the OpenShift Container Platform web console. Procedure Log in to the OpenShift Container Platform web console. Remove CRDs that were installed by the Secondary Scheduler Operator: Navigate to Administration CustomResourceDefinitions . Enter SecondaryScheduler in the Name field to filter the CRDs. Click the Options menu next to the SecondaryScheduler CRD and select Delete CustomResourceDefinition . Remove the openshift-secondary-scheduler-operator namespace.
Navigate to Administration Namespaces . Click the Options menu next to the openshift-secondary-scheduler-operator entry and select Delete Namespace . In the confirmation dialog, enter openshift-secondary-scheduler-operator in the field and click Delete . | [
"oc edit scheduler cluster",
"apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster # spec: mastersSchedulable: false profile: HighNodeUtilization 1 #",
"apiVersion: v1 kind: Pod metadata: name: with-pod-affinity spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault affinity: podAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: 2 - labelSelector: matchExpressions: - key: security 3 operator: In 4 values: - S1 5 topologyKey: topology.kubernetes.io/zone containers: - name: with-pod-affinity image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"apiVersion: v1 kind: Pod metadata: name: with-pod-antiaffinity spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault affinity: podAntiAffinity: 1 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 3 podAffinityTerm: labelSelector: matchExpressions: - key: security 4 operator: In 5 values: - S2 topologyKey: kubernetes.io/hostname containers: - name: with-pod-affinity image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"apiVersion: v1 kind: Pod metadata: name: security-s1 labels: security: S1 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: security-s1 image: docker.io/ocpqe/hello-pod securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault",
"oc create -f <pod-spec>.yaml",
"apiVersion: v1 kind: Pod metadata: name: security-s1-east spec: affinity: 1 podAffinity: requiredDuringSchedulingIgnoredDuringExecution: 2 - labelSelector: matchExpressions: - key: security 3 values: - S1 operator: In 4 topologyKey: topology.kubernetes.io/zone 5",
"oc create -f <pod-spec>.yaml",
"apiVersion: v1 kind: Pod metadata: name: security-s1 labels: security: S1 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: security-s1 image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"oc create -f <pod-spec>.yaml",
"apiVersion: v1 kind: Pod metadata: name: security-s2-east spec: affinity: 1 podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 3 podAffinityTerm: labelSelector: matchExpressions: - key: security 4 values: - S1 operator: In 5 topologyKey: kubernetes.io/hostname 6",
"oc create -f <pod-spec>.yaml",
"apiVersion: v1 kind: Pod metadata: name: team4 labels: team: \"4\" spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: ocp image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"apiVersion: v1 kind: Pod metadata: name: team4a spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: team operator: In values: - \"4\" topologyKey: kubernetes.io/hostname containers: - name: pod-affinity image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"apiVersion: v1 kind: Pod metadata: name: pod-s1 labels: security: s1 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: ocp image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"apiVersion: v1 kind: Pod metadata: name: pod-s2 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: security operator: In values: - s1 topologyKey: kubernetes.io/hostname containers: - name: pod-antiaffinity image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"apiVersion: v1 kind: Pod metadata: name: pod-s1 labels: security: s1 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: ocp image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"apiVersion: v1 kind: Pod metadata: name: pod-s2 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: security operator: In values: - s2 topologyKey: kubernetes.io/hostname containers: - name: pod-affinity image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"NAME READY STATUS RESTARTS AGE IP NODE pod-s2 0/1 Pending 0 32s <none>",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: podAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: app operator: In values: - test topologyKey: kubernetes.io/hostname #",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: podAntiAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: cpu operator: In values: - high topologyKey: kubernetes.io/hostname #",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: podAntiAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: podAffinityTerm: labelSelector: matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-185-229.ec2.internal topologyKey: topology.kubernetes.io/zone #",
"oc get pods -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES custom-metrics-autoscaler-operator-5dcc45d656-bhshg 1/1 Running 0 50s 10.131.0.20 ip-10-0-185-229.ec2.internal <none> <none>",
"apiVersion: v1 kind: Pod metadata: name: with-node-affinity spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault affinity: nodeAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: 2 nodeSelectorTerms: - matchExpressions: - key: e2e-az-NorthSouth 3 operator: In 4 values: - e2e-az-North 5 - e2e-az-South 6 containers: - name: with-node-affinity image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"apiVersion: v1 kind: Pod metadata: name: with-node-affinity spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault affinity: nodeAffinity: 1 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 1 3 preference: matchExpressions: - key: e2e-az-EastWest 4 operator: In 5 values: - e2e-az-East 6 - e2e-az-West 7 containers: - name: with-node-affinity image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"oc label node node1 e2e-az-name=e2e-az1",
"kind: Node apiVersion: v1 metadata: name: <node_name> labels: e2e-az-name: e2e-az1 #",
"apiVersion: v1 kind: Pod metadata: name: s1 spec: affinity: 1 nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: 2 nodeSelectorTerms: - matchExpressions: - key: e2e-az-name 3 values: - e2e-az1 - e2e-az2 operator: In 4 #",
"oc create -f <file-name>.yaml",
"oc label node node1 e2e-az-name=e2e-az3",
"apiVersion: v1 kind: Pod metadata: name: s1 spec: affinity: 1 nodeAffinity: preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 3 preference: matchExpressions: - key: e2e-az-name 4 values: - e2e-az3 operator: In 5 #",
"oc create -f <file-name>.yaml",
"oc label node node1 zone=us",
"kind: Node apiVersion: v1 metadata: name: <node_name> labels: zone: us #",
"cat pod-s1.yaml",
"apiVersion: v1 kind: Pod metadata: name: pod-s1 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - image: \"docker.io/ocpqe/hello-pod\" name: hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: \"zone\" operator: In values: - us #",
"oc get pod -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE pod-s1 1/1 Running 0 4m IP1 node1",
"oc label node node1 zone=emea",
"kind: Node apiVersion: v1 metadata: name: <node_name> labels: zone: emea #",
"cat pod-s1.yaml",
"apiVersion: v1 kind: Pod metadata: name: pod-s1 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - image: \"docker.io/ocpqe/hello-pod\" name: hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: \"zone\" operator: In values: - us #",
"oc describe pod pod-s1",
"Events: FirstSeen LastSeen Count From SubObjectPath Type Reason --------- -------- ----- ---- ------------- -------- ------ 1m 33s 8 default-scheduler Warning FailedScheduling No nodes are available that match all of the following predicates:: MatchNodeSelector (1).",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: nodeAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-163-94.us-west-2.compute.internal #",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: nodeAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/arch operator: In values: - arm64 - key: kubernetes.io/os operator: In values: - linux #",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: 1 nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-185-229.ec2.internal #",
"oc get pods -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES custom-metrics-autoscaler-operator-5dcc45d656-bhshg 1/1 Running 0 50s 10.131.0.20 ip-10-0-185-229.ec2.internal <none> <none>",
"sysctl -a |grep commit",
"# vm.overcommit_memory = 0 #",
"sysctl -a |grep panic",
"# vm.panic_on_oom = 0 #",
"apiVersion: v1 kind: Node metadata: name: my-node # spec: taints: - effect: NoExecute key: key1 value: value1 #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoExecute\" tolerationSeconds: 3600 #",
"apiVersion: v1 kind: Node metadata: annotations: machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c name: my-node # spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/master #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoExecute\" tolerationSeconds: 3600 #",
"oc adm taint nodes node1 key1=value1:NoSchedule",
"oc adm taint nodes node1 key1=value1:NoExecute",
"oc adm taint nodes node1 key2=value2:NoSchedule",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoSchedule\" - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoExecute\" #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: node.kubernetes.io/not-ready operator: Exists effect: NoExecute tolerationSeconds: 300 1 - key: node.kubernetes.io/unreachable operator: Exists effect: NoExecute tolerationSeconds: 300 #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - operator: \"Exists\" #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" 1 value: \"value1\" operator: \"Equal\" effect: \"NoExecute\" tolerationSeconds: 3600 2 #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Exists\" 1 effect: \"NoExecute\" tolerationSeconds: 3600 #",
"oc adm taint nodes <node_name> <key>=<value>:<effect>",
"oc adm taint nodes node1 key1=value1:NoExecute",
"apiVersion: v1 kind: Node metadata: annotations: machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c name: my-node # spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/master #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" 1 value: \"value1\" operator: \"Equal\" effect: \"NoExecute\" tolerationSeconds: 3600 2 #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 3600 #",
"oc edit machineset <machineset>",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: my-machineset # spec: # template: # spec: taints: - effect: NoExecute key: key1 value: value1 #",
"oc scale --replicas=0 machineset <machineset> -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 0",
"oc scale --replicas=2 machineset <machineset> -n openshift-machine-api",
"oc edit machineset <machineset> -n openshift-machine-api",
"oc adm taint nodes node1 dedicated=groupName:NoSchedule",
"kind: Node apiVersion: v1 metadata: name: my-node # spec: taints: - key: dedicated value: groupName effect: NoSchedule #",
"kind: Project apiVersion: project.openshift.io/v1 metadata: name: <project_name> 1 annotations: openshift.io/node-selector: '<label>' 2 scheduler.alpha.kubernetes.io/defaultTolerations: >- [{\"operator\": \"Exists\", \"effect\": \"NoSchedule\", \"key\": \"<key_name>\"} 3 ]",
"oc apply -f project.yaml",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"disktype\" value: \"ssd\" operator: \"Equal\" effect: \"NoSchedule\" tolerationSeconds: 3600 #",
"oc adm taint nodes <node-name> disktype=ssd:NoSchedule",
"oc adm taint nodes <node-name> disktype=ssd:PreferNoSchedule",
"kind: Node apiVersion: v1 metadata: name: my_node # spec: taints: - key: disktype value: ssd effect: PreferNoSchedule #",
"oc adm taint nodes <node-name> <key>-",
"oc adm taint nodes ip-10-0-132-248.ec2.internal key1-",
"node/ip-10-0-132-248.ec2.internal untainted",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key2\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 3600 #",
"kind: Node apiVersion: v1 metadata: name: ip-10-0-131-14.ec2.internal selfLink: /api/v1/nodes/ip-10-0-131-14.ec2.internal uid: 7bc2580a-8b8e-11e9-8e01-021ab4174c74 resourceVersion: '478704' creationTimestamp: '2019-06-10T14:46:08Z' labels: kubernetes.io/os: linux topology.kubernetes.io/zone: us-east-1a node.openshift.io/os_version: '4.5' node-role.kubernetes.io/worker: '' topology.kubernetes.io/region: us-east-1 node.openshift.io/os_id: rhcos node.kubernetes.io/instance-type: m4.large kubernetes.io/hostname: ip-10-0-131-14 kubernetes.io/arch: amd64 region: east 1 type: user-node #",
"apiVersion: v1 kind: Pod metadata: name: s1 # spec: nodeSelector: 1 region: east type: user-node #",
"apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster # spec: defaultNodeSelector: type=user-node,region=east #",
"apiVersion: v1 kind: Node metadata: name: ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 # labels: region: east type: user-node #",
"apiVersion: v1 kind: Pod metadata: name: s1 # spec: nodeSelector: region: east #",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod-s1 1/1 Running 0 20s 10.131.2.6 ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 <none> <none>",
"apiVersion: v1 kind: Namespace metadata: name: east-region annotations: openshift.io/node-selector: \"region=east\" #",
"apiVersion: v1 kind: Node metadata: name: ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 # labels: region: east type: user-node #",
"apiVersion: v1 kind: Pod metadata: namespace: east-region # spec: nodeSelector: region: east type: user-node #",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod-s1 1/1 Running 0 20s 10.131.2.6 ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 <none> <none>",
"apiVersion: v1 kind: Pod metadata: name: west-region # spec: nodeSelector: region: west #",
"oc describe pod router-default-66d5cf9464-7pwkc",
"kind: Pod apiVersion: v1 metadata: Name: router-default-66d5cf9464-7pwkc Namespace: openshift-ingress Controlled By: ReplicaSet/router-default-66d5cf9464",
"apiVersion: v1 kind: Pod metadata: name: router-default-66d5cf9464-7pwkc ownerReferences: - apiVersion: apps/v1 kind: ReplicaSet name: router-default-66d5cf9464 uid: d81dd094-da26-11e9-a48a-128e7edf0312 controller: true blockOwnerDeletion: true",
"oc patch MachineSet <name> --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"<key>\"=\"<value>\",\"<key>\"=\"<value>\"}}]' -n openshift-machine-api",
"oc patch MachineSet abc612-msrtw-worker-us-east-1c --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"type\":\"user-node\",\"region\":\"east\"}}]' -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: xf2bd-infra-us-east-2a namespace: openshift-machine-api spec: template: spec: metadata: labels: region: \"east\" type: \"user-node\"",
"oc edit MachineSet abc612-msrtw-worker-us-east-1c -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: metadata: spec: metadata: labels: region: east type: user-node",
"oc label nodes <name> <key>=<value>",
"oc label nodes ip-10-0-142-25.ec2.internal type=user-node region=east",
"kind: Node apiVersion: v1 metadata: name: hello-node-6fbccf8d9 labels: type: \"user-node\" region: \"east\"",
"oc get nodes -l type=user-node,region=east",
"NAME STATUS ROLES AGE VERSION ip-10-0-142-25.ec2.internal Ready worker 17m v1.30.3",
"kind: ReplicaSet apiVersion: apps/v1 metadata: name: hello-node-6fbccf8d9 spec: template: metadata: creationTimestamp: null labels: ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default pod-template-hash: 66d5cf9464 spec: nodeSelector: kubernetes.io/os: linux node-role.kubernetes.io/worker: '' type: user-node 1",
"apiVersion: v1 kind: Pod metadata: name: hello-node-6fbccf8d9 spec: nodeSelector: region: east type: user-node",
"oc edit scheduler cluster",
"apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: type=user-node,region=east 1 mastersSchedulable: false",
"oc patch MachineSet <name> --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"<key>\"=\"<value>\",\"<key>\"=\"<value>\"}}]' -n openshift-machine-api 1",
"oc patch MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"type\":\"user-node\",\"region\":\"east\"}}]' -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: template: spec: metadata: labels: region: \"east\" type: \"user-node\"",
"oc edit MachineSet abc612-msrtw-worker-us-east-1c -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: metadata: spec: metadata: labels: region: east type: user-node",
"oc scale --replicas=0 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api",
"oc scale --replicas=1 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api",
"oc get nodes -l <key>=<value>",
"oc get nodes -l type=user-node",
"NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-c-vmqzp Ready worker 61s v1.30.3",
"oc label nodes <name> <key>=<value>",
"oc label nodes ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 type=user-node region=east",
"kind: Node apiVersion: v1 metadata: name: <node_name> labels: type: \"user-node\" region: \"east\"",
"oc get nodes -l <key>=<value>,<key>=<value>",
"oc get nodes -l type=user-node,region=east",
"NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 Ready worker 17m v1.30.3",
"Error from server (Forbidden): error when creating \"pod.yaml\": pods \"pod-4\" is forbidden: pod node label selector conflicts with its project node label selector",
"oc edit namespace <name>",
"apiVersion: v1 kind: Namespace metadata: annotations: openshift.io/node-selector: \"type=user-node,region=east\" 1 openshift.io/description: \"\" openshift.io/display-name: \"\" openshift.io/requester: kube:admin openshift.io/sa.scc.mcs: s0:c30,c5 openshift.io/sa.scc.supplemental-groups: 1000880000/10000 openshift.io/sa.scc.uid-range: 1000880000/10000 creationTimestamp: \"2021-05-10T12:35:04Z\" labels: kubernetes.io/metadata.name: demo name: demo resourceVersion: \"145537\" uid: 3f8786e3-1fcb-42e3-a0e3-e2ac54d15001 spec: finalizers: - kubernetes",
"oc patch MachineSet <name> --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"<key>\"=\"<value>\",\"<key>\"=\"<value>\"}}]' -n openshift-machine-api",
"oc patch MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"type\":\"user-node\",\"region\":\"east\"}}]' -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: template: spec: metadata: labels: region: \"east\" type: \"user-node\"",
"oc edit MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: spec: template: metadata: spec: metadata: labels: region: east type: user-node",
"oc scale --replicas=0 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api",
"oc scale --replicas=1 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api",
"oc get nodes -l <key>=<value>",
"oc get nodes -l type=user-node,region=east",
"NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-c-vmqzp Ready worker 61s v1.30.3",
"oc label <resource> <name> <key>=<value>",
"oc label nodes ci-ln-l8nry52-f76d1-hl7m7-worker-c-tgq49 type=user-node region=east",
"kind: Node apiVersion: v1 metadata: name: <node_name> labels: type: \"user-node\" region: \"east\"",
"oc get nodes -l <key>=<value>",
"oc get nodes -l type=user-node,region=east",
"NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 Ready worker 17m v1.30.3",
"apiVersion: v1 kind: Pod metadata: name: my-pod labels: region: us-east spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault topologySpreadConstraints: - maxSkew: 1 1 topologyKey: topology.kubernetes.io/zone 2 whenUnsatisfiable: DoNotSchedule 3 labelSelector: 4 matchLabels: region: us-east 5 matchLabelKeys: - my-pod-label 6 containers: - image: \"docker.io/ocpqe/hello-pod\" name: hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"kind: Pod apiVersion: v1 metadata: name: my-pod labels: region: us-east spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault topologySpreadConstraints: - maxSkew: 1 topologyKey: topology.kubernetes.io/zone whenUnsatisfiable: DoNotSchedule labelSelector: matchLabels: region: us-east containers: - image: \"docker.io/ocpqe/hello-pod\" name: hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"kind: Pod apiVersion: v1 metadata: name: my-pod-2 labels: region: us-east spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault topologySpreadConstraints: - maxSkew: 1 topologyKey: node whenUnsatisfiable: DoNotSchedule labelSelector: matchLabels: region: us-east - maxSkew: 1 topologyKey: rack whenUnsatisfiable: DoNotSchedule labelSelector: matchLabels: region: us-east containers: - image: \"docker.io/ocpqe/hello-pod\" name: hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"oc edit kubedeschedulers.operator.openshift.io cluster -n openshift-kube-descheduler-operator",
"apiVersion: operator.openshift.io/v1 kind: KubeDescheduler metadata: name: cluster namespace: openshift-kube-descheduler-operator spec: deschedulingIntervalSeconds: 3600 logLevel: Normal managementState: Managed operatorLogLevel: Normal mode: Predictive 1 profileCustomizations: namespaces: 2 excluded: - my-namespace podLifetime: 48h 3 thresholdPriorityClassName: my-priority-class-name 4 evictionLimits: total: 20 5 profiles: 6 - AffinityAndTaints - TopologyAndDuplicates - LifecycleAndUtilization - EvictPodsWithLocalStorage - EvictPodsWithPVC",
"oc edit kubedeschedulers.operator.openshift.io cluster -n openshift-kube-descheduler-operator",
"apiVersion: operator.openshift.io/v1 kind: KubeDescheduler metadata: name: cluster namespace: openshift-kube-descheduler-operator spec: deschedulingIntervalSeconds: 3600 1",
"apiVersion: v1 kind: ConfigMap metadata: name: \"secondary-scheduler-config\" 1 namespace: \"openshift-secondary-scheduler-operator\" 2 data: \"config.yaml\": | apiVersion: kubescheduler.config.k8s.io/v1 kind: KubeSchedulerConfiguration 3 leaderElection: leaderElect: false profiles: - schedulerName: secondary-scheduler 4 plugins: 5 score: disabled: - name: NodeResourcesBalancedAllocation - name: NodeResourcesLeastAllocated",
"apiVersion: v1 kind: Pod metadata: name: nginx namespace: default spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: nginx image: nginx:1.14.2 ports: - containerPort: 80 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] schedulerName: secondary-scheduler 1",
"oc describe pod nginx -n default",
"Name: nginx Namespace: default Priority: 0 Node: ci-ln-t0w4r1k-72292-xkqs4-worker-b-xqkxp/10.0.128.3 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 12s secondary-scheduler Successfully assigned default/nginx to ci-ln-t0w4r1k-72292-xkqs4-worker-b-xqkxp"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/nodes/controlling-pod-placement-onto-nodes-scheduling |
function::user_string2_utf32 | function::user_string2_utf32 Name function::user_string2_utf32 - Retrieves UTF-32 string from user memory with alternative error string Synopsis Arguments addr The user address to retrieve the string from err_msg The error message to return when data isn't available Description This function returns a null terminated UTF-8 string converted from the UTF-32 string at a given user memory address. Reports the given error message on string copy fault or conversion error. | [
"user_string2_utf32:string(addr:long,err_msg:string)"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-user-string2-utf32 |
Appendix A. Fuse Console Configuration Properties | Appendix A. Fuse Console Configuration Properties By default, the Fuse Console configuration is defined in the hawtconfig.json file. You can customize the Fuse Console configuration information, such as title, logo, and login page information. Table A.1, "Fuse Console Configuration Properties" provides a description of the properties and lists whether or not each property requires a value. Table A.1. Fuse Console Configuration Properties Section Property Name Default Value Description Required? About Title Red Hat Fuse Management Console The title that shows on the About page of the Fuse Console. Required productInfo Empty value Product information that shows on the About page of the Fuse Console. Optional additionalInfo Empty value Any additional information that shows on the About page of the Fuse Console. Optional | null | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/managing_fuse_on_jboss_eap_standalone/r_fuse-console-configuration |
Chapter 132. SSH | Chapter 132. SSH Since Camel 2.10 Both producer and consumer are supported The SSH component enables access to SSH servers so that you can send an SSH command and process the response. 132.1. Dependencies When using camel-ssh with Red Hat build of Camel Spring Boot, add the following Maven dependency to your pom.xml to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-ssh-starter</artifactId> </dependency> 132.2. URI format 132.3. Configuring Options Camel components are configured on two levels: Component level Endpoint level 132.3.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, urls for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 132.3.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for urls, port numbers, sensitive information, and other settings. Placeholders allows you to externalize the configuration from your code, giving you more flexible and reusable code. 132.4. Component Options The SSH component supports 25 options, which are listed below. Name Description Default Type failOnUnknownHost (common) Specifies whether a connection to an unknown host should fail or not. This value is only checked when the property knownHosts is set. false boolean knownHostsResource (common) Sets the resource path for a known_hosts file. String timeout (common) Sets the timeout in milliseconds to wait in establishing the remote SSH server connection. Defaults to 30000 milliseconds. 30000 long bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean pollCommand (consumer) Sets the command string to send to the remote SSH server during every poll cycle. Only works with camel-ssh component being used as a consumer, i.e. from(ssh://... ) You may need to end your command with a newline, and that must be URL encoded %0A. 
String lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean channelType (advanced) Sets the channel type to pass to the Channel as part of command execution. Defaults to exec. exec String clientBuilder (advanced) Autowired Instance of ClientBuilder used by the producer or consumer to create a new SshClient. ClientBuilder compressions (advanced) Whether to use compression, and if so which. String configuration (advanced) Component configuration. SshConfiguration shellPrompt (advanced) Sets the shellPrompt to be dropped when response is read after command execution. String sleepForShellPrompt (advanced) Sets the sleep period in milliseconds to wait reading response from shell prompt. Defaults to 100 milliseconds. 100 long healthCheckConsumerEnabled (health) Used for enabling or disabling all consumer based health checks from this component. true boolean healthCheckProducerEnabled (health) Used for enabling or disabling all producer based health checks from this component. Note By default all producer based health-checks are disabled. You can turn on producer checks globally by setting camel.health.producersEnabled=true . true boolean certResource (security) Sets the resource path of the certificate to use for Authentication. Will use ResourceHelperKeyPairProvider to resolve file based certificate, and depends on keyType setting. String certResourcePassword (security) Sets the password to use in loading certResource, if certResource is an encrypted key. String ciphers (security) Comma-separated list of allowed/supported ciphers in their order of preference. String kex (security) Comma-separated list of allowed/supported key exchange algorithms in their order of preference. String keyPairProvider (security) Sets the KeyPairProvider reference to use when connecting using Certificates to the remote SSH Server. KeyPairProvider keyType (security) Sets the key type to pass to the KeyPairProvider as part of authentication. KeyPairProvider.loadKey(... ) will be passed this value. From Camel 3.0.0 / 2.25.0, by default Camel will select the first available KeyPair that is loaded. Prior to this, a KeyType of 'ssh-rsa' was enforced by default. String macs (security) Comma-separated list of allowed/supported message authentication code algorithms in their order of preference. The MAC algorithm is used for data integrity protection. String password (security) Sets the password to use in connecting to remote SSH server. Requires keyPairProvider to be set to null. 
String signatures (security) Comma-separated list of allowed/supported signature algorithms in their order of preference. String username (security) Sets the username to use in logging into the remote SSH server. String 132.5. Endpoint Options The SSH endpoint is configured using URI syntax: With the following path and query parameters: 132.5.1. Path Parameters (2 parameters) Name Description Default Type host (common) Required Sets the hostname of the remote SSH server. String port (common) Sets the port number for the remote SSH server. 22 int 132.5.2. Query Parameters (39 parameters) Name Description Default Type failOnUnknownHost (common) Specifies whether a connection to an unknown host should fail or not. This value is only checked when the property knownHosts is set. false boolean knownHostsResource (common) Sets the resource path for a known_hosts file. String timeout (common) Sets the timeout in milliseconds to wait in establishing the remote SSH server connection. Defaults to 30000 milliseconds. 30000 long pollCommand (consumer) Sets the command string to send to the remote SSH server during every poll cycle. Only works with camel-ssh component being used as a consumer, i.e. from(ssh://... ) You may need to end your command with a newline, and that must be URL encoded %0A. String sendEmptyMessageWhenIdle (consumer) If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead. false boolean bridgeErrorHandler (consumer (advanced)) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut ExchangePattern pollStrategy (consumer (advanced)) A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel. PollingConsumerPollStrategy lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean channelType (advanced) Sets the channel type to pass to the Channel as part of command execution. Defaults to exec. exec String clientBuilder (advanced) Autowired Instance of ClientBuilder used by the producer or consumer to create a new SshClient. ClientBuilder compressions (advanced) Whether to use compression, and if so which. String shellPrompt (advanced) Sets the shellPrompt to be dropped when response is read after command execution. String sleepForShellPrompt (advanced) Sets the sleep period in milliseconds to wait reading response from shell prompt. Defaults to 100 milliseconds. 100 long backoffErrorThreshold (scheduler) The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in. int backoffIdleThreshold (scheduler) The number of subsequent idle polls that should happen before the backoffMultipler should kick-in. int backoffMultiplier (scheduler) To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured. int delay (scheduler) Milliseconds before the poll. 500 long greedy (scheduler) If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the run polled 1 or more messages. false boolean initialDelay (scheduler) Milliseconds before the first poll starts. 1000 long repeatCount (scheduler) Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever. 0 long runLoggingLevel (scheduler) The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that. Enum values: TRACE DEBUG INFO WARN ERROR OFF TRACE LoggingLevel scheduledExecutorService (scheduler) Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool. ScheduledExecutorService scheduler (scheduler) To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler. none Object schedulerProperties (scheduler) To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler. Map startScheduler (scheduler) Whether the scheduler should be auto started. true boolean timeUnit (scheduler) Time unit for initialDelay and delay options. Enum values: NANOSECONDS MICROSECONDS MILLISECONDS SECONDS MINUTES HOURS DAYS MILLISECONDS TimeUnit useFixedDelay (scheduler) Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details. true boolean certResource (security) Sets the resource path of the certificate to use for Authentication. Will use ResourceHelperKeyPairProvider to resolve file based certificate, and depends on keyType setting. String certResourcePassword (security) Sets the password to use in loading certResource, if certResource is an encrypted key. String ciphers (security) Comma-separated list of allowed/supported ciphers in their order of preference. 
String kex (security) Comma-separated list of allowed/supported key exchange algorithms in their order of preference. String keyPairProvider (security) Sets the KeyPairProvider reference to use when connecting using Certificates to the remote SSH Server. KeyPairProvider keyType (security) Sets the key type to pass to the KeyPairProvider as part of authentication. KeyPairProvider.loadKey(... ) will be passed this value. From Camel 3.0.0 / 2.25.0, by default Camel will select the first available KeyPair that is loaded. Prior to this, a KeyType of 'ssh-rsa' was enforced by default. String macs (security) Comma-separated list of allowed/supported message authentication code algorithms in their order of preference. The MAC algorithm is used for data integrity protection. String password (security) Sets the password to use in connecting to remote SSH server. Requires keyPairProvider to be set to null. String signatures (security) Comma-separated list of allowed/supported signature algorithms in their order of preference. String username (security) Sets the username to use in logging into the remote SSH server. String 132.6. Message Headers The SSH component supports 4 message header(s), which is/are listed below: Name Description Default Type CamelSshUsername (common) Constant: USERNAME_HEADER The user name. String CamelSshPassword (common) Constant: PASSWORD_HEADER The password. String CamelSshStderr (common) Constant: STDERR The value of this header is a InputStream with the standard error stream of the executable. InputStream CamelSshExitValue (common) Constant: EXIT_VALUE The value of this header is the exit value that is returned, after the execution. By convention a non-zero status exit value indicates abnormal termination. Note that the exit value is OS dependent. Integer 132.7. Usage as a Producer endpoint When the SSH Component is used as a Producer (`.to("ssh://... ")`), it sends the message body as the command to execute on the remote SSH server. XML DSL example Note that the command has an XML encoded newline (` `). <route id="camel-example-ssh-producer"> <from uri="direct:exampleSshProducer"/> <setBody> <constant>features:list </constant> </setBody> <to uri="ssh://karaf:karaf@localhost:8101"/> <log message="USD{body}"/> </route> 132.8. Authentication The SSH Component can authenticate against the remote SSH server using one of two mechanisms: Public Key certificate Username/password Configuring how the SSH Component does authentication is based on how and which options are set. First, it will look to see if the certResource option has been set, and if so, use it to locate the referenced Public Key certificate and use that for authentication. If certResource is not set, it will look to see if a keyPairProvider has been set, and if so, it will use that for certificate-based authentication. If neither certResource nor keyPairProvider are set, it will use the username and password options for authentication. Even though the username and password are provided in the endpoint configuration and headers set with SshConstants.USERNAME_HEADER ( CamelSshUsername ) and SshConstants.PASSWORD_HEADER ( CamelSshPassword ), the endpoint configuration is surpassed and credentials set in the headers are used. The following route fragment shows an SSH polling consumer using a certificate from the classpath. 
XML DSL <route> <from uri="ssh://scott@localhost:8101?certResource=classpath:test_rsa&useFixedDelay=true&delay=5000&pollCommand=features:list%0A"/> <log message="USD{body}"/> </route> Java DSL from("ssh://scott@localhost:8101?certResource=classpath:test_rsa&useFixedDelay=true&delay=5000&pollCommand=features:list%0A") .log("USD{body}"); An example of using Public Key authentication is provided in examples/camel-example-ssh-security . 132.9. Certificate Dependencies You need to add some additional runtime dependencies if you use certificate-based authentication. You may need to use later versions depending on what version of Camel you are using. The component uses sshd-core library which is based on either bouncycastle or eddsa security providers. camel-ssh is picking explicitly bouncycastle as security provider. <dependency> <groupId>org.apache.sshd</groupId> <artifactId>sshd-core</artifactId> <version>2.8.0</version> </dependency> <dependency> <groupId>org.bouncycastle</groupId> <artifactId>bcpg-jdk18on</artifactId> <version>1.71</version> </dependency> <dependency> <groupId>org.bouncycastle</groupId> <artifactId>bcpkix-jdk18on</artifactId> <version>1.71</version> </dependency> 132.10. Spring Boot Auto-Configuration The component supports 26 options, which are listed below. Name Description Default Type camel.component.ssh.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.ssh.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.ssh.cert-resource Sets the resource path of the certificate to use for Authentication. Will use ResourceHelperKeyPairProvider to resolve file based certificate, and depends on keyType setting. String camel.component.ssh.cert-resource-password Sets the password to use in loading certResource, if certResource is an encrypted key. String camel.component.ssh.channel-type Sets the channel type to pass to the Channel as part of command execution. Defaults to exec. exec String camel.component.ssh.ciphers Comma-separated list of allowed/supported ciphers in their order of preference. String camel.component.ssh.client-builder Instance of ClientBuilder used by the producer or consumer to create a new SshClient. The option is a org.apache.sshd.client.ClientBuilder type. ClientBuilder camel.component.ssh.compressions Whether to use compression, and if so which. String camel.component.ssh.configuration Component configuration. 
The option is a org.apache.camel.component.ssh.SshConfiguration type. SshConfiguration camel.component.ssh.enabled Whether to enable auto configuration of the ssh component. This is enabled by default. Boolean camel.component.ssh.fail-on-unknown-host Specifies whether a connection to an unknown host should fail or not. This value is only checked when the property knownHosts is set. false Boolean camel.component.ssh.health-check-consumer-enabled Used for enabling or disabling all consumer based health checks from this component. true Boolean camel.component.ssh.health-check-producer-enabled Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. You can turn on producer checks globally by setting camel.health.producersEnabled=true. true Boolean camel.component.ssh.kex Comma-separated list of allowed/supported key exchange algorithms in their order of preference. String camel.component.ssh.key-pair-provider Sets the KeyPairProvider reference to use when connecting using Certificates to the remote SSH Server. The option is a org.apache.sshd.common.keyprovider.KeyPairProvider type. KeyPairProvider camel.component.ssh.key-type Sets the key type to pass to the KeyPairProvider as part of authentication. KeyPairProvider.loadKey(... ) will be passed this value. From Camel 3.0.0 / 2.25.0, by default Camel will select the first available KeyPair that is loaded. Prior to this, a KeyType of 'ssh-rsa' was enforced by default. String camel.component.ssh.known-hosts-resource Sets the resource path for a known_hosts file. String camel.component.ssh.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.ssh.macs Comma-separated list of allowed/supported message authentication code algorithms in their order of preference. The MAC algorithm is used for data integrity protection. String camel.component.ssh.password Sets the password to use in connecting to remote SSH server. Requires keyPairProvider to be set to null. String camel.component.ssh.poll-command Sets the command string to send to the remote SSH server during every poll cycle. Only works with camel-ssh component being used as a consumer, i.e. from(ssh://... ) You may need to end your command with a newline, and that must be URL encoded %0A. String camel.component.ssh.shell-prompt Sets the shellPrompt to be dropped when response is read after command execution. String camel.component.ssh.signatures Comma-separated list of allowed/supported signature algorithms in their order of preference. String camel.component.ssh.sleep-for-shell-prompt Sets the sleep period in milliseconds to wait reading response from shell prompt. Defaults to 100 milliseconds. 100 Long camel.component.ssh.timeout Sets the timeout in milliseconds to wait in establishing the remote SSH server connection. Defaults to 30000 milliseconds. 30000 Long camel.component.ssh.username Sets the username to use in logging into the remote SSH server. 
String | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-ssh-starter</artifactId> </dependency>",
"ssh:[username[:password]@]host[:port][?options]",
"ssh:host:port",
"<route id=\"camel-example-ssh-producer\"> <from uri=\"direct:exampleSshProducer\"/> <setBody> <constant>features:list </constant> </setBody> <to uri=\"ssh://karaf:karaf@localhost:8101\"/> <log message=\"USD{body}\"/> </route>",
"<route> <from uri=\"ssh://scott@localhost:8101?certResource=classpath:test_rsa&useFixedDelay=true&delay=5000&pollCommand=features:list%0A\"/> <log message=\"USD{body}\"/> </route>",
"from(\"ssh://scott@localhost:8101?certResource=classpath:test_rsa&useFixedDelay=true&delay=5000&pollCommand=features:list%0A\") .log(\"USD{body}\");",
"<dependency> <groupId>org.apache.sshd</groupId> <artifactId>sshd-core</artifactId> <version>2.8.0</version> </dependency> <dependency> <groupId>org.bouncycastle</groupId> <artifactId>bcpg-jdk18on</artifactId> <version>1.71</version> </dependency> <dependency> <groupId>org.bouncycastle</groupId> <artifactId>bcpkix-jdk18on</artifactId> <version>1.71</version> </dependency>"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-ssh-component-starter |
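The following sketch ties sections 132.6 through 132.8 together as a producer route. It is not taken from the component documentation: the timer endpoint, the host ssh.example.com, and the admin/secret credentials are placeholder assumptions, and the fragment belongs inside a RouteBuilder configure() method. Because neither certResource nor keyPairProvider is set, the component falls back to username/password authentication, and the exit status of the remote command is returned in the CamelSshExitValue header.

// Hypothetical example: run "uptime" on a remote host once a minute and log the result.
from("timer:sshProducerExample?period=60000")
    .setBody(constant("uptime\n"))                          // the message body is sent as the command to execute
    .to("ssh://admin:secret@ssh.example.com:22")            // password authentication; host and credentials are assumptions
    .log("exit=${header.CamelSshExitValue} output=${body}"); // CamelSshStderr (an InputStream) is also available for troubleshooting

In a real route the credentials would normally come from Property Placeholders rather than being written into the URI, as recommended in section 132.3.2.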
Chapter 7. Viewing the status of the QuayRegistry object | Chapter 7. Viewing the status of the QuayRegistry object Lifecycle observability for a given Red Hat Quay deployment is reported in the status section of the corresponding QuayRegistry object. The Red Hat Quay Operator constantly updates this section, and this should be the first place to look for any problems or state changes in Red Hat Quay or its managed dependencies. 7.1. Viewing the registry endpoint Once Red Hat Quay is ready to be used, the status.registryEndpoint field will be populated with the publicly available hostname of the registry. 7.2. Viewing the config editor endpoint Access Red Hat Quay's UI-based config editor using status.configEditorEndpoint . 7.3. Viewing the config editor credentials secret The username and password for the config editor UI will be stored in a Secret in the same namespace as the QuayRegistry referenced by status.configEditorCredentialsSecret . 7.4. Viewing the version of Red Hat Quay in use The current version of Red Hat Quay that is running will be reported in status.currentVersion . 7.5. Viewing the conditions of your Red Hat Quay deployment Certain conditions will be reported in status.conditions . | null | https://docs.redhat.com/en/documentation/red_hat_quay/3.9/html/deploying_the_red_hat_quay_operator_on_openshift_container_platform/operator-quayregistry-status |
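The status fields above can also be read from the command line. This is a sketch rather than part of the chapter: it assumes the oc client is logged in to the cluster and that the QuayRegistry object is named example-registry in the quay-enterprise namespace; adjust the name and namespace to match your deployment.

# Registry endpoint (7.1) and running version (7.4)
oc -n quay-enterprise get quayregistry example-registry -o jsonpath='{.status.registryEndpoint}'
oc -n quay-enterprise get quayregistry example-registry -o jsonpath='{.status.currentVersion}'

# Config editor endpoint (7.2) and the name of its credentials secret (7.3)
oc -n quay-enterprise get quayregistry example-registry -o jsonpath='{.status.configEditorEndpoint}'
oc -n quay-enterprise get quayregistry example-registry -o jsonpath='{.status.configEditorCredentialsSecret}'

# Conditions reported by the Operator (7.5)
oc -n quay-enterprise get quayregistry example-registry -o jsonpath='{.status.conditions}'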
Chapter 19. Task filtering in Business Central | Chapter 19. Task filtering in Business Central Business Central provides built-in filters to help you search tasks. You can filter tasks by attributes such as Status , Filter By , Process Definition Id , and Created On . It is also possible to create custom task filters using the Advanced Filters option. The newly created custom filter is added to the Saved Filters pane, which is accessible by clicking on the star icon on the left of the Task Inbox page. 19.1. Managing task list columns In the task list on the Task Inbox and Manage Tasks windows, you can specify what columns to view and you can change the order of columns to better manage task information. Note Only users with the process-admin role can view the task list on the Manage Tasks page. Users with the admin role can access the Manage Tasks page, however they see only an empty task list. Procedure In Business Central, go to Menu Manage Tasks or Menu Track Task Inbox . On the Manage Task or Task Inbox page, click the Show/hide columns icon to the right of Bulk Actions . Select or deselect columns to display. As you make changes to the list, columns in the task list appear or disappear. To rearrange the columns, drag the column heading to a new position. Note that your pointer must change to the icon shown in the following illustration before you can drag the column: To save your changes as a filter, click Save Filters , enter a name, and click Save . To use your new filter, click the Saved Filters icon (star) on the left of the screen and select your filter from the list. 19.2. Filtering tasks using basic filters Business Central provides basic filters for filtering and searching through tasks based on their attributes such as Status , Filter By , Process Definition Id , and Created On . Procedure In Business Central, go to Menu Track Task Inbox . On the Task Inbox page, click the filter icon on the left of the page to expand the Filters pane and select the filters you want to use: Status : Filter tasks based on their status. Filter By : Filter tasks based on Id , Task , Correlation Key , Actual Owner , or Process Instance Description attribute. Process Definition Id : Filter tasks based on process definition ids. Created On : Filter tasks based on their creation date. You can use the Advanced Filters option to create custom filters in Business Central. 19.3. Filtering tasks using advanced filters You can create custom task filters using the Advanced Filters option in Business Central. Procedure In Business Central, go to Menu Track Task Inbox . On the Task Inbox page, click the advanced filters icon on the left of the page to expand the Advanced Filters panel. In the Advanced Filters panel, enter the filter name and description, and click Add New . Select an attribute from the Select column drop-down list, such as Name . The content of the drop-down changes to Name != value1 . Click the drop-down again and choose the required logical query. For the Name attribute, choose equals to . Change the value of the text field to the name of the task you want to filter. Note The name must match the value defined in the business process of the project. Click Save and the tasks are filtered according to the filter definition. Click the star icon to open the Saved Filters pane. In the Saved Filters pane, you can view the saved advanced filters. 19.4. Managing tasks using default filter You can set a task filter as a default filter using the Saved Filter option in Business Central. 
A default filter is executed every time the user opens the page. Procedure In Business Central, go to Menu Track Task Inbox or go to Menu Manage Tasks . On the Task Inbox page or the Manage Tasks page, click the star icon on the left of the page to expand the Saved Filters panel. In the Saved Filters panel, you can view the saved advanced filters. Default filter selection for Tasks or Task Inbox In the Saved Filters panel, set a saved task filter as the default filter. 19.5. Viewing task variables using basic filters Business Central provides basic filters to view task variables in Manage Tasks and Task Inbox . You can view the task variables of the task as columns using Show/hide columns . Procedure In Business Central, go to Menu Manage Tasks or go to Menu Track Task Inbox . On the Task Inbox page, click the filter icon on the left of the page to expand the Filters panel. In the Filters panel, select the Task Name . The filter is applied to the current task list. Click Show/hide columns on the upper right of the tasks list and the task variables of the specified task are displayed. Click the star icon to open the Saved Filters panel. In the Saved Filters panel, you can view all the saved advanced filters. 19.6. Viewing task variables using advanced filters You can use the Advanced Filters option in Business Central to view task variables in Manage Tasks and Task Inbox . When you create a filter with the task defined, you can view the task variables of the task as columns using Show/hide columns . Procedure In Business Central, go to Menu Manage Tasks or go to Menu Track Task Inbox . On the Manage Tasks page or the Task Inbox page, click the advanced filters icon to expand the Advanced Filters panel. In the Advanced Filters panel, enter the name and description of the filter, and click Add New . From the Select column list, select the name attribute. The value will change to name != value1 . From the Select column list, select equals to for the logical query. In the text field, enter the name of the task. Click Save and the filter is applied to the current task list. Click Show/hide columns on the upper right of the tasks list and the task variables of the specified task are displayed. Click the star icon to open the Saved Filters panel. In the Saved Filters panel, you can view all the saved advanced filters. | null | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_process_services_in_red_hat_process_automation_manager/interacting-with-processes-filter-tasks-con
Chapter 30. Managing DNS forwarding in IdM | Chapter 30. Managing DNS forwarding in IdM Follow these procedures to configure DNS global forwarders and DNS forward zones in the Identity Management (IdM) Web UI, the IdM CLI, and using Ansible: The two roles of an IdM DNS server DNS forward policies in IdM Adding a global forwarder in the IdM Web UI Adding a global forwarder in the CLI Adding a DNS Forward Zone in the IdM Web UI Adding a DNS Forward Zone in the CLI Establishing a DNS Global Forwarder in IdM using Ansible Ensuring the presence of a DNS global forwarder in IdM using Ansible Ensuring the absence of a DNS global forwarder in IdM using Ansible Ensuring DNS Global Forwarders are disabled in IdM using Ansible Ensuring the presence of a DNS Forward Zone in IdM using Ansible Ensuring a DNS Forward Zone has multiple forwarders in IdM using Ansible Ensuring a DNS Forward Zone is disabled in IdM using Ansible Ensuring the absence of a DNS Forward Zone in IdM using Ansible 30.1. The two roles of an IdM DNS server DNS forwarding affects how a DNS service answers DNS queries. By default, the Berkeley Internet Name Domain (BIND) service integrated with IdM acts as both an authoritative and a recursive DNS server: Authoritative DNS server When a DNS client queries a name belonging to a DNS zone for which the IdM server is authoritative, BIND replies with data contained in the configured zone. Authoritative data always takes precedence over any other data. Recursive DNS server When a DNS client queries a name for which the IdM server is not authoritative, BIND attempts to resolve the query using other DNS servers. If forwarders are not defined, BIND asks the root servers on the Internet and uses a recursive resolution algorithm to answer the DNS query. In some cases, it is not desirable to let BIND contact other DNS servers directly and perform the recursion based on data available on the Internet. You can configure BIND to use another DNS server, a forwarder , to resolve the query. When you configure BIND to use a forwarder, queries and answers are forwarded back and forth between the IdM server and the forwarder, and the IdM server acts as the DNS cache for non-authoritative data. 30.2. DNS forward policies in IdM IdM supports the first and only standard BIND forward policies, as well as the none IdM-specific forward policy. Forward first (default) The IdM BIND service forwards DNS queries to the configured forwarder. If a query fails because of a server error or timeout, BIND falls back to the recursive resolution using servers on the Internet. The forward first policy is the default policy, and it is suitable for optimizing DNS traffic. Forward only The IdM BIND service forwards DNS queries to the configured forwarder. If a query fails because of a server error or timeout, BIND returns an error to the client. The forward only policy is recommended for environments with split DNS configuration. None (forwarding disabled) DNS queries are not forwarded with the none forwarding policy. Disabling forwarding is only useful as a zone-specific override for global forwarding configuration. This option is the IdM equivalent of specifying an empty list of forwarders in BIND configuration. Note You cannot use forwarding to combine data in IdM with data from other DNS servers. You can only forward queries for specific subzones of the primary zone in IdM DNS. 
By default, the BIND service does not forward queries to another server if the queried DNS name belongs to a zone for which the IdM server is authoritative. In such a situation, if the queried DNS name cannot be found in the IdM database, the NXDOMAIN answer is returned. Forwarding is not used. Example 30.1. Example Scenario The IdM server is authoritative for the test.example. DNS zone. BIND is configured to forward queries to the DNS server with the 192.0.2.254 IP address. When a client sends a query for the nonexistent.test.example. DNS name, BIND detects that the IdM server is authoritative for the test.example. zone and does not forward the query to the 192.0.2.254. server. As a result, the DNS client receives the NXDomain error message, informing the user that the queried domain does not exist. 30.3. Adding a global forwarder in the IdM Web UI Follow this procedure to add a global DNS forwarder in the Identity Management (IdM) Web UI. Prerequisites You are logged in to the IdM WebUI as IdM administrator. You know the Internet Protocol (IP) address of the DNS server to forward queries to. Procedure In the IdM Web UI, select Network Services DNS Global Configuration DNS . In the DNS Global Configuration section, click Add . Specify the IP address of the DNS server that will receive forwarded DNS queries. Select the Forward policy . Click Save at the top of the window. Verification Select Network Services DNS Global Configuration DNS . Verify that the global forwarder, with the forward policy you specified, is present and enabled in the IdM Web UI. 30.4. Adding a global forwarder in the CLI Follow this procedure to add a global DNS forwarder by using the command line (CLI). Prerequisites You are logged in as IdM administrator. You know the Internet Protocol (IP) address of the DNS server to forward queries to. Procedure Use the ipa dnsconfig-mod command to add a new global forwarder. Specify the IP address of the DNS forwarder with the --forwarder option. Verification Use the dnsconfig-show command to display global forwarders. 30.5. Adding a DNS Forward Zone in the IdM Web UI Follow this procedure to add a DNS forward zone in the Identity Management (IdM) Web UI. Important Do not use forward zones unless absolutely required. Forward zones are not a standard solution, and using them can lead to unexpected and problematic behavior. If you must use forward zones, limit their use to overriding a global forwarding configuration. When creating a new DNS zone, Red Hat recommends to always use standard DNS delegation using nameserver (NS) records and to avoid forward zones. In most cases, using a global forwarder is sufficient, and forward zones are not necessary. Prerequisites You are logged in to the IdM WebUI as IdM administrator. You know the Internet Protocol (IP) address of the DNS server to forward queries to. Procedure In the IdM Web UI, select Network Services DNS Forward Zones DNS . In the DNS Forward Zones section, click Add . In the Add DNS forward zone window, specify the forward zone name. Click the Add button and specify the IP address of a DNS server to receive the forwarding request. You can specify multiple forwarders per forward zone. Select the Forward policy . Click Add at the bottom of the window to add the new forward zone. Verification In the IdM Web UI, select Network Services DNS Forward Zones DNS . Verify that the forward zone you created, with the forwarders and forward policy you specified, is present and enabled in the IdM Web UI. 30.6. 
Adding a DNS Forward Zone in the CLI Follow this procedure to add a DNS forward zone by using the command line (CLI). Important Do not use forward zones unless absolutely required. Forward zones are not a standard solution, and using them can lead to unexpected and problematic behavior. If you must use forward zones, limit their use to overriding a global forwarding configuration. When creating a new DNS zone, Red Hat recommends to always use standard DNS delegation using nameserver (NS) records and to avoid forward zones. In most cases, using a global forwarder is sufficient, and forward zones are not necessary. Prerequisites You are logged in as IdM administrator. You know the Internet Protocol (IP) address of the DNS server to forward queries to. Procedure Use the dnsforwardzone-add command to add a new forward zone. Specify at least one forwarder with the --forwarder option if the forward policy is not none , and specify the forward policy with the --forward-policy option. Verification Use the dnsforwardzone-show command to display the DNS forward zone you just created. 30.7. Establishing a DNS Global Forwarder in IdM using Ansible Follow this procedure to use an Ansible playbook to establish a DNS Global Forwarder in IdM. In the example procedure below, the IdM administrator creates a DNS global forwarder to a DNS server with an Internet Protocol (IP) v4 address of 8.8.6.6 and IPv6 address of 2001:4860:4860::8800 on port 53 . Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.14 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You know the IdM administrator password. Procedure Navigate to the /usr/share/doc/ansible-freeipa/playbooks/dnsconfig directory: Open your inventory file and make sure that the IdM server that you want to configure is listed in the [ipaserver] section. For example, to instruct Ansible to configure server.idm.example.com , enter: Make a copy of the set-configuration.yml Ansible playbook file. For example: Open the establish-global-forwarder.yml file for editing. Adapt the file by setting the following variables: Change the name variable for the playbook to Playbook to establish a global forwarder in IdM DNS . In the tasks section, change the name of the task to Create a DNS global forwarder to 8.8.6.6 and 2001:4860:4860::8800 . In the forwarders section of the ipadnsconfig portion: Change the first ip_address value to the IPv4 address of the global forwarder: 8.8.6.6 . Change the second ip_address value to the IPv6 address of the global forwarder: 2001:4860:4860::8800 . Verify the port value is set to 53 . Change the forward_policy to first . This the modified Ansible playbook file for the current example: Save the file. Run the playbook: Additional resources The README-dnsconfig.md file in the /usr/share/doc/ansible-freeipa/ directory 30.8. Ensuring the presence of a DNS global forwarder in IdM using Ansible Follow this procedure to use an Ansible playbook to ensure the presence of a DNS global forwarder in IdM. 
In the example procedure below, the IdM administrator ensures the presence of a DNS global forwarder to a DNS server with an Internet Protocol (IP) v4 address of 7.7.9.9 and IP v6 address of 2001:db8::1:0 on port 53 . Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.14 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You know the IdM administrator password. Procedure Navigate to the /usr/share/doc/ansible-freeipa/playbooks/dnsconfig directory: Open your inventory file and make sure that the IdM server that you want to configure is listed in the [ipaserver] section. For example, to instruct Ansible to configure server.idm.example.com , enter: Make a copy of the forwarders-absent.yml Ansible playbook file. For example: Open the ensure-presence-of-a-global-forwarder.yml file for editing. Adapt the file by setting the following variables: Change the name variable for the playbook to Playbook to ensure the presence of a global forwarder in IdM DNS . In the tasks section, change the name of the task to Ensure the presence of a DNS global forwarder to 7.7.9.9 and 2001:db8::1:0 on port 53 . In the forwarders section of the ipadnsconfig portion: Change the first ip_address value to the IPv4 address of the global forwarder: 7.7.9.9 . Change the second ip_address value to the IPv6 address of the global forwarder: 2001:db8::1:0 . Verify the port value is set to 53 . Change the state to present . This the modified Ansible playbook file for the current example: Save the file. Run the playbook: Additional resources The README-dnsconfig.md file in the /usr/share/doc/ansible-freeipa/ directory 30.9. Ensuring the absence of a DNS global forwarder in IdM using Ansible Follow this procedure to use an Ansible playbook to ensure the absence of a DNS global forwarder in IdM. In the example procedure below, the IdM administrator ensures the absence of a DNS global forwarder with an Internet Protocol (IP) v4 address of 8.8.6.6 and IP v6 address of 2001:4860:4860::8800 on port 53 . Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.14 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You know the IdM administrator password. Procedure Navigate to the /usr/share/doc/ansible-freeipa/playbooks/dnsconfig directory: Open your inventory file and make sure that the IdM server that you want to configure is listed in the [ipaserver] section. For example, to instruct Ansible to configure server.idm.example.com , enter: Make a copy of the forwarders-absent.yml Ansible playbook file. For example: Open the ensure-absence-of-a-global-forwarder.yml file for editing. 
Adapt the file by setting the following variables: Change the name variable for the playbook to Playbook to ensure the absence of a global forwarder in IdM DNS . In the tasks section, change the name of the task to Ensure the absence of a DNS global forwarder to 8.8.6.6 and 2001:4860:4860::8800 on port 53 . In the forwarders section of the ipadnsconfig portion: Change the first ip_address value to the IPv4 address of the global forwarder: 8.8.6.6 . Change the second ip_address value to the IPv6 address of the global forwarder: 2001:4860:4860::8800 . Verify the port value is set to 53 . Set the action variable to member . Verify the state is set to absent . This the modified Ansible playbook file for the current example: Important If you only use the state: absent option in your playbook without also using action: member , the playbook fails. Save the file. Run the playbook: Additional resources The README-dnsconfig.md file in the /usr/share/doc/ansible-freeipa/ directory The action: member option in ipadnsconfig ansible-freeipa modules 30.10. Ensuring DNS Global Forwarders are disabled in IdM using Ansible Follow this procedure to use an Ansible playbook to ensure DNS Global Forwarders are disabled in IdM. In the example procedure below, the IdM administrator ensures that the forwarding policy for the global forwarder is set to none , which effectively disables the global forwarder. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.14 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You know the IdM administrator password. Procedure Navigate to the /usr/share/doc/ansible-freeipa/playbooks/dnsconfig directory: Open your inventory file and make sure that the IdM server that you want to configure is listed in the [ipaserver] section. For example, to instruct Ansible to configure server.idm.example.com , enter: Verify the contents of the disable-global-forwarders.yml Ansible playbook file which is already configured to disable all DNS global forwarders. For example: Run the playbook: Additional resources The README-dnsconfig.md file in the /usr/share/doc/ansible-freeipa/ directory 30.11. Ensuring the presence of a DNS Forward Zone in IdM using Ansible Follow this procedure to use an Ansible playbook to ensure the presence of a DNS Forward Zone in IdM. In the example procedure below, the IdM administrator ensures the presence of a DNS forward zone for example.com to a DNS server with an Internet Protocol (IP) address of 8.8.8.8 . Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.14 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. 
You know the IdM administrator password. Procedure Navigate to the /usr/share/doc/ansible-freeipa/playbooks/dnsconfig directory: Open your inventory file and make sure that the IdM server that you want to configure is listed in the [ipaserver] section. For example, to instruct Ansible to configure server.idm.example.com , enter: Make a copy of the forwarders-absent.yml Ansible playbook file. For example: Open the ensure-presence-forwardzone.yml file for editing. Adapt the file by setting the following variables: Change the name variable for the playbook to Playbook to ensure the presence of a dnsforwardzone in IdM DNS . In the tasks section, change the name of the task to Ensure presence of a dnsforwardzone for example.com to 8.8.8.8 . In the tasks section, change the ipadnsconfig heading to ipadnsforwardzone . In the ipadnsforwardzone section: Add the ipaadmin_password variable and set it to your IdM administrator password. Add the name variable and set it to example.com . In the forwarders section: Remove the ip_address and port lines. Add the IP address of the DNS server to receive forwarded requests by specifying it after a dash: Add the forwardpolicy variable and set it to first . Add the skip_overlap_check variable and set it to true . Change the state variable to present . This the modified Ansible playbook file for the current example: Save the file. Run the playbook: Additional resources See the README-dnsforwardzone.md file in the /usr/share/doc/ansible-freeipa/ directory. 30.12. Ensuring a DNS Forward Zone has multiple forwarders in IdM using Ansible Follow this procedure to use an Ansible playbook to ensure a DNS Forward Zone in IdM has multiple forwarders. In the example procedure below, the IdM administrator ensures the DNS forward zone for example.com is forwarding to 8.8.8.8 and 4.4.4.4 . Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.14 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You know the IdM administrator password. Procedure Navigate to the /usr/share/doc/ansible-freeipa/playbooks/dnsconfig directory: Open your inventory file and make sure that the IdM server that you want to configure is listed in the [ipaserver] section. For example, to instruct Ansible to configure server.idm.example.com , enter: Make a copy of the forwarders-absent.yml Ansible playbook file. For example: Open the ensure-presence-multiple-forwarders.yml file for editing. Adapt the file by setting the following variables: Change the name variable for the playbook to Playbook to ensure the presence of multiple forwarders in a dnsforwardzone in IdM DNS . In the tasks section, change the name of the task to Ensure presence of 8.8.8.8 and 4.4.4.4 forwarders in dnsforwardzone for example.com . In the tasks section, change the ipadnsconfig heading to ipadnsforwardzone . In the ipadnsforwardzone section: Add the ipaadmin_password variable and set it to your IdM administrator password. Add the name variable and set it to example.com . In the forwarders section: Remove the ip_address and port lines. 
Add the IP address of the DNS servers you want to ensure are present, preceded by a dash: Change the state variable to present. This the modified Ansible playbook file for the current example: Save the file. Run the playbook: Additional resources See the README-dnsforwardzone.md file in the /usr/share/doc/ansible-freeipa/ directory. 30.13. Ensuring a DNS Forward Zone is disabled in IdM using Ansible Follow this procedure to use an Ansible playbook to ensure a DNS Forward Zone is disabled in IdM. In the example procedure below, the IdM administrator ensures the DNS forward zone for example.com is disabled. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.14 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You know the IdM administrator password. Procedure Navigate to the /usr/share/doc/ansible-freeipa/playbooks/dnsconfig directory: Open your inventory file and make sure that the IdM server that you want to configure is listed in the [ipaserver] section. For example, to instruct Ansible to configure server.idm.example.com , enter: Make a copy of the forwarders-absent.yml Ansible playbook file. For example: Open the ensure-disabled-forwardzone.yml file for editing. Adapt the file by setting the following variables: Change the name variable for the playbook to Playbook to ensure a dnsforwardzone is disabled in IdM DNS . In the tasks section, change the name of the task to Ensure a dnsforwardzone for example.com is disabled . In the tasks section, change the ipadnsconfig heading to ipadnsforwardzone . In the ipadnsforwardzone section: Add the ipaadmin_password variable and set it to your IdM administrator password. Add the name variable and set it to example.com . Remove the entire forwarders section. Change the state variable to disabled . This the modified Ansible playbook file for the current example: Save the file. Run the playbook: Additional resources See the README-dnsforwardzone.md file in the /usr/share/doc/ansible-freeipa/ directory. 30.14. Ensuring the absence of a DNS Forward Zone in IdM using Ansible Follow this procedure to use an Ansible playbook to ensure the absence of a DNS Forward Zone in IdM. In the example procedure below, the IdM administrator ensures the absence of a DNS forward zone for example.com . Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.14 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You know the IdM administrator password. 
Procedure Navigate to the /usr/share/doc/ansible-freeipa/playbooks/dnsconfig directory: Open your inventory file and make sure that the IdM server that you want to configure is listed in the [ipaserver] section. For example, to instruct Ansible to configure server.idm.example.com , enter: Make a copy of the forwarders-absent.yml Ansible playbook file. For example: Open the ensure-absence-forwardzone.yml file for editing. Adapt the file by setting the following variables: Change the name variable for the playbook to Playbook to ensure the absence of a dnsforwardzone in IdM DNS . In the tasks section, change the name of the task to Ensure the absence of a dnsforwardzone for example.com . In the tasks section, change the ipadnsconfig heading to ipadnsforwardzone . In the ipadnsforwardzone section: Add the ipaadmin_password variable and set it to your IdM administrator password. Add the name variable and set it to example.com . Remove the entire forwarders section. Leave the state variable as absent . This the modified Ansible playbook file for the current example: Save the file. Run the playbook: Additional resources See the README-dnsforwardzone.md file in the /usr/share/doc/ansible-freeipa/ directory. | [
"[user@server ~]USD ipa dnsconfig-mod --forwarder= 10.10.0.1 Server will check DNS forwarder(s). This may take some time, please wait Global forwarders: 10.10.0.1 IPA DNS servers: server.example.com",
"[user@server ~]USD ipa dnsconfig-show Global forwarders: 10.10.0.1 IPA DNS servers: server.example.com",
"[user@server ~]USD ipa dnsforwardzone-add forward.example.com. --forwarder= 10.10.0.14 --forwarder= 10.10.1.15 --forward-policy=first Zone name: forward.example.com. Zone forwarders: 10.10.0.14, 10.10.1.15 Forward policy: first",
"[user@server ~]USD ipa dnsforwardzone-show forward.example.com. Zone name: forward.example.com. Zone forwarders: 10.10.0.14, 10.10.1.15 Forward policy: first",
"cd /usr/share/doc/ansible-freeipa/playbooks/dnsconfig",
"[ipaserver] server.idm.example.com",
"cp set-configuration.yml establish-global-forwarder.yml",
"--- - name: Playbook to establish a global forwarder in IdM DNS hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Create a DNS global forwarder to 8.8.6.6 and 2001:4860:4860::8800 ipadnsconfig: forwarders: - ip_address: 8.8.6.6 - ip_address: 2001:4860:4860::8800 port: 53 forward_policy: first allow_sync_ptr: true",
"ansible-playbook --vault-password-file=password_file -v -i inventory.file establish-global-forwarder.yml",
"cd /usr/share/doc/ansible-freeipa/playbooks/dnsconfig",
"[ipaserver] server.idm.example.com",
"cp forwarders-absent.yml ensure-presence-of-a-global-forwarder.yml",
"--- - name: Playbook to ensure the presence of a global forwarder in IdM DNS hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure the presence of a DNS global forwarder to 7.7.9.9 and 2001:db8::1:0 on port 53 ipadnsconfig: forwarders: - ip_address: 7.7.9.9 - ip_address: 2001:db8::1:0 port: 53 state: present",
"ansible-playbook --vault-password-file=password_file -v -i inventory.file ensure-presence-of-a-global-forwarder.yml",
"cd /usr/share/doc/ansible-freeipa/playbooks/dnsconfig",
"[ipaserver] server.idm.example.com",
"cp forwarders-absent.yml ensure-absence-of-a-global-forwarder.yml",
"--- - name: Playbook to ensure the absence of a global forwarder in IdM DNS hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure the absence of a DNS global forwarder to 8.8.6.6 and 2001:4860:4860::8800 on port 53 ipadnsconfig: forwarders: - ip_address: 8.8.6.6 - ip_address: 2001:4860:4860::8800 port: 53 action: member state: absent",
"ansible-playbook --vault-password-file=password_file -v -i inventory.file ensure-absence-of-a-global-forwarder.yml",
"cd /usr/share/doc/ansible-freeipa/playbooks/dnsconfig",
"[ipaserver] server.idm.example.com",
"cat disable-global-forwarders.yml --- - name: Playbook to disable global DNS forwarders hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Disable global forwarders. ipadnsconfig: forward_policy: none",
"ansible-playbook --vault-password-file=password_file -v -i inventory.file disable-global-forwarders.yml",
"cd /usr/share/doc/ansible-freeipa/playbooks/dnsconfig",
"[ipaserver] server.idm.example.com",
"cp forwarders-absent.yml ensure-presence-forwardzone.yml",
"- 8.8.8.8",
"--- - name: Playbook to ensure the presence of a dnsforwardzone in IdM DNS hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure the presence of a dnsforwardzone for example.com to 8.8.8.8 ipadnsforwardzone: ipaadmin_password: \"{{ ipaadmin_password }}\" name: example.com forwarders: - 8.8.8.8 forwardpolicy: first skip_overlap_check: true state: present",
"ansible-playbook --vault-password-file=password_file -v -i inventory.file ensure-presence-forwardzone.yml",
"cd /usr/share/doc/ansible-freeipa/playbooks/dnsconfig",
"[ipaserver] server.idm.example.com",
"cp forwarders-absent.yml ensure-presence-multiple-forwarders.yml",
"- 8.8.8.8 - 4.4.4.4",
"--- - name: name: Playbook to ensure the presence of multiple forwarders in a dnsforwardzone in IdM DNS hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure presence of 8.8.8.8 and 4.4.4.4 forwarders in dnsforwardzone for example.com ipadnsforwardzone: ipaadmin_password: \"{{ ipaadmin_password }}\" name: example.com forwarders: - 8.8.8.8 - 4.4.4.4 state: present",
"ansible-playbook --vault-password-file=password_file -v -i inventory.file ensure-presence-multiple-forwarders.yml",
"cd /usr/share/doc/ansible-freeipa/playbooks/dnsconfig",
"[ipaserver] server.idm.example.com",
"cp forwarders-absent.yml ensure-disabled-forwardzone.yml",
"--- - name: Playbook to ensure a dnsforwardzone is disabled in IdM DNS hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure a dnsforwardzone for example.com is disabled ipadnsforwardzone: ipaadmin_password: \"{{ ipaadmin_password }}\" name: example.com state: disabled",
"ansible-playbook --vault-password-file=password_file -v -i inventory.file ensure-disabled-forwardzone.yml",
"cd /usr/share/doc/ansible-freeipa/playbooks/dnsconfig",
"[ipaserver] server.idm.example.com",
"cp forwarders-absent.yml ensure-absence-forwardzone.yml",
"--- - name: Playbook to ensure the absence of a dnsforwardzone in IdM DNS hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure the absence of a dnsforwardzone for example.com ipadnsforwardzone: ipaadmin_password: \"{{ ipaadmin_password }}\" name: example.com state: absent",
"ansible-playbook --vault-password-file=password_file -v -i inventory.file ensure-absence-forwardzone.yml"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/using_ansible_to_install_and_manage_identity_management/managing-dns-forwarding-in-idm_using-ansible-to-install-and-manage-identity-management |
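After running the playbooks above, you can verify the resulting configuration directly on an IdM server. The commands below are a minimal verification sketch rather than part of the original procedure; they assume the example.com forward zone used in the playbooks and valid admin credentials, so adjust the zone name to match your environment.
kinit admin
# Show global DNS forwarders and the global forward policy
ipa dnsconfig-show
# Show the forwarders and policy configured for the example.com forward zone
ipa dnsforwardzone-show example.com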
9.16. Write Changes to Disk | 9.16. Write Changes to Disk The installer prompts you to confirm the partitioning options that you selected. Click Write changes to disk to allow the installer to partition your hard drive and install Red Hat Enterprise Linux. Figure 9.47. Writing storage configuration to disk If you are certain that you want to proceed, click Write changes to disk . Warning Up to this point in the installation process, the installer has made no lasting changes to your computer. When you click Write changes to disk , the installer will allocate space on your hard drive and start to transfer Red Hat Enterprise Linux into this space. Depending on the partitioning option that you chose, this process might include erasing data that already exists on your computer. To revise any of the choices that you made up to this point, click Go back . To cancel installation completely, switch off your computer. To switch off most computers at this stage, press the power button and hold it down for a few seconds. After you click Write changes to disk , allow the installation process to complete. If the process is interrupted (for example, by you switching off or resetting the computer, or by a power outage) you will probably not be able to use your computer until you restart and complete the Red Hat Enterprise Linux installation process, or install a different operating system. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/Write_changes_to_disk-x86 |
Chapter 6. Managing user passwords in IdM | Chapter 6. Managing user passwords in IdM 6.1. Who can change IdM user passwords and how Regular users without the permission to change other users' passwords can change only their own personal password. The new password must meet the IdM password policies applicable to the groups of which the user is a member. For details on configuring password policies, see Defining IdM password policies . Administrators and users with password change rights can set initial passwords for new users and reset passwords for existing users. These passwords: Do not have to meet the IdM password policies. Expire after the first successful login. When this happens, IdM prompts the user to change the expired password immediately. To disable this behavior, see Enabling password reset in IdM without prompting the user for a password change at the login . Note The LDAP Directory Manager (DM) user can change user passwords using LDAP tools. The new password can override any IdM password policies. Passwords set by DM do not expire after the first login. 6.2. Changing your user password in the IdM Web UI As an Identity Management (IdM) user, you can change your user password in the IdM Web UI. Prerequisites You are logged in to the IdM Web UI. Procedure In the upper right corner, click User name Change password . Figure 6.1. Resetting Password Enter the current and new passwords. 6.3. Resetting another user's password in the IdM Web UI As an administrative user of Identity Management (IdM), you can change passwords for other users in the IdM Web UI. Prerequisites You are logged in to the IdM Web UI as an administrative user. Procedure Select Identity Users . Click the name of the user to edit. Click Actions Reset password . Figure 6.2. Resetting Password Enter the new password, and click Reset Password . Figure 6.3. Confirming New Password 6.4. Resetting the Directory Manager user password If you lose the Identity Management (IdM) Directory Manager password, you can reset it. Prerequisites You have root access to an IdM server. Procedure Generate a new password hash by using the pwdhash command. For example: By specifying the path to the Directory Server configuration, you automatically use the password storage scheme set in the nsslapd-rootpwstoragescheme attribute to encrypt the new password. On every IdM server in your topology, execute the following steps: Stop all IdM services installed on the server: Edit the /etc/dirsrv/IDM-EXAMPLE-COM/dse.ldif file and set the nsslapd-rootpw attribute to the value generated by the pwdhash command: Start all IdM services installed on the server: 6.5. Changing your user password or resetting another user's password in IdM CLI You can change your user password using the Identity Management (IdM) command-line interface (CLI). If you are an administrative user, you can use the CLI to reset another user's password. Prerequisites You have obtained a ticket-granting ticket (TGT) for an IdM user. If you are resetting another user's password, you must have obtained a TGT for an administrative user in IdM. Procedure Enter the ipa user-mod command with the name of the user and the --password option. The command will prompt you for the new password. Note You can also use the ipa passwd idm_user command instead of ipa user-mod . 6.6. Enabling password reset in IdM without prompting the user for a password change at the login By default, when an administrator resets another user's password, the password expires after the first successful login. 
As IdM Directory Manager, you can specify the following privileges for individual IdM administrators: They can perform password change operations without requiring users to change their passwords subsequently on their first login. They can bypass the password policy so that no strength or history enforcement is applied. Warning Bypassing the password policy can be a security threat. Exercise caution when selecting users to whom you grant these additional privileges. Prerequisites You know the Directory Manager password. Procedure On every Identity Management (IdM) server in the domain, make the following changes: Enter the ldapmodify command to modify LDAP entries. Specify the name of the IdM server and the 389 port and press Enter: Enter the Directory Manager password. Enter the distinguished name for the ipa_pwd_extop password synchronization entry and press Enter: Specify the modify type of change and press Enter: Specify what type of modification you want LDAP to execute and to which attribute. Press Enter: Specify the administrative user accounts in the passSyncManagersDNs attribute. The attribute is multi-valued. For example, to grant the admin user the password resetting powers of Directory Manager: Press Enter twice to stop editing the entry. The whole procedure looks as follows: The admin user, listed under passSyncManagerDNs , now has the additional privileges. 6.7. Checking if an IdM user's account is locked As an Identity Management (IdM) administrator, you can check if an IdM user's account is locked. For that, you must compare a user's maximum allowed number of failed login attempts with the number of the user's actual failed logins. Prerequisites You have obtained the ticket-granting ticket (TGT) of an administrative user in IdM. Procedure Display the status of the user account to see the number of failed logins: Display the number of allowed login attempts for a particular user: Log in to the IdM Web UI as IdM administrator. Open the Identity Users Active users tab. Click the user name to open the user settings. In the Password policy section, locate the Max failures item. Compare the number of failed logins as displayed in the output of the ipa user-status command with the Max failures number displayed in the IdM Web UI. If the number of failed logins equals that of maximum allowed login attempts, the user account is locked. Additional resources Unlocking user accounts after password failures in IdM 6.8. Unlocking user accounts after password failures in IdM If a user attempts to log in using an incorrect password a certain number of times, Identity Management (IdM) locks the user account, which prevents the user from logging in. For security reasons, IdM does not display any warning message that the user account has been locked. Instead, the CLI prompt might continue asking the user for a password again and again. IdM automatically unlocks the user account after a specified amount of time has passed. Alternatively, you can unlock the user account manually with the following procedure. Prerequisites You have obtained the ticket-granting ticket of an IdM administrative user. Procedure To unlock a user account, use the ipa user-unlock command. After this, the user can log in again. Additional resources Checking if an IdM user's account is locked 6.9. 
Enabling the tracking of last successful Kerberos authentication for users in IdM For performance reasons, Identity Management (IdM) running in Red Hat Enterprise Linux 8 does not store the time stamp of the last successful Kerberos authentication of a user. As a consequence, certain commands, such as ipa user-status , do not display the time stamp. Prerequisites You have obtained the ticket-granting ticket (TGT) of an administrative user in IdM. You have root access to the IdM server on which you are executing the procedure. Procedure Display the currently enabled password plug-in features: The output shows that the KDC:Disable Last Success plug-in is enabled. The plug-in hides the last successful Kerberos authentication attempt from being visible in the ipa user-status output. Add the --ipaconfigstring= feature parameter for every feature to the ipa config-mod command that is currently enabled, except for KDC:Disable Last Success : This command enables only the AllowNThash plug-in. To enable multiple features, specify the --ipaconfigstring= feature parameter separately for each feature. Restart IdM: | [
"pwdhash -D /etc/dirsrv/slapd-IDM-EXAMPLE-COM password {PBKDF2_SHA256}AAAgABU0bKhyjY53NcxY33ueoPjOUWtl4iyYN5uW",
"ipactl stop",
"nsslapd-rootpw: {PBKDF2_SHA256}AAAgABU0bKhyjY53NcxY33ueoPjOUWtl4iyYN5uW",
"ipactl start",
"ipa user-mod idm_user --password Password: Enter Password again to verify: -------------------- Modified user \"idm_user\" --------------------",
"ldapmodify -x -D \"cn=Directory Manager\" -W -h server.idm.example.com -p 389 Enter LDAP Password:",
"dn: cn=ipa_pwd_extop,cn=plugins,cn=config",
"changetype: modify",
"add: passSyncManagersDNs",
"passSyncManagersDNs: uid=admin,cn=users,cn=accounts,dc=example,dc=com",
"ldapmodify -x -D \"cn=Directory Manager\" -W -h server.idm.example.com -p 389 Enter LDAP Password: dn: cn=ipa_pwd_extop,cn=plugins,cn=config changetype: modify add: passSyncManagersDNs passSyncManagersDNs: uid=admin,cn=users,cn=accounts,dc=example,dc=com",
"ipa user-status example_user ----------------------- Account disabled: False ----------------------- Server: idm.example.com Failed logins: 8 Last successful authentication: N/A Last failed authentication: 20220229080317Z Time now: 2022-02-29T08:04:46Z ---------------------------- Number of entries returned 1 ----------------------------",
"ipa user-unlock idm_user ----------------------- Unlocked account \"idm_user\" -----------------------",
"ipa config-show | grep \"Password plugin features\" Password plugin features: AllowNThash , KDC:Disable Last Success",
"ipa config-mod --ipaconfigstring='AllowNThash'",
"ipactl restart"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_idm_users_groups_hosts_and_access_control_rules/managing-user-passwords-in-idm_managing-users-groups-hosts |
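To confirm that the passSyncManagersDNs change described above was applied, you can read the entry back with ldapsearch. This is an illustrative check only, assuming the same server.idm.example.com host and the Directory Manager credentials used in the procedure.
# Print the passSyncManagersDNs values of the password synchronization entry
ldapsearch -x -D "cn=Directory Manager" -W -H ldap://server.idm.example.com -b "cn=ipa_pwd_extop,cn=plugins,cn=config" passSyncManagersDNs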
13.5. Migrating from NIS to IdM | 13.5. Migrating from NIS to IdM There is no direct migration path from NIS to Identity Management. This is a manual process with three major steps: setting up netgroup entries in IdM, exporting the existing data from NIS, and importing that data into IdM. There are several options for how to set up the IdM environment and how to export data; the best option depends on the type of data and the overall network environment that you have. 13.5.1. Preparing Netgroup Entries in IdM The first step is to identify what kinds of identities are being managed by NIS. Frequently, a NIS server is used for either user entries or host entries, but not for both, which can simplify the data migration process. For user entries Determine what applications are using the user information in the NIS server. While some clients (like sudo ) require NIS netgroups, many clients can use Unix groups instead. If no netgroups are required, then simply create corresponding user accounts in IdM and delete the netgroups entirely. Otherwise, create the user entries in IdM and then create an IdM-managed netgroup and add those users as members. This is described in Section 13.3, "Creating Netgroups" . For host entries Whenever a host group is created in IdM, a corresponding shadow NIS group is automatically created. These netgroups can then be managed using the ipa-host-net-manage command. For a direct conversion It may be necessary to have an exact conversion, with every NIS user and host having an exact corresponding entry in IdM. In that case, each entry can be created using the original NIS names: Create an entry for every user referenced in a netgroup. Create an entry for every host referenced in a netgroup. Create a netgroup with the same name as the original netgroup. Add the users and hosts as direct members of the netgroup. Alternatively, add the users and hosts into IdM groups or other netgroups, and then add those groups as members to the netgroup. 13.5.2. Enabling the NIS Listener in Identity Management The IdM Directory Server can function as a limited NIS server. The slapi-nis plug-in sets up a special NIS listener that receives incoming NIS requests and manages the NIS maps within the Directory Server. Identity Management uses three NIS maps: passwd group netgroup Using IdM as an intermediate NIS server offers a reasonable way to handle NIS requests while migrating NIS clients and data. The slapi-nis plug-in is not enabled by default. To enable NIS for Identity Management: Obtain new Kerberos credentials as an IdM admin user. Enable the NIS listener and compatibility plug-ins: Restart the DNS and Directory Server service: 13.5.3. Exporting and Importing the Existing NIS Data NIS can contain information for users, groups, DNS and hosts, netgroups, and automount maps. Any of these entry types can be migrated over to the IdM server. Migration is performed by exporting the data using ypcat and then looping through that output and creating the IdM entries with the corresponding ipa *-add commands. While this could be done manually, it is easiest to script it. These examples use a shell script. 13.5.3.1. Importing User Entries The /etc/passwd file contains all of the NIS user information. These entries can be used to create IdM user accounts with UID, GID, gecos, shell, home directory, and name attributes that mirror the NIS entries. 
For example, this is nis-user.sh : #!/bin/sh # 1 is the nis domain, 2 is the nis master server ypcat -d USD1 -h USD2 passwd > /dev/shm/nis-map.passwd 2>&1 IFS=USD'\n' for line in USD(cat /dev/shm/nis-map.passwd); do IFS=' ' username=USD(echo USDline|cut -f1 -d:) # Not collecting encrypted password because we need cleartext password to create kerberos key uid=USD(echo USDline|cut -f3 -d:) gid=USD(echo USDline|cut -f4 -d:) gecos=USD(echo USDline|cut -f5 -d:) homedir=USD(echo USDline|cut -f6 -d:) shell=USD(echo USDline|cut -f7 -d:) # Now create this entry echo passw0rd1|ipa user-add USDusername --first=NIS --last=USER --password --gidnumber=USDgid --uid=USDuid --gecos=USDgecos --homedir=USDhomedir --shell=USDshell ipa user-show USDusername done This can be run for a given NIS domain: Note This script does not migrate user passwords. Rather, it creates a temporary password which users are then prompted to change when they log in. 13.5.3.2. Importing Group Entries The /etc/group file contains all of the NIS group information. These entries can be used to create IdM user group accounts with the GID, gecos, shell, home directory, and name attributes that mirror the NIS entries. For example, this is nis-group.sh : #!/bin/sh # 1 is the nis domain, 2 is the nis master server ypcat -d USD1 -h USD2 group > /dev/shm/nis-map.group 2>&1 IFS=USD'\n' for line in USD(cat /dev/shm/nis-map.group); do IFS=' ' groupname=USD(echo USDline|cut -f1 -d:) # Not collecting encrypted password because we need cleartext password to create kerberos key gid=USD(echo USDline|cut -f3 -d:) members=USD(echo USDline|cut -f4 -d:) # Now create this entry ipa group-add USDgroupname --desc=NIS_GROUP_USDgroupname --gid=USDgid if [ -n "USDmembers" ]; then ipa group-add-member USDgroupname --users=USDmembers fi ipa group-show USDgroupname done This can be run for a given NIS domain: 13.5.3.3. Importing Host Entries The /etc/hosts file contains all of the NIS host information. These entries can be used to create IdM host accounts that mirror the NIS entries. For example, this is nis-hosts.sh : #!/bin/sh # 1 is the nis domain, 2 is the nis master server ypcat -d USD1 -h USD2 hosts | egrep -v "localhost|127.0.0.1" > /dev/shm/nis-map.hosts 2>&1 IFS=USD'\n' for line in USD(cat /dev/shm/nis-map.hosts); do IFS=' ' ipaddress=USD(echo USDline|awk '{print USD1}') hostname=USD(echo USDline|awk '{print USD2}') master=USD(ipa env xmlrpc_uri |tr -d '[:space:]'|cut -f3 -d:|cut -f3 -d/) domain=USD(ipa env domain|tr -d '[:space:]'|cut -f2 -d:) if [ USD(echo USDhostname|grep "\." |wc -l) -eq 0 ]; then hostname=USD(echo USDhostname.USDdomain) fi zone=USD(echo USDhostname|cut -f2- -d.) if [ USD(ipa dnszone-show USDzone 2>/dev/null | wc -l) -eq 0 ]; then ipa dnszone-add --name-server=USDmaster --admin-email=root.USDmaster fi ptrzone=USD(echo USDipaddress|awk -F. '{print USD3 "." USD2 "." USD1 ".in-addr.arpa."}') if [ USD(ipa dnszone-show USDptrzone 2>/dev/null|wc -l) -eq 0 ]; then ipa dnszone-add USDptrzone --name-server=USDmaster --admin-email=root.USDmaster fi # Now create this entry ipa host-add USDhostname --ip-address=USDipaddress ipa host-show USDhostname done This can be run for a given NIS domain: Note This script example does not account for special host scenarios, such as using aliases. 13.5.3.4. Importing Netgroup Entries The /etc/netgroup file contains all of the NIS netgroup information. These entries can be used to create IdM netgroup accounts that mirror the NIS entries. 
For example, this is nis-netgroup.sh : #!/bin/sh # 1 is the nis domain, 2 is the nis master server ypcat -k -d USD1 -h USD2 netgroup > /dev/shm/nis-map.netgroup 2>&1 IFS=USD'\n' for line in USD(cat /dev/shm/nis-map.netgroup); do IFS=' ' netgroupname=USD(echo USDline|awk '{print USD1}') triples=USD(echo USDline|sed "s/^USDnetgroupname //") echo "ipa netgroup-add USDnetgroupname --desc=NIS_NG_USDnetgroupname" if [ USD(echo USDline|grep "(,"|wc -l) -gt 0 ]; then echo "ipa netgroup-mod USDnetgroupname --hostcat=all" fi if [ USD(echo USDline|grep ",,"|wc -l) -gt 0 ]; then echo "ipa netgroup-mod USDnetgroupname --usercat=all" fi for triple in USDtriples; do triple=USD(echo USDtriple|sed -e 's/-//g' -e 's/(//' -e 's/)//') if [ USD(echo USDtriple|grep ",.*,"|wc -l) -gt 0 ]; then hostname=USD(echo USDtriple|cut -f1 -d,) username=USD(echo USDtriple|cut -f2 -d,) domain=USD(echo USDtriple|cut -f3 -d,) hosts=""; users=""; doms=""; [ -n "USDhostname" ] && hosts="--hosts=USDhostname" [ -n "USDusername" ] && users="--users=USDusername" [ -n "USDdomain" ] && doms="--nisdomain=USDdomain" echo "ipa netgroup-add-member USDhosts USDusers USDdoms" else netgroup=USDtriple echo "ipa netgroup-add USDnetgroup --desc=NIS_NG_USDnetgroup" fi done done As explained briefly in Section 13.1, "About NIS and Identity Management" , NIS entries exist in a set of three values, called a triple. The triple is host,user,domain , but not every component is required; commonly, a triple only defines a host and domain or user and domain. The way this script is written, the ipa netgroup-add-member command always adds a host, user, and domain triple to the netgroup. if [ USD(echo USDtriple|grep ",.*,"|wc -l) -gt 0 ]; then hostname=USD(echo USDtriple|cut -f1 -d,) username=USD(echo USDtriple|cut -f2 -d,) domain=USD(echo USDtriple|cut -f3 -d,) hosts=""; users=""; doms=""; [ -n "USDhostname" ] && hosts="--hosts=USDhostname" [ -n "USDusername" ] && users="--users=USDusername" [ -n "USDdomain" ] && doms="--nisdomain=USDdomain" echo "ipa netgroup-add-member USDhosts USDusers USDdoms" Any missing element is added as a blank, so the triple is properly migrated. For example, for the triple server,,domain the options with the member add command are --hosts=server --users="" --nisdomain=domain . This can be run for a given NIS domain by specifying the NIS domain and NIS server: 13.5.3.5. Importing Automount Maps Automount maps are actually a series of nested and inter-related entries that define the location (the parent entry), and then associated keys and maps. While the data are the same in the NIS and IdM entries, the way that data are defined is different. The NIS information is exported and then used to construct an LDAP entry for the automount location and associated map; it then creates an entry for every configured key for the map. Unlike the other NIS migration script examples, this script takes options to create an automount location and a map name, along with the migrated NIS domain and server. 
#!/bin/sh # 1 is for the automount entry in ipa ipa automountlocation-add USD1 # 2 is the nis domain, 3 is the nis master server, 4 is the map name ypcat -k -d USD2 -h USD3 USD4 > /dev/shm/nis-map.USD4 2>&1 ipa automountmap-add USD1 USD4 basedn=USD(ipa env basedn|tr -d '[:space:]'|cut -f2 -d:) cat > /tmp/amap.ldif <<EOF dn: nis-domain=nisdomain.example.com+nis-map=USD4,cn=NIS Server,cn=plugins,cn=config objectClass: extensibleObject nis-domain: USD3 nis-map: USD4 nis-base: automountmapname=USD4,cn=nis,cn=automount,USDbasedn nis-filter: (objectclass=*) nis-key-format: %{automountKey} nis-value-format: %{automountInformation} EOF ldapadd -x -h USD3 -D "cn=directory manager" -w secret -f /tmp/amap.ldif IFS=USD'\n' for line in USD(cat /dev/shm/nis-map.USD4); do IFS=" " key=USD(echo "USDline" | awk '{print USD1}') info=USD(echo "USDline" | sed -e "s#^USDkey[ \t]*##") ipa automountkey-add nis USD4 --key="USDkey" --info="USDinfo" done This can be run for a given NIS domain: 13.5.4. Setting Weak Password Encryption for NIS User Authentication to IdM A NIS server can handle CRYPT password hashes. Once an existing NIS server is migrated to IdM (and its underlying LDAP database), it may still be necessary to preserve the NIS-supported CRYPT passwords. However, the LDAP server does not use CRYPT hashes by default. It uses salted SHA (SSHA) or SSHA-256. If the 389 Directory Server password hash is not changed, then NIS users cannot authenticate to the IdM domain, and kinit fails with password failures. To set the underlying 389 Directory Server to use CRYPT as the password hash, change the passwordStorageScheme attribute using ldapmodify : Note Changing the password storage scheme only applies the scheme to new passwords; it does not retroactively change the encryption method used for existing passwords. If weak crypto is required for password hashes, it is better to change the setting as early as possible so that more user passwords use the weaker password hash. | [
"kinit admin",
"ipa-nis-manage enable ipa-compat-manage enable",
"service rpcbind restart service dirsrv restart",
"#!/bin/sh 1 is the nis domain, 2 is the nis master server ypcat -d USD1 -h USD2 passwd > /dev/shm/nis-map.passwd 2>&1 IFS=USD'\\n' for line in USD(cat /dev/shm/nis-map.passwd); do IFS=' ' username=USD(echo USDline|cut -f1 -d:) # Not collecting encrypted password because we need cleartext password to create kerberos key uid=USD(echo USDline|cut -f3 -d:) gid=USD(echo USDline|cut -f4 -d:) gecos=USD(echo USDline|cut -f5 -d:) homedir=USD(echo USDline|cut -f6 -d:) shell=USD(echo USDline|cut -f7 -d:) # Now create this entry echo passw0rd1|ipa user-add USDusername --first=NIS --last=USER --password --gidnumber=USDgid --uid=USDuid --gecos=USDgecos --homedir=USDhomedir --shell=USDshell ipa user-show USDusername done",
"kinit admin ./nis-user.sh nisdomain nis-master.example.com",
"#!/bin/sh 1 is the nis domain, 2 is the nis master server ypcat -d USD1 -h USD2 group > /dev/shm/nis-map.group 2>&1 IFS=USD'\\n' for line in USD(cat /dev/shm/nis-map.group); do IFS=' ' groupname=USD(echo USDline|cut -f1 -d:) # Not collecting encrypted password because we need cleartext password to create kerberos key gid=USD(echo USDline|cut -f3 -d:) members=USD(echo USDline|cut -f4 -d:) # Now create this entry ipa group-add USDgroupname --desc=NIS_GROUP_USDgroupname --gid=USDgid if [ -n \"USDmembers\" ]; then ipa group-add-member USDgroupname --users=USDmembers fi ipa group-show USDgroupname done",
"kinit admin ./nis-group.sh nisdomain nis-master.example.com",
"#!/bin/sh 1 is the nis domain, 2 is the nis master server ypcat -d USD1 -h USD2 hosts | egrep -v \"localhost|127.0.0.1\" > /dev/shm/nis-map.hosts 2>&1 IFS=USD'\\n' for line in USD(cat /dev/shm/nis-map.hosts); do IFS=' ' ipaddress=USD(echo USDline|awk '{print USD1}') hostname=USD(echo USDline|awk '{print USD2}') master=USD(ipa env xmlrpc_uri |tr -d '[:space:]'|cut -f3 -d:|cut -f3 -d/) domain=USD(ipa env domain|tr -d '[:space:]'|cut -f2 -d:) if [ USD(echo USDhostname|grep \"\\.\" |wc -l) -eq 0 ]; then hostname=USD(echo USDhostname.USDdomain) fi zone=USD(echo USDhostname|cut -f2- -d.) if [ USD(ipa dnszone-show USDzone 2>/dev/null | wc -l) -eq 0 ]; then ipa dnszone-add --name-server=USDmaster --admin-email=root.USDmaster fi ptrzone=USD(echo USDipaddress|awk -F. '{print USD3 \".\" USD2 \".\" USD1 \".in-addr.arpa.\"}') if [ USD(ipa dnszone-show USDptrzone 2>/dev/null|wc -l) -eq 0 ]; then ipa dnszone-add USDptrzone --name-server=USDmaster --admin-email=root.USDmaster fi # Now create this entry ipa host-add USDhostname --ip-address=USDipaddress ipa host-show USDhostname done",
"kinit admin ./nis-hosts.sh nisdomain nis-master.example.com",
"#!/bin/sh 1 is the nis domain, 2 is the nis master server ypcat -k -d USD1 -h USD2 netgroup > /dev/shm/nis-map.netgroup 2>&1 IFS=USD'\\n' for line in USD(cat /dev/shm/nis-map.netgroup); do IFS=' ' netgroupname=USD(echo USDline|awk '{print USD1}') triples=USD(echo USDline|sed \"s/^USDnetgroupname //\") echo \"ipa netgroup-add USDnetgroupname --desc=NIS_NG_USDnetgroupname\" if [ USD(echo USDline|grep \"(,\"|wc -l) -gt 0 ]; then echo \"ipa netgroup-mod USDnetgroupname --hostcat=all\" fi if [ USD(echo USDline|grep \",,\"|wc -l) -gt 0 ]; then echo \"ipa netgroup-mod USDnetgroupname --usercat=all\" fi for triple in USDtriples; do triple=USD(echo USDtriple|sed -e 's/-//g' -e 's/(//' -e 's/)//') if [ USD(echo USDtriple|grep \",.*,\"|wc -l) -gt 0 ]; then hostname=USD(echo USDtriple|cut -f1 -d,) username=USD(echo USDtriple|cut -f2 -d,) domain=USD(echo USDtriple|cut -f3 -d,) hosts=\"\"; users=\"\"; doms=\"\"; [ -n \"USDhostname\" ] && hosts=\"--hosts=USDhostname\" [ -n \"USDusername\" ] && users=\"--users=USDusername\" [ -n \"USDdomain\" ] && doms=\"--nisdomain=USDdomain\" echo \"ipa netgroup-add-member USDhosts USDusers USDdoms\" else netgroup=USDtriple echo \"ipa netgroup-add USDnetgroup --desc=NIS_NG_USDnetgroup\" fi done done",
"if [ USD(echo USDtriple|grep \",.*,\"|wc -l) -gt 0 ]; then hostname=USD(echo USDtriple|cut -f1 -d,) username=USD(echo USDtriple|cut -f2 -d,) domain=USD(echo USDtriple|cut -f3 -d,) hosts=\"\"; users=\"\"; doms=\"\"; [ -n \"USDhostname\" ] && hosts=\"--hosts=USDhostname\" [ -n \"USDusername\" ] && users=\"--users=USDusername\" [ -n \"USDdomain\" ] && doms=\"--nisdomain=USDdomain\" echo \"ipa netgroup-add-member USDhosts USDusers USDdoms\"",
"kinit admin ./nis-hosts.sh nisdomain nis-master.example.com",
"#!/bin/sh 1 is for the automount entry in ipa ipa automountlocation-add USD1 2 is the nis domain, 3 is the nis master server, 4 is the map name ypcat -k -d USD2 -h USD3 USD4 > /dev/shm/nis-map.USD4 2>&1 ipa automountmap-add USD1 USD4 basedn=USD(ipa env basedn|tr -d '[:space:]'|cut -f2 -d:) cat > /tmp/amap.ldif <<EOF dn: nis-domain=nisdomain.example.com+nis-map=USD4,cn=NIS Server,cn=plugins,cn=config objectClass: extensibleObject nis-domain: USD3 nis-map: USD4 nis-base: automountmapname=USD4,cn=nis,cn=automount,USDbasedn nis-filter: (objectclass=*) nis-key-format: %{automountKey} nis-value-format: %{automountInformation} EOF ldapadd -x -h USD3 -D \"cn=directory manager\" -w secret -f /tmp/amap.ldif IFS=USD'\\n' for line in USD(cat /dev/shm/nis-map.USD4); do IFS=\" \" key=USD(echo \"USDline\" | awk '{print USD1}') info=USD(echo \"USDline\" | sed -e \"s#^USDkey[ \\t]*##\") ipa automountkey-add nis USD4 --key=\"USDkey\" --info=\"USDinfo\" done",
"kinit admin ./nis-hosts.sh location nisdomain nis-master.example.com map",
"ldapmodify -D \"cn=directory server\" -w secret -p 389 -h ipaserver.example.com dn: cn=config changetype: modify replace: passwordStorageScheme passwordStorageScheme: crypt"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/identity_management_guide/migrating-from-nis |
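After the import scripts have run, it is worth spot-checking a few migrated entries before decommissioning the NIS server. The following commands are a sketch only; jsmith, editors, and nisusers are placeholder names, so substitute users, groups, and netgroups that actually exist in your NIS maps.
kinit admin
# Confirm that a migrated user, group, and netgroup exist in IdM
ipa user-show jsmith
ipa group-show editors
ipa netgroup-show nisusers
# On an enrolled client, confirm that SSSD resolves the migrated user
getent passwd jsmith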
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/migration_guide/making-open-source-more-inclusive |
Chapter 8. Interoperability | Chapter 8. Interoperability This chapter discusses how to use AMQ Ruby in combination with other AMQ components. For an overview of the compatibility of AMQ components, see the product introduction . 8.1. Interoperating with other AMQP clients AMQP messages are composed using the AMQP type system . This common format is one of the reasons AMQP clients in different languages are able to interoperate with each other. When sending messages, AMQ Ruby automatically converts language-native types to AMQP-encoded data. When receiving messages, the reverse conversion takes place. Note More information about AMQP types is available at the interactive type reference maintained by the Apache Qpid project. Table 8.1. AMQP types AMQP type Description null An empty value boolean A true or false value char A single Unicode character string A sequence of Unicode characters binary A sequence of bytes byte A signed 8-bit integer short A signed 16-bit integer int A signed 32-bit integer long A signed 64-bit integer ubyte An unsigned 8-bit integer ushort An unsigned 16-bit integer uint An unsigned 32-bit integer ulong An unsigned 64-bit integer float A 32-bit floating point number double A 64-bit floating point number array A sequence of values of a single type list A sequence of values of variable type map A mapping from distinct keys to values uuid A universally unique identifier symbol A 7-bit ASCII string from a constrained domain timestamp An absolute point in time Table 8.2. AMQ Ruby types before encoding and after decoding AMQP type AMQ Ruby type before encoding AMQ Ruby type after decoding null nil nil boolean true, false true, false char - String string String String binary - String byte - Integer short - Integer int - Integer long Integer Integer ubyte - Integer ushort - Integer uint - Integer ulong - Integer float - Float double Float Float array - Array list Array Array map Hash Hash symbol Symbol Symbol timestamp Date, Time Time Table 8.3. AMQ Ruby and other AMQ client types (1 of 2) AMQ Ruby type before encoding AMQ C++ type AMQ JavaScript type nil nullptr null true, false bool boolean String std::string string Integer int64_t number Float double number Array std::vector Array Hash std::map object Symbol proton::symbol string Date, Time proton::timestamp number Table 8.4. AMQ Ruby and other AMQ client types (2 of 2) AMQ Ruby type before encoding AMQ .NET type AMQ Python type nil null None true, false System.Boolean bool String System.String unicode Integer System.Int64 long Float System.Double float Array Amqp.List list Hash Amqp.Map dict Symbol Amqp.Symbol str Date, Time System.DateTime long 8.2. Interoperating with AMQ JMS AMQP defines a standard mapping to the JMS messaging model. This section discusses the various aspects of that mapping. For more information, see the AMQ JMS Interoperability chapter. JMS message types AMQ Ruby provides a single message type whose body type can vary. By contrast, the JMS API uses different message types to represent different kinds of data. The table below indicates how particular body types map to JMS message types. For more explicit control of the resulting JMS message type, you can set the x-opt-jms-msg-type message annotation. See the AMQ JMS Interoperability chapter for more information. Table 8.5. AMQ Ruby and JMS message types AMQ Ruby body type JMS message type String TextMessage nil TextMessage - BytesMessage Any other type ObjectMessage 8.3. 
Connecting to AMQ Broker AMQ Broker is designed to interoperate with AMQP 1.0 clients. Check the following to ensure the broker is configured for AMQP messaging: Port 5672 in the network firewall is open. The AMQ Broker AMQP acceptor is enabled. See Default acceptor settings . The necessary addresses are configured on the broker. See Addresses, Queues, and Topics . The broker is configured to permit access from your client, and the client is configured to send the required credentials. See Broker Security . 8.4. Connecting to AMQ Interconnect AMQ Interconnect works with any AMQP 1.0 client. Check the following to ensure the components are configured correctly: Port 5672 in the network firewall is open. The router is configured to permit access from your client, and the client is configured to send the required credentials. See Securing network connections . | null | https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_the_amq_ruby_client/interoperability |
Chapter 2. BMCEventSubscription [metal3.io/v1alpha1] | Chapter 2. BMCEventSubscription [metal3.io/v1alpha1] Description BMCEventSubscription is the Schema for the fast eventing API Type object 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object status object 2.1.1. .spec Description Type object Property Type Description context string Arbitrary user-provided context for the event destination string A webhook URL to send events to hostName string A reference to a BareMetalHost httpHeadersRef object A secret containing HTTP headers which should be passed along to the Destination when making a request 2.1.2. .spec.httpHeadersRef Description A secret containing HTTP headers which should be passed along to the Destination when making a request Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 2.1.3. .status Description Type object Property Type Description error string subscriptionID string 2.2. API endpoints The following API endpoints are available: /apis/metal3.io/v1alpha1/bmceventsubscriptions GET : list objects of kind BMCEventSubscription /apis/metal3.io/v1alpha1/namespaces/{namespace}/bmceventsubscriptions DELETE : delete collection of BMCEventSubscription GET : list objects of kind BMCEventSubscription POST : create a BMCEventSubscription /apis/metal3.io/v1alpha1/namespaces/{namespace}/bmceventsubscriptions/{name} DELETE : delete a BMCEventSubscription GET : read the specified BMCEventSubscription PATCH : partially update the specified BMCEventSubscription PUT : replace the specified BMCEventSubscription /apis/metal3.io/v1alpha1/namespaces/{namespace}/bmceventsubscriptions/{name}/status GET : read status of the specified BMCEventSubscription PATCH : partially update status of the specified BMCEventSubscription PUT : replace status of the specified BMCEventSubscription 2.2.1. /apis/metal3.io/v1alpha1/bmceventsubscriptions Table 2.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. 
Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. 
watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind BMCEventSubscription Table 2.2. HTTP responses HTTP code Reponse body 200 - OK BMCEventSubscriptionList schema 401 - Unauthorized Empty 2.2.2. /apis/metal3.io/v1alpha1/namespaces/{namespace}/bmceventsubscriptions Table 2.3. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 2.4. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of BMCEventSubscription Table 2.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 2.6. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind BMCEventSubscription Table 2.7. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. 
limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 2.8. HTTP responses HTTP code Reponse body 200 - OK BMCEventSubscriptionList schema 401 - Unauthorized Empty HTTP method POST Description create a BMCEventSubscription Table 2.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.10. Body parameters Parameter Type Description body BMCEventSubscription schema Table 2.11. HTTP responses HTTP code Reponse body 200 - OK BMCEventSubscription schema 201 - Created BMCEventSubscription schema 202 - Accepted BMCEventSubscription schema 401 - Unauthorized Empty 2.2.3. /apis/metal3.io/v1alpha1/namespaces/{namespace}/bmceventsubscriptions/{name} Table 2.12. Global path parameters Parameter Type Description name string name of the BMCEventSubscription namespace string object name and auth scope, such as for teams and projects Table 2.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a BMCEventSubscription Table 2.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 2.15. Body parameters Parameter Type Description body DeleteOptions schema Table 2.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified BMCEventSubscription Table 2.17. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 2.18. 
HTTP responses HTTP code Reponse body 200 - OK BMCEventSubscription schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified BMCEventSubscription Table 2.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.20. Body parameters Parameter Type Description body Patch schema Table 2.21. HTTP responses HTTP code Reponse body 200 - OK BMCEventSubscription schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified BMCEventSubscription Table 2.22. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.23. Body parameters Parameter Type Description body BMCEventSubscription schema Table 2.24. HTTP responses HTTP code Reponse body 200 - OK BMCEventSubscription schema 201 - Created BMCEventSubscription schema 401 - Unauthorized Empty 2.2.4. /apis/metal3.io/v1alpha1/namespaces/{namespace}/bmceventsubscriptions/{name}/status Table 2.25. Global path parameters Parameter Type Description name string name of the BMCEventSubscription namespace string object name and auth scope, such as for teams and projects Table 2.26. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified BMCEventSubscription Table 2.27. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 2.28. HTTP responses HTTP code Reponse body 200 - OK BMCEventSubscription schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified BMCEventSubscription Table 2.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.30. Body parameters Parameter Type Description body Patch schema Table 2.31. 
HTTP responses HTTP code Reponse body 200 - OK BMCEventSubscription schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified BMCEventSubscription Table 2.32. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.33. Body parameters Parameter Type Description body BMCEventSubscription schema Table 2.34. HTTP responses HTTP code Reponse body 200 - OK BMCEventSubscription schema 201 - Created BMCEventSubscription schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/provisioning_apis/bmceventsubscription-metal3-io-v1alpha1 |
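For example, the path and query parameters documented above can be exercised directly with curl. The following is a minimal sketch only: the namespace (openshift-machine-api), the subscription name (worker-0-events), and the patch payload are assumed example values, and the token is taken from the current oc session. The -k option skips TLS verification; use --cacert with the cluster CA in production.

# Obtain a token and the API server URL from the current session
TOKEN=$(oc whoami -t)
APISERVER=$(oc whoami --show-server)

# GET: read the specified BMCEventSubscription
curl -k -H "Authorization: Bearer $TOKEN" \
  "$APISERVER/apis/metal3.io/v1alpha1/namespaces/openshift-machine-api/bmceventsubscriptions/worker-0-events"

# PATCH: partially update it with a server-side dry run, strict field
# validation, and an explicit field manager (query parameters from the tables above)
curl -k -X PATCH \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/merge-patch+json" \
  -d '{"spec":{"context":"lab-1"}}' \
  "$APISERVER/apis/metal3.io/v1alpha1/namespaces/openshift-machine-api/bmceventsubscriptions/worker-0-events?dryRun=All&fieldValidation=Strict&fieldManager=example-client"

Because dryRun=All is set, the server validates and admits the request but does not persist the change.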
Chapter 13. The Apache HTTP Server | Chapter 13. The Apache HTTP Server The Apache HTTP Server provides an open-source HTTP server with the current HTTP standards. [14] In Red Hat Enterprise Linux, the httpd package provides the Apache HTTP Server. Enter the following command to see if the httpd package is installed: If it is not installed and you want to use the Apache HTTP Server, use the yum utility as the root user to install it: 13.1. The Apache HTTP Server and SELinux When SELinux is enabled, the Apache HTTP Server ( httpd ) runs confined by default. Confined processes run in their own domains, and are separated from other confined processes. If a confined process is compromised by an attacker, depending on SELinux policy configuration, an attacker's access to resources and the possible damage they can do is limited. The following example demonstrates the httpd processes running in their own domain. This example assumes the httpd , setroubleshoot , setroubleshoot-server and policycoreutils-python packages are installed: Run the getenforce command to confirm SELinux is running in enforcing mode: The command returns Enforcing when SELinux is running in enforcing mode. Enter the following command as root to start httpd : Confirm that the service is running. The output should include the information below (only the time stamp will differ): To view the httpd processes, execute the following command: The SELinux context associated with the httpd processes is system_u:system_r:httpd_t:s0 . The second last part of the context, httpd_t , is the type. A type defines a domain for processes and a type for files. In this case, the httpd processes are running in the httpd_t domain. SELinux policy defines how processes running in confined domains (such as httpd_t ) interact with files, other processes, and the system in general. Files must be labeled correctly to allow httpd access to them. For example, httpd can read files labeled with the httpd_sys_content_t type, but cannot write to them, even if Linux (DAC) permissions allow write access. Booleans must be enabled to allow certain behavior, such as allowing scripts network access, allowing httpd access to NFS and CIFS volumes, and httpd being allowed to execute Common Gateway Interface (CGI) scripts. When the /etc/httpd/conf/httpd.conf file is configured so httpd listens on a port other than TCP ports 80, 443, 488, 8008, 8009, or 8443, the semanage port command must be used to add the new port number to SELinux policy configuration. The following example demonstrates configuring httpd to listen on a port that is not already defined in SELinux policy configuration for httpd , and, as a consequence, httpd failing to start. This example also demonstrates how to then configure the SELinux system to allow httpd to successfully listen on a non-standard port that is not already defined in the policy. This example assumes the httpd package is installed. Run each command in the example as the root user: Enter the following command to confirm httpd is not running: If the output differs, stop the process: Use the semanage utility to view the ports SELinux allows httpd to listen on: Edit the /etc/httpd/conf/httpd.conf file as root. Configure the Listen option so it lists a port that is not configured in SELinux policy configuration for httpd . 
In this example, httpd is configured to listen on port 12345: Enter the following command to start httpd : An SELinux denial message similar to the following is logged: For SELinux to allow httpd to listen on port 12345, as used in this example, the following command is required: Start httpd again and have it listen on the new port: Now that SELinux has been configured to allow httpd to listen on a non-standard port (TCP 12345 in this example), httpd starts successfully on this port. To prove that httpd is listening and communicating on TCP port 12345, open a telnet connection to the specified port and issue an HTTP GET command, as follows: [14] For more information, see the section named The Apache HTTP Server in the System Administrator's Guide . | [
"~]USD rpm -q httpd package httpd is not installed",
"~]# yum install httpd",
"~]USD getenforce Enforcing",
"~]# systemctl start httpd.service",
"~]# systemctl status httpd.service httpd.service - The Apache HTTP Server Loaded: loaded (/usr/lib/systemd/system/httpd.service; disabled) Active: active (running) since Mon 2013-08-05 14:00:55 CEST; 8s ago",
"~]USD ps -eZ | grep httpd system_u:system_r:httpd_t:s0 19780 ? 00:00:00 httpd system_u:system_r:httpd_t:s0 19781 ? 00:00:00 httpd system_u:system_r:httpd_t:s0 19782 ? 00:00:00 httpd system_u:system_r:httpd_t:s0 19783 ? 00:00:00 httpd system_u:system_r:httpd_t:s0 19784 ? 00:00:00 httpd system_u:system_r:httpd_t:s0 19785 ? 00:00:00 httpd",
"~]# systemctl status httpd.service httpd.service - The Apache HTTP Server Loaded: loaded (/usr/lib/systemd/system/httpd.service; disabled) Active: inactive (dead)",
"~]# systemctl stop httpd.service",
"~]# semanage port -l | grep -w http_port_t http_port_t tcp 80, 443, 488, 8008, 8009, 8443",
"Change this to Listen on specific IP addresses as shown below to prevent Apache from glomming onto all bound IP addresses (0.0.0.0) # #Listen 12.34.56.78:80 Listen 127.0.0.1:12345",
"~]# systemctl start httpd.service Job for httpd.service failed. See 'systemctl status httpd.service' and 'journalctl -xn' for details.",
"setroubleshoot: SELinux is preventing the httpd (httpd_t) from binding to port 12345. For complete SELinux messages. run sealert -l f18bca99-db64-4c16-9719-1db89f0d8c77",
"~]# semanage port -a -t http_port_t -p tcp 12345",
"~]# systemctl start httpd.service",
"~]# telnet localhost 12345 Trying 127.0.0.1 Connected to localhost. Escape character is '^]'. GET / HTTP/1.0 HTTP/1.1 200 OK Date: Wed, 02 Dec 2009 14:36:34 GMT Server: Apache/2.2.13 (Red Hat) Accept-Ranges: bytes Content-Length: 3985 Content-Type: text/html; charset=UTF-8 [...continues...]"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/selinux_users_and_administrators_guide/chap-managing_confined_services-the_apache_http_server |
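The commands in this example can be combined into a short root shell sketch that opens a non-standard port for httpd and verifies the result. This is illustrative only: port 12345 is the example value used above, and the sed edit assumes the stock /etc/httpd/conf/httpd.conf layout.

# List the ports SELinux currently allows for httpd
semanage port -l | grep -w http_port_t

# Allow httpd to bind to TCP port 12345; use "semanage port -m" instead of "-a"
# if the port is already assigned to another SELinux port type
semanage port -a -t http_port_t -p tcp 12345

# Point the Listen directive at the new port and restart the service
sed -i 's/^Listen .*/Listen 127.0.0.1:12345/' /etc/httpd/conf/httpd.conf
systemctl restart httpd.service

# Confirm that no new AVC denials were logged and that the port answers
ausearch -m avc -ts recent
curl --silent --head http://127.0.0.1:12345/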
10.3. Removing a Guest Virtual Machine Entry | 10.3. Removing a Guest Virtual Machine Entry If the guest virtual machine is running, unregister the system by running the following command in a terminal window as root on the guest: If the system has been deleted, however, the virtual service cannot tell whether the service is deleted or paused. In that case, you must manually remove the system from the server side, using the following steps: Log in to the Subscription Manager The Subscription Manager is located on the Red Hat Customer Portal . Log in to the Customer Portal using your user name and password by clicking the login icon at the top of the screen. Click the Subscriptions tab Click the Subscriptions tab. Click the Systems link Scroll down the page and click the Systems link. Delete the system To delete the system profile, locate the specified system's profile in the table, select the check box beside its name and click Delete . | [
"subscription-manager unregister"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/virt-who-remove-guest |
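The guest-side part of this procedure can be scripted, for example as in the following sketch. It assumes the guest is still reachable and that subscription-manager is installed; the clean step goes beyond the procedure above and simply removes locally cached subscription data.

# Show the current registration identity (fails if the system is not registered)
subscription-manager identity

# Unregister the guest and remove locally cached subscription data
subscription-manager unregister
subscription-manager clean

# Confirm that no entitlements remain attached
subscription-manager status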
function::proc_mem_data_pid | function::proc_mem_data_pid Name function::proc_mem_data_pid - Program data size (data + stack) in pages Synopsis Arguments pid The pid of process to examine Description Returns the given process data size (data + stack) in pages, or zero when the process doesn't exist or the number of pages couldn't be retrieved. | [
"proc_mem_data_pid:long(pid:long)"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-proc-mem-data-pid |
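For example, the function can be exercised from the command line with a one-line SystemTap script. The five-second interval and the PID 1234 below are arbitrary example values.

# Print the data + stack size, in pages, of the target process every 5 seconds
stap -x 1234 -e 'probe timer.s(5) {
  printf("pid %d data+stack pages: %d\n", target(), proc_mem_data_pid(target()))
}'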
Chapter 2. Installing the Integration Test Suite (tempest) | Chapter 2. Installing the Integration Test Suite (tempest) To manually install the Integration Test Suite, see Installing the Integration Test Suite manually . 2.1. Prerequisites An undercloud installation. For more information, see Installing the undercloud . An overcloud deployment. For more information, see Creating a basic overcloud with CLI tools . 2.2. Installing the Integration Test Suite manually If you do not want to install the Integration Test Suite (tempest) automatically with director, you can perform the installation manually later. You must ensure that you have a basic network configuration, install the Integration Test Suite packages, and create a configuration file that contains details about your OpenStack services and other testing behaviour switches. Procedure Ensure that the following networks are available within your Red Hat OpenStack Platform (RHOSP) environment: An external network that can provide a floating IP. A private network. Connect these networks through a router. To create the private network, specify the following options according to your network deployment: To create the public network, specify the following options according to your network deployment: Install the packages related to the Integration Test Suite: This command does not install any tempest plugins. You must install the plugins manually, depending on your RHOSP installation. Install the appropriate tempest plugin for each component in your environment. For example, enter the following command to install the keystone, neutron, cinder, and telemetry plugins: For a full list of packages, see Integration Test Suite packages . Note You can also install the openstack-tempest-all package. This package contains all of the tempest plugins. 2.2.1. Integration Test Suite packages Use dnf search to retrieve a list of tempest test packages: Component Package Name barbican python3-barbican-tests-tempest cinder python3-cinder-tests-tempest designate python3-designate-tests-tempest ec2-api python3-ec2api-tests-tempest heat python3-heat-tests-tempest ironic python3-ironic-tests-tempest keystone python3-keystone-tests-tempest kuryr python3-kuryr-tests-tempest manila python3-manila-tests-tempest mistral python3-mistral-tests-tempest networking-bgpvpn python3-networking-bgpvpn-tests-tempest networking-l2gw python3-networking-l2gw-tests-tempest neutron python3-neutron-tests-tempest nova-join python3-novajoin-tests-tempest octavia python3-octavia-tests-tempest patrole python3-patrole-tests-tempest telemetry python3-telemetry-tests-tempest tripleo-common python3-tripleo-common-tests-tempest zaqar python3-zaqar-tests-tempest Note The python3-telemetry-tests-tempest package contains plugins for aodh, panko, gnocchi, and ceilometer tests. The python3-ironic-tests-tempest package contains plugins for ironic and ironic-inspector. | [
"openstack network create <network_name> --share openstack subnet create <subnet_name> --subnet-range <address/prefix> --network <network_name> openstack router create <router_name> openstack router add subnet <router_name> <subnet_name>",
"openstack network create <network_name> --external --provider-network-type flat --provider-physical-network datacentre openstack subnet create <subnet_name> --subnet-range <address/prefix> --gateway <default_gateway> --no-dhcp --network <network_name> openstack router set <router_name> --external-gateway <public_network_name>",
"sudo dnf -y install openstack-tempest",
"sudo dnf install python3-keystone-tests-tempest python3-neutron-tests-tempest python3-cinder-tests-tempest python3-telemetry-tests-tempest",
"sudo dnf search USD(openstack service list -c Name -f value) 2>/dev/null | grep test | awk '{print USD1}'"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/validating_your_cloud_with_the_red_hat_openstack_platform_integration_test_suite/assembly_installing-the-integration-test-suite-tempest_tempest |
function::usymname | function::usymname Name function::usymname - Return the symbol of an address in the current task. EXPERIMENTAL! Synopsis Arguments addr The address to translate. Description Returns the (function) symbol name associated with the given address if known. If not known it will return the hex string representation of addr. | [
"function usymname:string(addr:long)"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-usymname |
Chapter 2. Preparing to install a cluster that uses SR-IOV or OVS-DPDK on OpenStack | Chapter 2. Preparing to install a cluster that uses SR-IOV or OVS-DPDK on OpenStack Before you install a OpenShift Container Platform cluster that uses single-root I/O virtualization (SR-IOV) or Open vSwitch with the Data Plane Development Kit (OVS-DPDK) on Red Hat OpenStack Platform (RHOSP), you must understand the requirements for each technology and then perform preparatory tasks. 2.1. Requirements for clusters on RHOSP that use either SR-IOV or OVS-DPDK If you use SR-IOV or OVS-DPDK with your deployment, you must meet the following requirements: RHOSP compute nodes must use a flavor that supports huge pages. 2.1.1. Requirements for clusters on RHOSP that use SR-IOV To use single-root I/O virtualization (SR-IOV) with your deployment, you must meet the following requirements: Plan your Red Hat OpenStack Platform (RHOSP) SR-IOV deployment . OpenShift Container Platform must support the NICs that you use. For a list of supported NICs, see "About Single Root I/O Virtualization (SR-IOV) hardware networks" in the "Hardware networks" subsection of the "Networking" documentation. For each node that will have an attached SR-IOV NIC, your RHOSP cluster must have: One instance from the RHOSP quota One port attached to the machines subnet One port for each SR-IOV Virtual Function A flavor with at least 16 GB memory, 4 vCPUs, and 25 GB storage space SR-IOV deployments often employ performance optimizations, such as dedicated or isolated CPUs. For maximum performance, configure your underlying RHOSP deployment to use these optimizations, and then run OpenShift Container Platform compute machines on the optimized infrastructure. For more information about configuring performant RHOSP compute nodes, see Configuring Compute nodes for performance . 2.1.2. Requirements for clusters on RHOSP that use OVS-DPDK To use Open vSwitch with the Data Plane Development Kit (OVS-DPDK) with your deployment, you must meet the following requirements: Plan your Red Hat OpenStack Platform (RHOSP) OVS-DPDK deployment by referring to Planning your OVS-DPDK deployment in the Network Functions Virtualization Planning and Configuration Guide. Configure your RHOSP OVS-DPDK deployment according to Configuring an OVS-DPDK deployment in the Network Functions Virtualization Planning and Configuration Guide. 2.2. Preparing to install a cluster that uses SR-IOV You must configure RHOSP before you install a cluster that uses SR-IOV on it. When installing a cluster using SR-IOV, you must deploy clusters using cgroup v1. For more information, Enabling Linux control group version 1 (cgroup v1) . Important cgroup v1 is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. 2.2.1. Creating SR-IOV networks for compute machines If your Red Hat OpenStack Platform (RHOSP) deployment supports single root I/O virtualization (SR-IOV) , you can provision SR-IOV networks that compute machines run on. Note The following instructions entail creating an external flat network and an external, VLAN-based network that can be attached to a compute machine. 
Depending on your RHOSP deployment, other network types might be required. Prerequisites Your cluster supports SR-IOV. Note If you are unsure about what your cluster supports, review the OpenShift Container Platform SR-IOV hardware networks documentation. You created radio and uplink provider networks as part of your RHOSP deployment. The names radio and uplink are used in all example commands to represent these networks. Procedure On a command line, create a radio RHOSP network: $ openstack network create radio --provider-physical-network radio --provider-network-type flat --external Create an uplink RHOSP network: $ openstack network create uplink --provider-physical-network uplink --provider-network-type vlan --external Create a subnet for the radio network: $ openstack subnet create --network radio --subnet-range <radio_network_subnet_range> radio Create a subnet for the uplink network: $ openstack subnet create --network uplink --subnet-range <uplink_network_subnet_range> uplink 2.3. Preparing to install a cluster that uses OVS-DPDK You must configure RHOSP before you install a cluster that uses OVS-DPDK on it. Complete Creating a flavor and deploying an instance for OVS-DPDK before you install a cluster on RHOSP. After you perform preinstallation tasks, install your cluster by following the most relevant OpenShift Container Platform on RHOSP installation instructions. Then, perform the tasks under "Next steps" on this page. 2.4. Next steps For either type of deployment: Configure the Node Tuning Operator with huge pages support . To complete SR-IOV configuration after you deploy your cluster: Install the SR-IOV Operator . Configure your SR-IOV network device . Create SR-IOV compute machines . Consult the following references after you deploy your cluster to improve its performance: A test pod template for clusters that use OVS-DPDK on OpenStack . A test pod template for clusters that use SR-IOV on OpenStack . A performance profile template for clusters that use OVS-DPDK on OpenStack | [
"openstack network create radio --provider-physical-network radio --provider-network-type flat --external",
"openstack network create uplink --provider-physical-network uplink --provider-network-type vlan --external",
"openstack subnet create --network radio --subnet-range <radio_network_subnet_range> radio",
"openstack subnet create --network uplink --subnet-range <uplink_network_subnet_range> uplink"
] | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_on_openstack/installing-openstack-nfv-preparing |
Chapter 7. Deprecated functionality | Chapter 7. Deprecated functionality Deprecated devices are fully supported, which means that they are tested and maintained, and their support status remains unchanged within Red Hat Enterprise Linux 9. However, these devices will likely not be supported in the major version release, and are not recommended for new deployments on the current or future major versions of RHEL. For the most recent list of deprecated functionality within a particular major release, see the latest version of release documentation. For information about the length of support, see Red Hat Enterprise Linux Life Cycle and Red Hat Enterprise Linux Application Streams Life Cycle . A package can be deprecated and not recommended for further use. Under certain circumstances, a package can be removed from the product. Product documentation then identifies more recent packages that offer functionality similar, identical, or more advanced to the one deprecated, and provides further recommendations. For information regarding functionality that is present in RHEL 8 but has been removed in RHEL 9, see Considerations in adopting RHEL 9 . 7.1. Installer and image creation Deprecated Kickstart commands The following Kickstart commands have been deprecated: timezone --ntpservers timezone --nontp logging --level %packages --excludeWeakdeps %packages --instLangs %anaconda pwpolicy Note that where only specific options are listed, the base command and its other options are still available and not deprecated. Using the deprecated commands in Kickstart files prints a warning in the logs. You can turn the deprecated command warnings into errors with the inst.ksstrict boot option. (BZ#1899167) 7.2. Shells and command-line tools Setting the TMPDIR variable in the ReaR configuration file is deprecated Setting the TMPDIR environment variable in the /etc/rear/local.conf or /etc/rear/site.conf ReaR configuration file), by using a statement such as export TMPDIR=... , does not work and is deprecated. To specify a custom directory for ReaR temporary files, export the variable in the shell environment before executing ReaR. For example, execute the export TMPDIR=... statement and then execute the rear command in the same shell session or script. Jira:RHELDOCS-18049 7.3. Security SHA-1 is deprecated for cryptographic purposes The usage of the SHA-1 message digest for cryptographic purposes has been deprecated in RHEL 9. The digest produced by SHA-1 is not considered secure because of many documented successful attacks based on finding hash collisions. The RHEL core crypto components no longer create signatures using SHA-1 by default. Applications in RHEL 9 have been updated to avoid using SHA-1 in security-relevant use cases. Among the exceptions, the HMAC-SHA1 message authentication code and the Universal Unique Identifier (UUID) values can still be created using SHA-1 because these use cases do not currently pose security risks. SHA-1 also can be used in limited cases connected with important interoperability and compatibility concerns, such as Kerberos and WPA-2. See the List of RHEL applications using cryptography that is not compliant with FIPS 140-3 section in the RHEL 9 Security hardening document for more details. If your scenario requires the use of SHA-1 for verifying existing or third-party cryptographic signatures, you can enable it by entering the following command: Alternatively, you can switch the system-wide crypto policies to the LEGACY policy. 
Note that LEGACY also enables many other algorithms that are not secure. (JIRA:RHELPLAN-110763) SCP is deprecated in RHEL 9 The secure copy protocol (SCP) is deprecated because it has known security vulnerabilities. The SCP API remains available for the RHEL 9 lifecycle but using it reduces system security. In the scp utility, SCP is replaced by the SSH File Transfer Protocol (SFTP) by default. The OpenSSH suite does not use SCP in RHEL 9. SCP is deprecated in the libssh library. (JIRA:RHELPLAN-99136) Digest-MD5 in SASL is deprecated The Digest-MD5 authentication mechanism in the Simple Authentication Security Layer (SASL) framework is deprecated, and it might be removed from the cyrus-sasl packages in a future major release. (BZ#1995600) OpenSSL deprecates MD2, MD4, MDC2, Whirlpool, RIPEMD160, Blowfish, CAST, DES, IDEA, RC2, RC4, RC5, SEED, and PBKDF1 The OpenSSL project has deprecated a set of cryptographic algorithms because they are insecure, uncommonly used, or both. Red Hat also discourages the use of those algorithms, and RHEL 9 provides them for migrating encrypted data to use new algorithms. Users must not depend on those algorithms for the security of their systems. The implementations of the following algorithms have been moved to the legacy provider in OpenSSL: MD2, MD4, MDC2, Whirlpool, RIPEMD160, Blowfish, CAST, DES, IDEA, RC2, RC4, RC5, SEED, and PBKDF1. See the /etc/pki/tls/openssl.cnf configuration file for instructions on how to load the legacy provider and enable support for the deprecated algorithms. ( BZ#1975836 ) /etc/system-fips is now deprecated Support for indicating FIPS mode through the /etc/system-fips file has been removed, and the file will not be included in future versions of RHEL. To install RHEL in FIPS mode, add the fips=1 parameter to the kernel command line during the system installation. You can check whether RHEL operates in FIPS mode by using the fips-mode-setup --check command. (JIRA:RHELPLAN-103232) libcrypt.so.1 is now deprecated The libcrypt.so.1 library is now deprecated, and it might be removed in a future version of RHEL. ( BZ#2034569 ) fapolicyd.rules is deprecated The /etc/fapolicyd/rules.d/ directory for files containing allow and deny execution rules replaces the /etc/fapolicyd/fapolicyd.rules file. The fagenrules script now merges all component rule files in this directory to the /etc/fapolicyd/compiled.rules file. Rules in /etc/fapolicyd/fapolicyd.trust are still processed by the fapolicyd framework but only for ensuring backward compatibility. ( BZ#2054740 ) 7.4. Networking ipset and iptables-nft have been deprecated The ipset and iptables-nft packages have been deprecated in RHEL. The iptables-nft package contains different tools such as iptables , ip6tables , ebtables and arptables . These tools will no longer receive new features and using them for new deployments is not recommended. As a replacement, prefer using the nft command-line tool provided by the nftables package. Existing setups should migrate to nft if possible. When you load the iptables , ip6tables , ebtables , arptables , nft_compat , or ipset module, the module logs the following warning to the /var/log/messages file: For more information on migrating to nftables, see Migrating from iptables to nftables , as well as the iptables-translate(8) and ip6tables-translate(8) man pages. ( BZ#1945151 ) Network teams are deprecated in RHEL 9 The teamd service and the libteam library are deprecated in Red Hat Enterprise Linux 9 and will be removed in the major release. 
As a replacement, configure a bond instead of a network team. Red Hat focuses its efforts on kernel-based bonding to avoid maintaining two features, bonds and teams, that have similar functions. The bonding code has a high customer adoption, is robust, and has an active community development. As a result, the bonding code receives enhancements and updates. For details about how to migrate a team to a bond, see Migrating a network team configuration to network bond . (BZ#1935544) NetworkManager connection profiles in ifcfg format are deprecated In RHEL 9.0 and later, connection profiles in ifcfg format are deprecated. The major RHEL release will remove the support for this format. However, in RHEL 9, NetworkManager still processes and updates existing profiles in this format if you modify them. By default, NetworkManager now stores connection profiles in keyfile format in the /etc/NetworkManager/system-connections/ directory. Unlike the ifcfg format, the keyfile format supports all connection settings that NetworkManager provides. For further details about the keyfile format and how to migrate profiles, see NetworkManager connection profiles in keyfile format . (BZ#1894877) The iptables back end in firewalld is deprecated In RHEL 9, the iptables framework is deprecated. As a consequence, the iptables backend and the direct interface in firewalld are also deprecated. Instead of the direct interface you can use the native features in firewalld to configure the required rules. ( BZ#2089200 ) 7.5. Kernel ATM encapsulation is deprecated in RHEL 9 Asynchronous Transfer Mode (ATM) encapsulation enables Layer-2 (Point-to-Point Protocol, Ethernet) or Layer-3 (IP) connectivity for the ATM Adaptation Layer 5 (AAL-5). Red Hat has not been providing support for ATM NIC drivers since RHEL 7. The support for ATM implementation is being dropped in RHEL 9. These protocols are currently used only in chipsets, which support the ADSL technology and are being phased out by manufacturers. Therefore, ATM encapsulation is deprecated in Red Hat Enterprise Linux 9. For more information, see PPP Over AAL5 , Multiprotocol Encapsulation over ATM Adaptation Layer 5 , and Classical IP and ARP over ATM . ( BZ#2058153 ) v4l/dvb television and video capture devices are no longer supported With RHEL 9, Red Hat no longer supports Video4Linux ( v4l ) and Linux DVB ( DVB ) devices that consist of various television tuner cards and miscellaneous video capture cards and Red Hat no longer provides their associated drivers. ( BZ#2074598 ) 7.6. File systems and storage lvm2-activation-generator and its generated services removed in RHEL 9.0 The lvm2-activation-generator program and its generated services lvm2-activation , lvm2-activation-early , and lvm2-activation-net are removed in RHEL 9.0. The lvm.conf event_activation setting, used to activate the services, is no longer functional. The only method for auto activating volume groups is event based activation. ( BZ#2038183 ) 7.7. Dynamic programming languages, web and database servers libdb has been deprecated RHEL 8 and RHEL 9 currently provide Berkeley DB ( libdb ) version 5.3.28, which is distributed under the LGPLv2 license. The upstream Berkeley DB version 6 is available under the AGPLv3 license, which is more restrictive. The libdb package is deprecated as of RHEL 9 and might not be available in future major RHEL releases. In addition, cryptographic algorithms have been removed from libdb in RHEL 9 and multiple libdb dependencies have been removed from RHEL 9. 
Users of libdb are advised to migrate to a different key-value database. For more information, see the Knowledgebase article Available replacements for the deprecated Berkeley DB (libdb) in RHEL . (BZ#1927780, BZ#1974657 , JIRA:RHELPLAN-80695) 7.8. Identity Management SHA-1 in OpenDNSSec is now deprecated OpenDNSSec supports exporting Digital Signatures and authentication records using the SHA-1 algorithm. The use of the SHA-1 algorithm is no longer supported. With the RHEL 9 release, SHA-1 in OpenDNSSec is deprecated and it might be removed in a future minor release. Additionally, OpenDNSSec support is limited to its integration with Red Hat Identity Management. OpenDNSSec is not supported standalone. ( BZ#1979521 ) The SSSD implicit files provider domain is disabled by default The SSSD implicit files provider domain, which retrieves user information from local files such as /etc/shadow and group information from /etc/groups , is now disabled by default. To retrieve user and group information from local files with SSSD: Configure SSSD. Choose one of the following options: Explicitly configure a local domain with the id_provider=files option in the sssd.conf configuration file. Enable the files provider by setting enable_files_domain=true in the sssd.conf configuration file. Configure the name services switch. (JIRA:RHELPLAN-100639) The SMB1 protocol is deprecated in Samba Starting with Samba 4.11, the insecure Server Message Block version 1 (SMB1) protocol is deprecated and will be removed in a future release. To improve the security, by default, SMB1 is disabled in the Samba server and client utilities. Jira:RHELDOCS-16612 7.9. Graphics infrastructures X.org Server is now deprecated The X.org display server is deprecated, and will be removed in a future major RHEL release. The default desktop session is now the Wayland session in most cases. The X11 protocol remains fully supported using the XWayland back end. As a result, applications that require X11 can run in the Wayland session. Red Hat is working on resolving the remaining problems and gaps in the Wayland session. For the outstanding problems in Wayland , see the Known issues section. You can switch your user session back to the X.org back end. For more information, see Selecting GNOME environment and display protocol . (JIRA:RHELPLAN-121048) Motif has been deprecated The Motif widget toolkit has been deprecated in RHEL, because development in the upstream Motif community is inactive. The following Motif packages have been deprecated, including their development and debugging variants: motif openmotif openmotif21 openmotif22 Additionally, the motif-static package has been removed. Red Hat recommends using the GTK toolkit as a replacement. GTK is more maintainable and provides new features compared to Motif. (JIRA:RHELPLAN-98983) 7.10. Red Hat Enterprise Linux system roles The networking system role displays a deprecation warning when configuring teams on RHEL 9 nodes The network teaming capabilities have been deprecated in RHEL 9. As a result, using the networking RHEL system role on an RHEL 8 controller to configure a network team on RHEL 9 nodes, shows a warning about its deprecation. ( BZ#1999770 ) 7.11. Virtualization SecureBoot image verification using SHA1-based signatures is deprecated Performing SecureBoot image verification using SHA1-based signatures on UEFI (PE/COFF) executables has become deprecated. Instead, Red Hat recommends using signatures based on the SHA2 algorithm, or later. 
(BZ#1935497) Limited support for virtual machine snapshots Creating snapshots of virtual machines (VMs) is currently only supported for VMs not using the UEFI firmware. In addition, during the snapshot operation, the QEMU monitor may become blocked, which negatively impacts the hypervisor performance for certain workloads. Also note that the current mechanism of creating VM snapshots has been deprecated, and Red Hat does not recommend using VM snapshots in a production environment. However, a new VM snapshot mechanism is under development and is planned to be fully implemented in a future minor release of RHEL 9. (JIRA:RHELPLAN-15509, BZ#1621944) virt-manager has been deprecated The Virtual Machine Manager application, also known as virt-manager , has been deprecated. The RHEL web console, also known as Cockpit , is intended to become its replacement in a subsequent release. It is, therefore, recommended that you use the web console for managing virtualization in a GUI. Note, however, that some features available in virt-manager may not be yet available in the RHEL web console. (JIRA:RHELPLAN-10304) libvirtd has become deprecated The monolithic libvirt daemon, libvirtd , has been deprecated in RHEL 9, and will be removed in a future major release of RHEL. Note that you can still use libvirtd for managing virtualization on your hypervisor, but Red Hat recommends switching to the newly introduced modular libvirt daemons. For instructions and details, see the RHEL 9 Configuring and Managing Virtualization document. (JIRA:RHELPLAN-113995) The virtual floppy driver has become deprecated The isa-fdc driver, which controls virtual floppy disk devices, is now deprecated, and will become unsupported in a future release of RHEL. Therefore, to ensure forward compatibility with migrated virtual machines (VMs), Red Hat discourages using floppy disk devices in VMs hosted on RHEL 9. ( BZ#1965079 ) qcow2-v2 image format is deprecated With RHEL 9, the qcow2-v2 format for virtual disk images has become deprecated, and will become unsupported in a future major release of RHEL. In addition, the RHEL 9 Image Builder cannot create disk images in the qcow2-v2 format. Instead of qcow2-v2, Red Hat strongly recommends using qcow2-v3. To convert a qcow2-v2 image to a later format version, use the qemu-img amend command. ( BZ#1951814 ) 7.12. Containers Running RHEL 9 containers on a RHEL 7 host is not supported Running RHEL 9 containers on a RHEL 7 host is not supported. It might work, but it is not guaranteed. For more information, see Red Hat Enterprise Linux Container Compatibility Matrix . (JIRA:RHELPLAN-100087) SHA1 hash algorithm within Podman has been deprecated The SHA1 algorithm used to generate the filename of the rootless network namespace is no longer supported in Podman. Therefore, rootless containers started before updating to Podman 4.1.1 from the RHBA-2022:5951 advisory have to be restarted if they are joined to a network (and not just using slirp4netns ) to ensure they can connect to containers started after the upgrade. (BZ#2069279) rhel9/pause has been deprecated The rhel9/pause container image has been deprecated. ( BZ#2106816 ) 7.13. Deprecated packages This section lists packages that have been deprecated and will probably not be included in a future major release of Red Hat Enterprise Linux. For changes to packages between RHEL 8 and RHEL 9, see Changes to packages in the Considerations in adopting RHEL 9 document. 
Important The support status of deprecated packages remains unchanged within RHEL 9. For more information about the length of support, see Red Hat Enterprise Linux Life Cycle and Red Hat Enterprise Linux Application Streams Life Cycle . The following packages have been deprecated in RHEL 9: iptables-devel iptables-libs iptables-nft iptables-nft-services iptables-utils libdb mcpp python3-pytz | [
"update-crypto-policies --set DEFAULT:SHA1",
"Warning: <module_name> - this driver is not recommended for new deployments. It continues to be supported in this RHEL release, but it is likely to be removed in the next major release. Driver updates and fixes will be limited to critical issues. Please contact Red Hat Support for additional information.",
"[domain/local] id_provider=files",
"[sssd] enable_files_domain = true",
"authselect enable-feature with-files-provider"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/9.0_release_notes/deprecated_functionality |
Chapter 3. Deploying AMQ Broker on OpenShift Container Platform using the AMQ Broker Operator | Chapter 3. Deploying AMQ Broker on OpenShift Container Platform using the AMQ Broker Operator 3.1. Prerequisites Before you install the Operator and use it to create a broker deployment, you should consult the Operator deployment notes in Section 2.7, "Operator deployment notes" . 3.2. Installing the Operator using the CLI Note Each Operator release requires that you download the latest AMQ Broker 7.11.7 Operator Installation and Example Files as described below. The procedures in this section show how to use the OpenShift command-line interface (CLI) to install and deploy the latest version of the Operator for AMQ Broker 7.11 in a given OpenShift project. In subsequent procedures, you use this Operator to deploy some broker instances. For an alternative method of installing the AMQ Broker Operator that uses the OperatorHub graphical interface, see Section 3.3, "Installing the Operator using OperatorHub" . To learn about upgrading existing Operator-based broker deployments, see Chapter 6, Upgrading an Operator-based broker deployment . 3.2.1. Preparing to deploy the Operator Before you deploy the Operator using the CLI, you must download the Operator installation files and prepare the deployment. Procedure In your web browser, navigate to the Software Downloads page for AMQ Broker 7.11.7 releases . Ensure that the value of the Version drop-down list is set to 7.11.7 and the Patches tab is selected. to the latest AMQ Broker 7.11.7 Operator Installation and Example Files , click Download . Download of the amq-broker-operator-7.11.7-ocp-install-examples.zip compressed archive automatically begins. Move the archive to your chosen directory. The following example moves the archive to a directory called ~/broker/operator . USD mkdir ~/broker/operator USD mv amq-broker-operator-7.11.7-ocp-install-examples.zip ~/broker/operator In your chosen directory, extract the contents of the archive. For example: USD cd ~/broker/operator USD unzip amq-broker-operator-7.11.7-ocp-install-examples.zip Switch to the directory that was created when you extracted the archive. For example: USD cd amq-broker-operator-7.11.7-ocp-install-examples Log in to OpenShift Container Platform as a cluster administrator. For example: USD oc login -u system:admin Specify the project in which you want to install the Operator. You can create a new project or switch to an existing one. Create a new project: USD oc new-project <project_name> Or, switch to an existing project: USD oc project <project_name> Specify a service account to use with the Operator. In the deploy directory of the Operator archive that you extracted, open the service_account.yaml file. Ensure that the kind element is set to ServiceAccount . If you want to change the default service account name, in the metadata section, replace amq-broker-controller-manager with a custom name. Create the service account in your project. USD oc create -f deploy/service_account.yaml Specify a role name for the Operator. Open the role.yaml file. This file specifies the resources that the Operator can use and modify. Ensure that the kind element is set to Role . If you want to change the default role name, in the metadata section, replace amq-broker-operator-role with a custom name. Create the role in your project. USD oc create -f deploy/role.yaml Specify a role binding for the Operator. 
The role binding binds the previously-created service account to the Operator role, based on the names you specified. Open the role_binding.yaml file. Ensure that the name values for ServiceAccount and Role match those specified in the service_account.yaml and role.yaml files. For example: metadata: name: amq-broker-operator-rolebinding subjects: kind: ServiceAccount name: amq-broker-controller-manager roleRef: kind: Role name: amq-broker-operator-role Create the role binding in your project. USD oc create -f deploy/role_binding.yaml Specify a leader election role binding for the Operator. The role binding binds the previously-created service account to the leader election role, based on the names you specified. Create a leader election role for the Operator. USD oc create -f deploy/election_role.yaml Create the leader election role binding in your project. USD oc create -f deploy/election_role_binding.yaml (Optional) If you want the Operator to watch multiple namespaces, complete the following steps: Note If the OpenShift Container Platform cluster already contains installed Operators for AMQ Broker, you must ensure the new Operator does not watch any of the same namespaces as existing Operators. For information on how to identify the namespaces that are watched by existing Operators, see, Identifying namespaces watched by existing Operators . In the deploy directory of the Operator archive that you downloaded and extracted, open the operator_yaml file. If you want the Operator to watch all namespaces in the cluster, in the WATCH_NAMESPACE section, add a value attribute and set the value to an asterisk. Comment out the existing attributes in the WATCH_NAMESPACE section. For example: - name: WATCH_NAMESPACE value: "*" # valueFrom: # fieldRef: # fieldPath: metadata.namespace Note To avoid conflicts, ensure that multiple Operators do not watch the same namespace. For example, if you deploy an Operator to watch all namespaces on the cluster, you cannot deploy another Operator to watch individual namespaces. If Operators are already deployed on the cluster, you can specify a list of namespaces that the new Operator watches, as described in the following step. If you want the Operator to watch multiple, but not all, namespaces on the cluster, in the WATCH_NAMESPACE section, specify a list of namespaces. Ensure that you exclude any namespaces that are watched by existing Operators. For example: - name: WATCH_NAMESPACE value: "namespace1, namespace2"`. In the deploy directory of the Operator archive that you downloaded and extracted, open the cluster_role_binding.yaml file. In the Subjects section, specify a namespace that corresponds to the OpenShift Container Platform project to which you are deploying the Operator. For example: Subjects: - kind: ServiceAccount name: amq-broker-controller-manager namespace: operator-project Note If you previously deployed brokers using an earlier version of the Operator, and you want to deploy the Operator to watch multiple namespaces, see Before you upgrade . Create a cluster role in your project. USD oc create -f deploy/cluster_role.yaml Create a cluster role binding in your project. USD oc create -f deploy/cluster_role_binding.yaml In the procedure that follows, you deploy the Operator in your project. 3.2.2. Deploying the Operator using the CLI The procedure in this section shows how to use the OpenShift command-line interface (CLI) to deploy the latest version of the Operator for AMQ Broker 7.11 in your OpenShift project. 
Prerequisites You must have already prepared your OpenShift project for the Operator deployment. See Section 3.2.1, "Preparing to deploy the Operator" . Starting in AMQ Broker 7.3, you use a new version of the Red Hat Ecosystem Catalog to access container images. This new version of the registry requires you to become an authenticated user before you can access images. Before you can follow the procedure in this section, you must first complete the steps described in Red Hat Container Registry Authentication . If you intend to deploy brokers with persistent storage and do not have container-native storage in your OpenShift cluster, you need to manually provision Persistent Volumes (PVs) and ensure that they are available to be claimed by the Operator. For example, if you want to create a cluster of two brokers with persistent storage (that is, by setting persistenceEnabled=true in your Custom Resource), you need to have two PVs available. By default, each broker instance requires storage of 2 GiB. If you specify persistenceEnabled=false in your Custom Resource, the deployed brokers uses ephemeral storage. Ephemeral storage means that that every time you restart the broker Pods, any existing data is lost. For more information about provisioning persistent storage, see: Understanding persistent storage Procedure In the OpenShift command-line interface (CLI), log in to OpenShift as a cluster administrator. For example: USD oc login -u system:admin Switch to the project that you previously prepared for the Operator deployment. For example: USD oc project <project_name> Switch to the directory that was created when you previously extracted the Operator installation archive. For example: USD cd ~/broker/operator/amq-broker-operator-7.11.7-ocp-install-examples Deploy the CRDs that are included with the Operator. You must install the CRDs in your OpenShift cluster before deploying and starting the Operator. Deploy the main broker CRD. USD oc create -f deploy/crds/broker_activemqartemis_crd.yaml Deploy the address CRD. USD oc create -f deploy/crds/broker_activemqartemisaddress_crd.yaml Deploy the scaledown controller CRD. USD oc create -f deploy/crds/broker_activemqartemisscaledown_crd.yaml Deploy the security CRD: USD oc create -f deploy/crds/broker_activemqartemissecurity_crd.yaml Link the pull secret associated with the account used for authentication in the Red Hat Ecosystem Catalog with the default , deployer , and builder service accounts for your OpenShift project. USD oc secrets link --for=pull default <secret_name> USD oc secrets link --for=pull deployer <secret_name> USD oc secrets link --for=pull builder <secret_name> In the deploy directory of the Operator archive that you downloaded and extracted, open the operator.yaml file. Ensure that the value of the spec.containers.image property corresponds to version 7.11.7-opr-2 of the Operator, as shown below. spec: template: spec: containers: #image: registry.redhat.io/amq7/amq-broker-rhel8-operator:7.10 image: registry.redhat.io/amq7/amq-broker-rhel8-operator@sha256:d5c10ee9a342d7fd59bdd3cfae25029548f5081c985c9bdf5656dbd5e8877e0e Note In the operator.yaml file, the Operator uses an image that is represented by a Secure Hash Algorithm (SHA) value. The comment line, which begins with a number sign ( # ) symbol, denotes that the SHA value corresponds to a specific container image tag. Deploy the Operator. USD oc create -f deploy/operator.yaml In your OpenShift project, the Operator starts in a new Pod. 
In the OpenShift Container Platform web console, the information on the Events tab of the Operator Pod confirms that OpenShift has deployed the Operator image that you specified, has assigned a new container to a node in your OpenShift cluster, and has started the new container. In addition, if you click the Logs tab within the Pod, the output should include lines resembling the following: The preceding output confirms that the newly-deployed Operator is communicating with Kubernetes, that the controllers for the broker and addressing are running, and that these controllers have started some workers. Note It is recommended that you deploy only a single instance of the AMQ Broker Operator in a given OpenShift project. Setting the spec.replicas property of your Operator deployment to a value greater than 1 , or deploying the Operator more than once in the same project is not recommended. Additional resources For an alternative method of installing the AMQ Broker Operator that uses the OperatorHub graphical interface, see Section 3.3, "Installing the Operator using OperatorHub" . 3.3. Installing the Operator using OperatorHub 3.3.1. Overview of the Operator Lifecycle Manager In OpenShift Container Platform 4.5 and later, the Operator Lifecycle Manager (OLM) helps users install, update, and generally manage the lifecycle of all Operators and their associated services running across their clusters. It is part of the Operator Framework, an open source toolkit designed to manage Kubernetes-native applications (Operators) in an effective, automated, and scalable way. The OLM runs by default in OpenShift Container Platform 4.5 and later, which aids cluster administrators in installing, upgrading, and granting access to Operators running on their cluster. The OpenShift Container Platform web console provides management screens for cluster administrators to install Operators, as well as grant specific projects access to use the catalog of Operators available on the cluster. OperatorHub is the graphical interface that OpenShift cluster administrators use to discover, install, and upgrade Operators using the OLM. With one click, these Operators can be pulled from OperatorHub, installed on the cluster, and managed by the OLM, ready for engineering teams to self-service manage the software in development, test, and production environments. When you have deployed the Operator, you can use Custom Resource (CR) instances to create broker deployments such as standalone and clustered brokers. 3.3.2. Deploying the Operator from OperatorHub This procedure shows how to use OperatorHub to deploy the latest version of the Operator for AMQ Broker to a specified OpenShift project. Note In OperatorHub, you can install only the latest Operator version that is provided in each channel. If you want to install an earlier version of an Operator, you must install the Operator by using the CLI. For more information, see Section 3.2, "Installing the Operator using the CLI" . Prerequisites The Red Hat Integration - AMQ Broker for RHEL 8 (Multiarch) Operator must be available in OperatorHub. You have cluster administrator privileges. Procedure Log in to the OpenShift Container Platform web console as a cluster administrator. In left navigation menu, click Operators OperatorHub . On the Project drop-down menu at the top of the OperatorHub page, select the project in which you want to deploy the Operator. On the OperatorHub page, use the Filter by keyword... 
box to find the Red Hat Integration - AMQ Broker for RHEL 8 (Multiarch) Operator. Note In OperatorHub, you might find more than one Operator than includes AMQ Broker in its name. Ensure that you click the Red Hat Integration - AMQ Broker for RHEL 8 (Multiarch) Operator. When you click this Operator, review the information pane that opens. For AMQ Broker 7.11, the latest minor version tag of this Operator is 7.11.7-opr-2 . Click the Red Hat Integration - AMQ Broker for RHEL 8 (Multiarch) Operator. On the dialog box that appears, click Install . On the Install Operator page: Under Update Channel , select the 7.11.x channel to receive updates for version 7.11 only. The 7.11.x channel is a Long Term Support (LTS) channel. Depending on when your OpenShift Container Platform cluster was installed, you may also see channels for older versions of AMQ Broker. The only other supported channel is 7.10.x , which is also an LTS channel. Under Installation Mode , choose which namespaces the Operator watches: A specific namespace on the cluster - The Operator is installed in that namespace and only monitors that namespace for CR changes. All namespaces - The Operator monitors all namespaces for CR changes. Note If you previously deployed brokers using an earlier version of the Operator, and you want deploy the Operator to watch many namespaces, see Before you upgrade . From the Installed Namespace drop-down menu, select the project in which you want to install the Operator. Under Approval Strategy , ensure that the radio button entitled Automatic is selected. This option specifies that updates to the Operator do not require manual approval for installation to take place. Click Install . When the Operator installation is complete, the Installed Operators page opens. You should see that the Red Hat Integration - AMQ Broker for RHEL 8 (Multiarch) Operator is installed in the project namespace that you specified. Additional resources To learn how to create a broker deployment in a project that has the Operator for AMQ Broker installed, see Section 3.4.1, "Deploying a basic broker instance" . 3.4. Creating Operator-based broker deployments 3.4.1. Deploying a basic broker instance The following procedure shows how to use a Custom Resource (CR) instance to create a basic broker deployment. Note While you can create more than one broker deployment in a given OpenShift project by deploying multiple Custom Resource (CR) instances, typically, you create a single broker deployment in a project, and then deploy multiple CR instances for addresses. Red Hat recommends you create broker deployments in separate projects. In AMQ Broker 7.11, if you want to configure the following items, you must add the appropriate configuration to the main broker CR instance before deploying the CR for the first time. The size and storage class of the Persistent Volume Claim (PVC) required by each broker in a deployment for persistent storage Limits and requests for memory and CPU for each broker in a deployment Prerequisites You must have already installed the AMQ Broker Operator. To use the OpenShift command-line interface (CLI) to install the AMQ Broker Operator, see Section 3.2, "Installing the Operator using the CLI" . To use the OperatorHub graphical interface to install the AMQ Broker Operator, see Section 3.3, "Installing the Operator using OperatorHub" . You should understand how the Operator chooses a broker container image to use for your broker deployment. 
For more information, see Section 2.6, "How the Operator chooses container images" . Starting in AMQ Broker 7.3, you use a new version of the Red Hat Ecosystem Catalog to access container images. This new version of the registry requires you to become an authenticated user before you can access images. Before you can follow the procedure in this section, you must first complete the steps described in Red Hat Container Registry Authentication . Procedure When you have successfully installed the Operator, the Operator is running and listening for changes related to your CRs. This example procedure shows how to use a CR instance to deploy a basic broker in your project. Start configuring a Custom Resource (CR) instance for the broker deployment. Using the OpenShift command-line interface: Log in to OpenShift as a user that has privileges to deploy CRs in the project in which you are creating the deployment. Open the sample CR file called broker_activemqartemis_cr.yaml that was included in the deploy/crs directory of the Operator installation archive that you downloaded and extracted. Using the OpenShift Container Platform web console: Log in to the console as a user that has privileges to deploy CRs in the project in which you are creating the deployment. Start a new CR instance based on the main broker CRD. In the left pane, click Administration Custom Resource Definitions . Click the ActiveMQArtemis CRD. Click the Instances tab. Click Create ActiveMQArtemis . Within the console, a YAML editor opens, enabling you to configure a CR instance. For a basic broker deployment, a configuration might resemble that shown below. apiVersion: broker.amq.io/v1beta1 kind: ActiveMQArtemis metadata: name: ex-aao spec: deploymentPlan: size: 1 image: placeholder requireLogin: false persistenceEnabled: true journalType: nio messageMigration: true Observe that in the broker_activemqartemis_cr.yaml sample CR file, the image property is set to a default value of placeholder . This value indicates that, by default, the image property does not specify a broker container image to use for the deployment. To learn how the Operator determines the appropriate broker container image to use, see Section 2.6, "How the Operator chooses container images" . Note The broker_activemqartemis_cr.yaml sample CR uses a naming convention of ex-aao . This naming convention denotes that the CR is an example resource for the AMQ Broker Operator . AMQ Broker is based on the ActiveMQ Artemis project. When you deploy this sample CR, the resulting StatefulSet uses the name ex-aao-ss . Furthermore, broker Pods in the deployment are directly based on the StatefulSet name, for example, ex-aao-ss-0 , ex-aao-ss-1 , and so on. The application name in the CR appears in the deployment as a label on the StatefulSet. You might use this label in a Pod selector, for example. The size property specifies the number of brokers to deploy. A value of 2 or greater specifies a clustered broker deployment. However, to deploy a single broker instance, ensure that the value is set to 1 . Deploy the CR instance. Using the OpenShift command-line interface: Save the CR file. Switch to the project in which you are creating the broker deployment. Create the CR instance. Using the OpenShift web console: When you have finished configuring the CR, click Create . In the OpenShift Container Platform web console, click Workloads StatefulSets . You see a new StatefulSet called ex-aao-ss . Click the ex-aao-ss StatefulSet. 
You see that there is one Pod, corresponding to the single broker that you defined in the CR. Within the StatefulSet, click the Pods tab. Click the ex-aao-ss Pod. On the Events tab of the running Pod, you see that the broker container has started. The Logs tab shows that the broker itself is running. To test that the broker is running normally, access a shell on the broker Pod to send some test messages. Using the OpenShift Container Platform web console: Click Workloads Pods . Click the ex-aao-ss Pod. Click the Terminal tab. Using the OpenShift command-line interface: Get the Pod names and internal IP addresses for your project. Access the shell for the broker Pod. From the shell, use the artemis command to send some test messages. Specify the internal IP address of the broker Pod in the URL. For example: The preceding command automatically creates a queue called demoQueue on the broker and sends a default quantity of 1000 messages to the queue. You should see output that resembles the following: Additional resources For a complete configuration reference for the main broker Custom Resource (CR), see Section 8.1, "Custom Resource configuration reference" . To learn how to connect a running broker to AMQ Management Console, see Chapter 5, Connecting to AMQ Management Console for an Operator-based broker deployment . 3.4.2. Deploying clustered brokers If there are two or more broker Pods running in your project, the Pods automatically form a broker cluster. A clustered configuration enables brokers to connect to each other and redistribute messages as needed, for load balancing. The following procedure shows you how to deploy clustered brokers. By default, the brokers in this deployment use on demand load balancing, meaning that brokers will forward messages only to other brokers that have matching consumers. Prerequisites A basic broker instance is already deployed. See Section 3.4.1, "Deploying a basic broker instance" . Procedure Open the CR file that you used for your basic broker deployment. For a clustered deployment, ensure that the value of deploymentPlan.size is 2 or greater. For example: apiVersion: broker.amq.io/v1beta1 kind: ActiveMQArtemis metadata: name: ex-aao spec: deploymentPlan: size: 4 image: placeholder ... Note In the metadata section, you need to include the namespace property and specify a value only if you are using the OpenShift Container Platform web console to create your CR instance. The value that you should specify is the name of the OpenShift project for your broker deployment. Save the modified CR file. Log in to OpenShift as a user that has privileges to deploy CRs in the project in which you previously created your basic broker deployment. Switch to the project in which you previously created your basic broker deployment. At the command line, apply the change: USD oc apply -f <path/to/custom_resource_instance> .yaml In the OpenShift Container Platform web console, additional broker Pods start in your project, according to the number specified in your CR. By default, the brokers running in the project are clustered. Open the Logs tab of each Pod. The logs show that OpenShift has established a cluster connection bridge on each broker. Specifically, the log output includes a line like the following: 3.4.3.
Applying Custom Resource changes to running broker deployments The following are some important things to note about applying Custom Resource (CR) changes to running broker deployments: You cannot dynamically update the persistenceEnabled attribute in your CR. To change this attribute, scale your cluster down to zero brokers. Delete the existing CR. Then, recreate and redeploy the CR with your changes, also specifying a deployment size. As described in Section 3.2.2, "Deploying the Operator using the CLI" , if you create a broker deployment with persistent storage (that is, by setting persistenceEnabled=true in your CR), you might need to provision Persistent Volumes (PVs) for the AMQ Broker Operator to claim for your broker Pods. If you scale down the size of your broker deployment, the Operator releases any PVs that it previously claimed for the broker Pods that are now shut down. However, if you remove your broker deployment by deleting your CR, AMQ Broker Operator does not release Persistent Volume Claims (PVCs) for any broker Pods that are still in the deployment when you remove it. In addition, these unreleased PVs are unavailable to any new deployment. In this case, you need to manually release the volumes. For more information, see Release a persistent volume in the OpenShift documentation. In AMQ Broker 7.11, if you want to configure the following items, you must add the appropriate configuration to the main CR instance before deploying the CR for the first time. The size and storage class of the Persistent Volume Claim (PVC) required by each broker in a deployment for persistent storage . Limits and requests for memory and CPU for each broker in a deployment . During an active scaling event, any further changes that you apply are queued by the Operator and executed only when scaling is complete. For example, suppose that you scale the size of your deployment down from four brokers to one. Then, while scaledown is taking place, you also change the values of the broker administrator user name and password. In this case, the Operator queues the user name and password changes until the deployment is running with one active broker. All CR changes - apart from changing the size of your deployment, or changing the value of the expose attribute for acceptors, connectors, or the console - cause existing brokers to be restarted. If you have multiple brokers in your deployment, only one broker restarts at a time. 3.5. Changing the logging level for the Operator The default logging level for AMQ Broker Operator is info , which logs information and error messages. You can change the default logging level to increase or decrease the detail that is written to the Operator logs. If you use the OpenShift Container Platform command-line interface to install the Operator, you can set the new logging level in the Operator configuration file, operator.yaml , either before or after you install the Operator. If you use OperatorHub, you can use the OpenShift Container Platform web console to set the logging level in the Operator subscription after you install the Operator. The other available logging levels for the Operator are: error Writes error messages only to the log. debug Writes all messages to the log, including debugging messages. Procedure Using the OpenShift Container Platform command-line interface: Log in as a cluster administrator. For example: USD oc login -u system:admin If the Operator is not installed, complete the following steps to change the logging level.
In the deploy directory of the Operator archive that you downloaded and extracted, open the operator.yaml file. Change the value of the zap-log-level attribute to debug or error . For example: apiVersion: apps/v1 kind: Deployment metadata: labels: control-plane: controller-manager name: amq-broker-controller-manager spec: containers: - args: - --zap-log-level=error ... Save the operator.yaml file. Install the Operator. If the Operator is already installed, use the sed command to change the logging level in the deploy/operator.yaml file and redeploy the Operator. For example, the following command changes the logging level from info to error and redeploys the Operator: USD sed 's/--zap-log-level=info/--zap-log-level=error/' deploy/operator.yaml | oc apply -f - Using the OpenShift Container Platform web console: Log in to the OpenShift Container Platform as a cluster administrator. In the left pane, click Operators Installed Operators . Click the Red Hat Integration - AMQ Broker for RHEL 8 (Multiarch) Operator. Click the Subscriptions tab. Click Actions . Click Edit Subscription . Click the YAML tab. Within the console, a YAML editor opens, enabling you to edit the subscription. In the config element, add an environment variable called ARGS and specify a logging level of info , debug or error . In the following example, an ARGS environment variable that specifies a logging level of debug is passed to the Operator container. apiVersion: operators.coreos.com/v1alpha1 kind: Subscription spec: ... config: env: - name: ARGS value: "--zap-log-level=debug" ... Click Save. 3.6. Viewing status information for your broker deployment You can view the status of a series of standard conditions reported by OpenShift Container Platform for your broker deployment. You can also view additional status information provided in the Custom Resource (CR) for your broker deployment. Procedure Open the CR instance for the broker deployment. Using the OpenShift command-line interface: Log in to OpenShift Container Platform as a user that has privileges to view CRs in the project for the broker deployment. View the CR for your deployment. Using the OpenShift Container Platform web console: Log in to the console as a user that has privileges to deploy CRs in the project for the broker deployment. In the left pane, click Operators Installed Operator . Click the Red Hat Integration - AMQ Broker for RHEL 8 (Multiarch) operator. Click the ActiveMQ Artemis tab. Click the name of the ActiveMQ Artemis instance. View the status of the OpenShift Container Platform conditions for your broker deployment. Using the OpenShift command-line interface: Go to the status section of the CR and view the conditions details. Using the OpenShift Container Platform web console: In the Details tab, scroll down to the Conditions section. A condition has a status and a type. It might also have a reason, a message and other details. A condition has a status value of True if the condition is met, False if the condition is not met, or Unknown if the status of the condition cannot be determined. Note The Valid condition also has a status of Unknown if the CR does not comply with the recommended use of the spec.deploymentPlan.image , spec.deploymentPlan.initImage and the spec.version attribute in a CR. For more information, see Section 6.4.3, "Validation of restrictions applied to automatic upgrades" . Status information is provided for the following conditions: Condition name Displays the status of... Valid The validation of the CR. 
If the status of the Valid condition is False , the Operator does not complete the reconciliation and update the StatefulSet until you first resolve the issue that caused the false status. Deployed The availability of the StatefulSet, Pods and other resources. Ready A top-level condition which summarizes the other more detailed conditions. The Ready condition has a status of True only if none of the other conditions have a status of False . BrokerPropertiesApplied The properties configured in the CR that use the brokerProperties attribute. For more information about the BrokerPropertiesApplied condition, see Section 4.17, "Configuring items not exposed in the Custom Resource Definition" . JaasPropertiesApplied The Java Authentication and Authorization Service (JAAS) login modules configured in the CR. For more information about the JaasPropertiesApplied condition, see Section 4.3.1, "Configuring JAAS login modules in a secret" . View additional status information for your broker deployment in the status section of the CR. The following additional status information is displayed: deploymentPlanSize The number of broker Pods in the deployment. podstatus The status and name of each broker Pod in the deployment. version The version of the broker and the registry URLs of the broker and init container images that are deployed. upgrade The ability of the Operator to apply major, minor, patch and security updates to the deployment, which is determined by the values of the spec.deploymentPlan.image and spec.version attributes in the CR. If the spec.deploymentPlan.image attribute specifies the registry URL of a broker container image, the status of all upgrade types is False , which means that the Operator cannot upgrade the existing container images. If the spec.deploymentPlan.image attribute is not in the CR or has a value of placeholder , the configuration of the spec.version attribute affects the upgrade status as follows: The status of securityUpdates is True , irrespective of whether the spec.version attribute is configured or its value. The status of patchUpdates is True if the value of the spec.version attribute has only a major and a minor version, for example, '7.10', so the Operator can upgrade to the latest patch version of the container images. The status of minorUpdates is True if the value of the spec.version attribute has only a major version, for example, '7', so the Operator can upgrade to the latest minor and patch versions of the container images. The status of majorUpdates is True if the spec.version attribute is not in the CR, so any available upgrades can be deployed, including an upgrade from 7.x.x to 8.x.x, if this version is available. | [
"mkdir ~/broker/operator mv amq-broker-operator-7.11.7-ocp-install-examples.zip ~/broker/operator",
"cd ~/broker/operator unzip amq-broker-operator-7.11.7-ocp-install-examples.zip",
"cd amq-broker-operator-7.11.7-ocp-install-examples",
"oc login -u system:admin",
"oc new-project <project_name>",
"oc project <project_name>",
"oc create -f deploy/service_account.yaml",
"oc create -f deploy/role.yaml",
"metadata: name: amq-broker-operator-rolebinding subjects: kind: ServiceAccount name: amq-broker-controller-manager roleRef: kind: Role name: amq-broker-operator-role",
"oc create -f deploy/role_binding.yaml",
"oc create -f deploy/election_role.yaml",
"oc create -f deploy/election_role_binding.yaml",
"- name: WATCH_NAMESPACE value: \"*\" valueFrom: fieldRef: fieldPath: metadata.namespace",
"- name: WATCH_NAMESPACE value: \"namespace1, namespace2\"`.",
"Subjects: - kind: ServiceAccount name: amq-broker-controller-manager namespace: operator-project",
"oc create -f deploy/cluster_role.yaml",
"oc create -f deploy/cluster_role_binding.yaml",
"oc login -u system:admin",
"oc project <project_name>",
"cd ~/broker/operator/amq-broker-operator-7.11.7-ocp-install-examples",
"oc create -f deploy/crds/broker_activemqartemis_crd.yaml",
"oc create -f deploy/crds/broker_activemqartemisaddress_crd.yaml",
"oc create -f deploy/crds/broker_activemqartemisscaledown_crd.yaml",
"oc create -f deploy/crds/broker_activemqartemissecurity_crd.yaml",
"oc secrets link --for=pull default <secret_name> oc secrets link --for=pull deployer <secret_name> oc secrets link --for=pull builder <secret_name>",
"spec: template: spec: containers: #image: registry.redhat.io/amq7/amq-broker-rhel8-operator:7.10 image: registry.redhat.io/amq7/amq-broker-rhel8-operator@sha256:d5c10ee9a342d7fd59bdd3cfae25029548f5081c985c9bdf5656dbd5e8877e0e",
"oc create -f deploy/operator.yaml",
"{\"level\":\"info\",\"ts\":1553619035.8302743,\"logger\":\"kubebuilder.controller\",\"msg\":\"Starting Controller\",\"controller\":\"activemqartemisaddress-controller\"} {\"level\":\"info\",\"ts\":1553619035.830541,\"logger\":\"kubebuilder.controller\",\"msg\":\"Starting Controller\",\"controller\":\"activemqartemis-controller\"} {\"level\":\"info\",\"ts\":1553619035.9306898,\"logger\":\"kubebuilder.controller\",\"msg\":\"Starting workers\",\"controller\":\"activemqartemisaddress-controller\",\"worker count\":1} {\"level\":\"info\",\"ts\":1553619035.9311671,\"logger\":\"kubebuilder.controller\",\"msg\":\"Starting workers\",\"controller\":\"activemqartemis-controller\",\"worker count\":1}",
"login -u <user> -p <password> --server= <host:port>",
"apiVersion: broker.amq.io/v1beta1 kind: ActiveMQArtemis metadata: name: ex-aao spec: deploymentPlan: size: 1 image: placeholder requireLogin: false persistenceEnabled: true journalType: nio messageMigration: true",
"oc project <project_name>",
"oc create -f <path/to/custom_resource_instance> .yaml",
"oc get pods -o wide NAME STATUS IP amq-broker-operator-54d996c Running 10.129.2.14 ex-aao-ss-0 Running 10.129.2.15",
"oc rsh ex-aao-ss-0",
"sh-4.2USD ./amq-broker/bin/artemis producer --url tcp://10.129.2.15:61616 --destination queue://demoQueue",
"Connection brokerURL = tcp://10.129.2.15:61616 Producer ActiveMQQueue[demoQueue], thread=0 Started to calculate elapsed time Producer ActiveMQQueue[demoQueue], thread=0 Produced: 1000 messages Producer ActiveMQQueue[demoQueue], thread=0 Elapsed time in second : 3 s Producer ActiveMQQueue[demoQueue], thread=0 Elapsed time in milli second : 3492 milli seconds",
"apiVersion: broker.amq.io/v1beta1 kind: ActiveMQArtemis metadata: name: ex-aao spec: deploymentPlan: size: 4 image: placeholder",
"oc login -u <user> -p <password> --server= <host:port>",
"oc project <project_name>",
"oc apply -f <path/to/custom_resource_instance> .yaml",
"targetConnector=ServerLocatorImpl (identity=(Cluster-connection-bridge::ClusterConnectionBridge@6f13fb88",
"oc login -u system:admin",
"apiVersion: apps/v1 kind: Deployment metadata: labels: control-plane: controller-manager name: amq-broker-controller-manager spec: containers: - args: - --zap-log-level=error",
"sed 's/--zap-log-level=info/--zap-log-level=error/' deploy/operator.yaml | oc apply -f -",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription spec: config: env: - name: ARGS value: \"--zap-log-level=debug\"",
"get ActiveMQArtemis < CR instance name > -n < namespace > -o yaml"
] | https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.11/html/deploying_amq_broker_on_openshift/deploying-broker-on-ocp-using-operator_broker-ocp |
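To complement the deployment and status procedures above, the following command-line checks are one way to confirm that the Operator has reconciled the sample CR. This is a sketch rather than part of the documented procedure; the CR name ex-aao, the StatefulSet name ex-aao-ss, and the namespace amq-demo are illustrative assumptions based on the sample CR.
# Confirm that the Operator created the StatefulSet and the broker Pods (names are assumptions)
oc get statefulset ex-aao-ss -n amq-demo
oc get pods -n amq-demo
# Print only the Ready condition recorded in the CR status
oc get activemqartemis ex-aao -n amq-demo -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
If the last command prints True, the deployment is reconciled and none of the more detailed conditions report a failure.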
Chapter 8. Consumer Groups page | Chapter 8. Consumer Groups page The Consumer Groups page shows all the consumer groups associated with a Kafka cluster. For each consumer group, you can see its status, the overall consumer lag across all partitions, and the number of members. Click on associated topics to show the topic information available from the Topics page tabs . Consumer group status can be one of the following: Stable indicates normal functioning Rebalancing indicates ongoing adjustments to the consumer group's members. Empty suggests no active members. If in the empty state, consider adding members to the group. Check group members by clicking on a consumer group name. Select the options icon (three vertical dots) against a consumer group to reset consumer offsets. 8.1. Checking consumer group members Check the members of a specific consumer group from the Consumer Groups page. Procedure From the Streams for Apache Kafka Console, log in to the Kafka cluster, then click Consumer Groups . Click the name of the consumer group you want to check from the Consumer Groups page. Click on the right arrow (>) next to a member ID to see the topic partitions a member is associated with, as well as any possible consumer lag. For each group member, you see the unique (consumer) client ID assigned to the consumer within the consumer group, overall consumer lag, and the number of assigned partitions. Any consumer lag for a specific topic partition reflects the gap between the last message a consumer has picked up (committed offset position) and the latest message written by the producer (end offset position). 8.2. Resetting consumer offsets Reset the consumer offsets of a specific consumer group from the Consumer Groups page. You might want to do this when reprocessing old data, skipping unwanted messages, or recovering from downtime. Prerequisites All active members of the consumer group must be shut down before resetting the consumer offsets. Procedure From the Streams for Apache Kafka Console, log in to the Kafka cluster, then click Consumer Groups . Click the options icon (three vertical dots) for the consumer group and click the reset consumer offset option to display the Reset consumer offset page. Choose to apply the offset reset to all consumer topics associated with the consumer group or select a specific topic. If you selected a topic, choose to apply the offset reset to all partitions or select a specific partition. Choose the position to reset the offset: Custom offset If you selected custom offset, enter the custom offset value. Latest offset Earliest offset Specific date and time If you selected date and time, choose the appropriate format and enter the date in that format. Click Reset to perform the offset reset. Performing a dry run Before actually executing the offset reset, you can use the dry run option to see which offsets would be reset before applying the change. From the Reset consumer offset page, click the down arrow next to Dry run . Choose the option to run and show the results in the console. Or you can copy the dry run command and run it independently against the consumer group. The results in the console show the new offsets for each topic partition included in the reset operation. A download option is available for the results. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/using_the_streams_for_apache_kafka_console/con-consumer-groups-page-str
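The dry run option in the console mirrors what the standard Kafka command-line tool can report. As a hedged sketch (the bootstrap address, group name, and topic name below are assumptions, not values from this guide), an equivalent dry run from a terminal might look like this:
# Preview an offset reset without executing it; replace --dry-run with --execute to apply it
bin/kafka-consumer-groups.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --group my-group --topic my-topic --reset-offsets --to-earliest --dry-run
The output lists each topic partition together with the offset it would be reset to, similar to the results shown in the console.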
Chapter 23. Performing health checks on Red Hat Quay deployments | Chapter 23. Performing health checks on Red Hat Quay deployments Health check mechanisms are designed to assess the health and functionality of a system, service, or component. Health checks help ensure that everything is working correctly, and can be used to identify potential issues before they become critical problems. By monitoring the health of a system, Red Hat Quay administrators can address abnormalities or potential failures for things like geo-replication deployments, Operator deployments, standalone Red Hat Quay deployments, object storage issues, and so on. Performing health checks can also help reduce the likelihood of encountering troubleshooting scenarios. Health check mechanisms can play a role in diagnosing issues by providing valuable information about the system's current state. By comparing health check results with expected benchmarks or predefined thresholds, deviations or anomalies can be identified quicker. 23.1. Red Hat Quay health check endpoints Important Links contained herein to any external website(s) are provided for convenience only. Red Hat has not reviewed the links and is not responsible for the content or its availability. The inclusion of any link to an external website does not imply endorsement by Red Hat of the website or its entities, products, or services. You agree that Red Hat is not responsible or liable for any loss or expenses that may result due to your use of (or reliance on) the external site or content. Red Hat Quay has several health check endpoints. The following table shows you the health check, a description, an endpoint, and an example output. Table 23.1. Health check endpoints Health check Description Endpoint Example output instance The instance endpoint acquires the entire status of the specific Red Hat Quay instance. Returns a dict with key-value pairs for the following: auth , database , disk_space , registry_gunicorn , service_key , and web_gunicorn. Returns a number indicating the health check response of either 200 , which indicates that the instance is healthy, or 503 , which indicates an issue with your deployment. https://{quay-ip-endpoint}/health/instance or https://{quay-ip-endpoint}/health {"data":{"services":{"auth":true,"database":true,"disk_space":true,"registry_gunicorn":true,"service_key":true,"web_gunicorn":true}},"status_code":200} endtoend The endtoend endpoint conducts checks on all services of your Red Hat Quay instance. Returns a dict with key-value pairs for the following: auth , database , redis , storage . Returns a number indicating the health check response of either 200 , which indicates that the instance is healthy, or 503 , which indicates an issue with your deployment. https://{quay-ip-endpoint}/health/endtoend {"data":{"services":{"auth":true,"database":true,"redis":true,"storage":true}},"status_code":200} warning The warning endpoint conducts a check on the warnings. Returns a dict with key-value pairs for the following: disk_space_warning . Returns a number indicating the health check response of either 200 , which indicates that the instance is healthy, or 503 , which indicates an issue with your deployment. https://{quay-ip-endpoint}/health/warning {"data":{"services":{"disk_space_warning":true}},"status_code":503} 23.2. Navigating to a Red Hat Quay health check endpoint Use the following procedure to navigate to the instance endpoint. This procedure can be repeated for endtoend and warning endpoints. 
Procedure On your web browser, navigate to https://{quay-ip-endpoint}/health/instance . You are taken to the health instance page, which returns information like the following: {"data":{"services":{"auth":true,"database":true,"disk_space":true,"registry_gunicorn":true,"service_key":true,"web_gunicorn":true}},"status_code":200} For Red Hat Quay, "status_code": 200 means that the instance is healthy. Conversely, if you receive "status_code": 503 , there is an issue with your deployment. | [
"{\"data\":{\"services\":{\"auth\":true,\"database\":true,\"disk_space\":true,\"registry_gunicorn\":true,\"service_key\":true,\"web_gunicorn\":true}},\"status_code\":200}"
] | https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/manage_red_hat_quay/health-check-quay |
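A command-line check against the same endpoint can be useful in monitoring scripts. The following is a minimal sketch; the registry hostname is an assumption, and the -k option is only needed if the endpoint uses a self-signed certificate.
# Query the instance health endpoint and pretty-print the JSON response
curl -sk https://quay.example.com/health/instance | python3 -m json.tool
A healthy instance returns "status_code": 200 in the JSON body, while 503 indicates that one of the reported services is failing.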
20.3. Event Sequence of an SSH Connection | 20.3. Event Sequence of an SSH Connection The following series of events help protect the integrity of SSH communication between two hosts. A cryptographic handshake is made so that the client can verify that it is communicating with the correct server. The transport layer of the connection between the client and remote host is encrypted using a symmetric cipher. The client authenticates itself to the server. The remote client interacts with the remote host over the encrypted connection. 20.3.1. Transport Layer The primary role of the transport layer is to facilitate safe and secure communication between the two hosts at the time of authentication and during subsequent communication. The transport layer accomplishes this by handling the encryption and decryption of data, and by providing integrity protection of data packets as they are sent and received. The transport layer also provides compression, speeding the transfer of information. Once an SSH client contacts a server, key information is exchanged so that the two systems can correctly construct the transport layer. The following steps occur during this exchange: Keys are exchanged The public key encryption algorithm is determined The symmetric encryption algorithm is determined The message authentication algorithm is determined The hash algorithm is determined During the key exchange, the server identifies itself to the client with a unique host key . If the client has never communicated with this particular server before, the server's host key is unknown to the client and it does not connect. OpenSSH gets around this problem by accepting the server's host key after the user is notified and verifies the acceptance of the new host key. In subsequent connections, the server's host key is checked against the saved version on the client, providing confidence that the client is indeed communicating with the intended server. If, in the future, the host key no longer matches, the user must remove the client's saved version before a connection can occur. Warning It is possible for an attacker to masquerade as an SSH server during the initial contact since the local system does not know the difference between the intended server and a false one set up by an attacker. To help prevent this, verify the integrity of a new SSH server by contacting the server administrator before connecting for the first time or in the event of a host key mismatch. SSH is designed to work with almost any kind of public key algorithm or encoding format. After an initial key exchange creates a hash value used for exchanges and a shared secret value, the two systems immediately begin calculating new keys and algorithms to protect authentication and future data sent over the connection. After a certain amount of data has been transmitted using a given key and algorithm (the exact amount depends on the SSH implementation), another key exchange occurs, generating another set of hash values and a new shared secret value. Even if an attacker is able to determine the hash and shared secret value, this information is only useful for a limited period of time. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s1-ssh-conn |
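One practical way to verify a host key out of band, before the first connection or after a host key mismatch, is to have the server administrator read the fingerprint directly on the server and compare it with the fingerprint the client displays. A minimal sketch, assuming the common RSA host key location:
# On the server: print the fingerprint of the public host key
ssh-keygen -l -f /etc/ssh/ssh_host_rsa_key.pub
If the fingerprint matches the one shown by the client, the host key can be accepted; if it does not, treat the connection as untrusted and investigate.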
34.3. Additional Resources | 34.3. Additional Resources To learn more about configuring automated tasks, refer to the following resources. 34.3.1. Installed Documentation cron man page - overview of cron. crontab man pages in sections 1 and 5 - The man page in section 1 contains an overview of the crontab file. The man page in section 5 contains the format for the file and some example entries. /usr/share/doc/at- <version> /timespec contains more detailed information about the times that can be specified for cron jobs. at man page - description of at and batch and their command line options. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/automated_tasks-additional_resources |
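As a small illustration of the crontab format documented in the crontab man page in section 5, the following entry is hypothetical (the script path is an assumption). The five time fields are minute, hour, day of month, month, and day of week, followed by the command to run.
# Run a backup script at 02:30 on weekdays (Monday through Friday)
30 2 * * 1-5 /usr/local/bin/backup.sh
Entries like this are installed with crontab -e and listed with crontab -l.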
Chapter 10. Setting a custom cryptographic policy by using RHEL system roles | Chapter 10. Setting a custom cryptographic policy by using RHEL system roles Custom cryptographic policies are a set of rules and configurations that manage the use of cryptographic algorithms and protocols. These policies help you to maintain a protected, consistent, and manageable security environment across multiple systems and applications. By using the crypto_policies RHEL system role, you can quickly and consistently configure custom cryptographic policies across many operating systems in an automated fashion. 10.1. Enhancing security with the FUTURE cryptographic policy using the crypto_policies RHEL system role You can use the crypto_policies RHEL system role to configure the FUTURE policy on your managed nodes. This policy helps to achieve, for example: Future-proofing against emerging threats: anticipates advancements in computational power. Enhanced security: stronger encryption standards require longer key lengths and more secure algorithms. Compliance with high-security standards: for example in healthcare, telco, and finance the data sensitivity is high, and availability of strong cryptography is critical. Typically, FUTURE is suitable for environments handling highly sensitive data, preparing for future regulations, or adopting long-term security strategies. Warning Legacy systems or software might not support the more modern and stricter algorithms and protocols enforced by the FUTURE policy. For example, older systems might not support TLS 1.3 or larger key sizes. This could lead to compatibility problems. Also, using strong algorithms usually increases the computational workload, which could negatively affect your system performance. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Configure cryptographic policies hosts: managed-node-01.example.com tasks: - name: Configure the FUTURE cryptographic security policy on the managed node ansible.builtin.include_role: name: rhel-system-roles.crypto_policies vars: - crypto_policies_policy: FUTURE - crypto_policies_reboot_ok: true The settings specified in the example playbook include the following: crypto_policies_policy: FUTURE Configures the required cryptographic policy ( FUTURE ) on the managed node. It can be either the base policy or a base policy with some sub-policies. The specified base policy and sub-policies have to be available on the managed node. The default value is null . It means that the configuration is not changed and the crypto_policies RHEL system role will only collect the Ansible facts. crypto_policies_reboot_ok: true Causes the system to reboot after the cryptographic policy change to make sure all of the services and applications will read the new configuration files. The default value is false . For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.crypto_policies/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook: Warning Because the FIPS:OSPP system-wide subpolicy contains further restrictions for cryptographic algorithms required by the Common Criteria (CC) certification, the system is less interoperable after you set it. For example, you cannot use RSA and DH keys shorter than 3072 bits, additional SSH algorithms, and several TLS groups. Setting FIPS:OSPP also prevents connecting to Red Hat Content Delivery Network (CDN) structure. Furthermore, you cannot integrate Active Directory (AD) into the IdM deployments that use FIPS:OSPP , communication between RHEL hosts using FIPS:OSPP and AD domains might not work, or some AD accounts might not be able to authenticate. Note that your system is not CC-compliant after you set the FIPS:OSPP cryptographic subpolicy. The only correct way to make your RHEL system compliant with the CC standard is by following the guidance provided in the cc-config package. See the Common Criteria section on the Product compliance Red Hat Customer Portal page for a list of certified RHEL versions, validation reports, and links to CC guides hosted at the National Information Assurance Partnership (NIAP) website. Verification On the control node, create another playbook named, for example, verify_playbook.yml : --- - name: Verification hosts: managed-node-01.example.com tasks: - name: Verify active cryptographic policy ansible.builtin.include_role: name: rhel-system-roles.crypto_policies - name: Display the currently active cryptographic policy ansible.builtin.debug: var: crypto_policies_active The settings specified in the example playbook include the following: crypto_policies_active An exported Ansible fact that contains the currently active policy name in the format as accepted by the crypto_policies_policy variable. Validate the playbook syntax: Run the playbook: The crypto_policies_active variable shows the active policy on the managed node. Additional resources /usr/share/ansible/roles/rhel-system-roles.crypto_policies/README.md file /usr/share/doc/rhel-system-roles/crypto_policies/ directory update-crypto-policies(8) and crypto-policies(7) manual pages | [
"--- - name: Configure cryptographic policies hosts: managed-node-01.example.com tasks: - name: Configure the FUTURE cryptographic security policy on the managed node ansible.builtin.include_role: name: rhel-system-roles.crypto_policies vars: - crypto_policies_policy: FUTURE - crypto_policies_reboot_ok: true",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"--- - name: Verification hosts: managed-node-01.example.com tasks: - name: Verify active cryptographic policy ansible.builtin.include_role: name: rhel-system-roles.crypto_policies - name: Display the currently active cryptographic policy ansible.builtin.debug: var: crypto_policies_active",
"ansible-playbook --syntax-check ~/verify_playbook.yml",
"ansible-playbook ~/verify_playbook.yml TASK [debug] ************************** ok: [host] => { \"crypto_policies_active\": \"FUTURE\" }"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/automating_system_administration_by_using_rhel_system_roles/setting-a-custom-cryptographic-policy-by-using-the-crypto-policies-rhel-system-role_automating-system-administration-by-using-rhel-system-roles |
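In addition to the verification playbook, you can check the active policy directly on a managed node, or ad hoc from the control node. The host name below is the example managed node used in this chapter, and the commands are an optional sketch rather than a required step; the ad hoc form assumes the node is present in your inventory.
# On the managed node: show the currently active system-wide cryptographic policy
update-crypto-policies --show
# Or ad hoc from the control node
ansible managed-node-01.example.com -m ansible.builtin.command -a 'update-crypto-policies --show'
Both commands should report FUTURE after the playbook has run and the node has rebooted.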
1.2.2. Horizontal Scalability | 1.2.2. Horizontal Scalability Red Hat's efforts in improving the performance of Red Hat Enterprise Linux 6 focus on scalability . Performance-boosting features are evaluated primarily based on how they affect the platform's performance in different areas of the workload spectrum - that is, from the lonely web server to the server farm mainframe. Focusing on scalability allows Red Hat Enterprise Linux to maintain its versatility for different types of workloads and purposes. At the same time, this means that as your business grows and your workload scales up, re-configuring your server environment is less prohibitive (in terms of cost and man-hours) and more intuitive. Red Hat makes improvements to Red Hat Enterprise Linux for both horizontal scalability and vertical scalability ; however, horizontal scalability is the more generally applicable use case. The idea behind horizontal scalability is to use multiple standard computers to distribute heavy workloads in order to improve performance and reliability. In a typical server farm, these standard computers come in the form of 1U rack-mounted servers and blade servers. Each standard computer may be as small as a simple two-socket system, although some server farms use large systems with more sockets. Some enterprise-grade networks mix large and small systems; in such cases, the large systems are high performance servers (for example, database servers) and the small ones are dedicated application servers (for example, web or mail servers). This type of scalability simplifies the growth of your IT infrastructure: a medium-sized business with an appropriate load might only need two pizza box servers to suit all their needs. As the business hires more people, expands its operations, increases its sales volumes and so forth, its IT requirements increase in both volume and complexity. Horizontal scalability allows IT to simply deploy additional machines with (mostly) identical configurations as their predecessors. To summarize, horizontal scalability adds a layer of abstraction that simplifies system hardware administration. By developing the Red Hat Enterprise Linux platform to scale horizontally, increasing the capacity and performance of IT services can be as simple as adding new, easily configured machines. 1.2.2.1. Parallel Computing Users benefit from Red Hat Enterprise Linux's horizontal scalability not just because it simplifies system hardware administration; but also because horizontal scalability is a suitable development philosophy given the current trends in hardware advancement. Consider this: most complex enterprise applications have thousands of tasks that must be performed simultaneously, with different coordination methods between tasks. While early computers had a single-core processor to juggle all these tasks, virtually all processors available today have multiple cores. Effectively, modern computers put multiple cores in a single socket, making even single-socket desktops or laptops multi-processor systems. As of 2010, standard Intel and AMD processors were available with two to sixteen cores. Such processors are prevalent in pizza box or blade servers, which can now contain as many as 40 cores. These low-cost, high-performance systems bring large system capabilities and characteristics into the mainstream. To achieve the best performance and utilization of a system, each core must be kept busy. This means that 32 separate tasks must be running to take advantage of a 32-core blade server. 
If a blade chassis contains ten of these 32-core blades, then the entire setup can process a minimum of 320 tasks simultaneously. If these tasks are part of a single job, they must be coordinated. Red Hat Enterprise Linux was developed to adapt well to hardware development trends and ensure that businesses can fully benefit from them. Section 1.2.3, "Distributed Systems" explores the technologies that enable Red Hat Enterprise Linux's horizontal scalability in greater detail. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/performance_tuning_guide/intro-horizontal |
8.126. nss and nspr | 8.126. nss and nspr 8.126.1. RHBA-2013:1558 - nss and nspr bug fix and enhancement update Updated nss and nspr packages that fix a number of bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. Network Security Services ( NSS ) is a set of libraries designed to support the cross-platform development of security-enabled client and server applications. Netscape Portable Runtime ( NSPR ) provides platform independence for non-GUI operating system facilities. Note The nss family of packages, consisting of nss , nss-softokn , and nss-util , has been upgraded to the higher upstream versions, which provide a number of bug fixes and enhancements over the previous versions: The nss package has been upgraded to the upstream version 3.15.1. (BZ# 918950 , BZ# 1002645 ) The nss-softokn package has been upgraded to the upstream version 3.14.3 (BZ# 919172 ) The nss-util package has been upgraded to the upstream version 3.15.1 (BZ# 919174 , BZ# 1002644 ) The nspr package has been upgraded to upstream version 4.10, which provides a number of bug fixes and enhancements over the previous version. (BZ# 919180 , BZ# 1002643 ) Bug Fixes BZ# 702083 The PEM module imposed restrictions on client applications to use unique base file names upon which certificates were derived. Consequently, client applications' certificates and keys with the same base name but different file paths failed to load because they were incorrectly deemed to be duplicates. The comparison algorithm has been modified and the PEM module now correctly determines uniqueness regardless of how users name their files. BZ# 882408 Due to differences in the upstream version of the nss package, an attempt to enable the unsupported SSL PKCS#11 bypass feature failed with a fatal error message. This behavior could break the semantics of certain calls, thus breaking the Application Binary Interface ( ABI ) compatibility. With this update, the nss package has been modified to preserve the upstream behavior. As a result, an attempt to enable SSL PKCS#11 bypass no longer fails. BZ# 903017 Previously, there was a race condition in the certification code related to smart cards. Consequently, when Common Access Card ( CAC ) or Personal Identity Verification ( PIV ) smart card certificates were viewed in the Firefox certificate manager, the Firefox web browser became unresponsive. The underlying source code has been modified to fix the race condition and Firefox no longer hangs in the described scenario. BZ# 905013 Due to errors in the Netscape Portable Runtime ( NSPR ) code responsible for thread synchronization, memory corruption sometimes occurred. Consequently, the web server daemon ( httpd ) sometimes terminated unexpectedly with a segmentation fault after making more than 1023 calls to the NSPR library. With this update, an improvement to the way NSPR frees previously allocated memory has been made and httpd no longer crashes in the described scenario. BZ# 918136 With the 3.14 upstream version of the nss package, support for certificate signatures using the MD5 hash algorithm in digital signatures has been disabled by default. However, certain websites still use MD5-based signatures and therefore an attempt to access such a website failed with an error. With this update, MD5 hash algorithm in digital signatures is supported again so that users can connect to the websites using this algorithm as expected.
BZ# 976572 With this update, fixes to the implementation of Galois/Counter Mode ( GCM ) have been backported to the nss package since the upstream version 3.14.1. As a result, users can use GCM without any problems already documented and fixed in the upstream version. BZ# 977341 Previously, the output of the certutil -H command, which is a list of options and arguments used by the certutil utility, did not describe the -F option. This information has been added and the option is now properly described in the output of certutil -H . BZ# 988083 Previously, the pkcs11n.h header was missing certain constants to support the Transport Layer Security ( TLS ) 1.2 protocol. The constants have been added to the nss-util package and NSS now supports TLS 1.2 as expected. BZ# 990631 Previously, Network Security Service ( NSS ) reverted the permission rights for the pkcs11.txt file so that only the owner of the file could read it and write to it. This behavior overwrote other permissions specified by the user. Consequently, users were prevented from adding security modules to their own configuration using the system-wide security databases. This update provides a patch to fix this bug. As a result, NSS preserves the existing permissions for pkcs11.txt and users are now able to modify the NSS security module database. BZ# 1008534 Due to a bug in Network Security Services ( NSS ), the installation of the IPA (Identity, Policy, Audit) server terminated unexpectedly and an error was returned. This bug has been fixed with this update and installation of the IPA server now proceeds as expected. BZ# 1010224 The NSS softoken cryptographic module did not ensure whether the freebl library had been properly initialized before running its self test. Consequently, certain clients, such as the Lightweight Directory Access Protocol ( LDAP ) client, could initialize and finalize NSS. In such a case, freebl was cleaned up and unloaded. When the library was loaded again, an attempt to run the test terminated unexpectedly causing client failures such as Transport Layer Security ( TLS ) connection errors. This bug has been fixed and softoken now correctly initializes freebl before running self tests. As a result, the failures no longer occur in the described scenario. Enhancements BZ# 960193 , BZ# 960208 Network Security Services's ( NSS ) own internal cryptographic module in Red Hat Enterprise Linux 6.5 now supports the NIST Suite B set of recommended algorithms for Elliptic curve cryptography ( ECC ). Users of nss and nspr are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. After installing this update, applications using NSS or NSPR must be restarted for this update to take effect.
Installing on IBM Z and IBM LinuxONE | Installing on IBM Z and IBM LinuxONE OpenShift Container Platform 4.13 Installing OpenShift Container Platform on IBM Z and IBM LinuxONE Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_ibm_z_and_ibm_linuxone/index |
Overcloud Parameters | Overcloud Parameters Red Hat OpenStack Platform 17.0 Parameters for customizing the core template collection for a Red Hat OpenStack Platform overcloud OpenStack Documentation Team [email protected] Abstract This guide lists parameters that might be used in the deployment of OpenStack using the Orchestration service (heat). The parameters and definitions are extracted from the upstream source code, and not all parameters that are listed can be used in a supported configuration. | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/overcloud_parameters/index |
4.4. Creating a Cluster | 4.4. Creating a Cluster Creating a cluster with luci consists of naming a cluster, adding cluster nodes to the cluster, entering the ricci passwords for each node, and submitting the request to create a cluster. If the node information and passwords are correct, Conga automatically installs software into the cluster nodes (if the appropriate software packages are not currently installed) and starts the cluster. Create a cluster as follows: Click Manage Clusters from the menu on the left side of the luci Homebase page. The Clusters screen appears, as shown in Figure 4.2, "luci cluster management page" . Figure 4.2. luci cluster management page Click Create . The Create New Cluster dialog box appears, as shown in Figure 4.3, "luci cluster creation dialog box" . Figure 4.3. luci cluster creation dialog box Enter the following parameters on the Create New Cluster dialog box, as necessary: At the Cluster Name text box, enter a cluster name. The cluster name cannot exceed 15 characters. If each node in the cluster has the same ricci password, you can check Use the same password for all nodes to autofill the password field as you add nodes. Enter the node name for a node in the cluster in the Node Name column. A node name can be up to 255 bytes in length. After you have entered the node name, the node name is reused as the ricci host name. If your system is configured with a dedicated private network that is used only for cluster traffic, you may want to configure luci to communicate with ricci on an address that is different from the address to which the cluster node name resolves. You can do this by entering that address as the Ricci Hostname . As of Red Hat Enterprise Linux 6.9, after you have entered the node name and, if necessary, adjusted the ricci host name, the fingerprint of the certificate of the ricci host is displayed for confirmation. You can verify whether this matches the expected fingerprint. If it is legitimate, enter the ricci password and add the node. You can remove the fingerprint display by clicking on the display window, and you can restore this display (or enforce it at any time) by clicking the View Certificate Fingerprints button. Important It is strongly advised that you verify the certificate fingerprint of the ricci server you are going to authenticate against. Providing an unverified entity on the network with the ricci password may constitute a confidentiality breach, and communication with an unverified entity may cause an integrity breach. If you are using a different port for the ricci agent than the default of 11111, you can change that parameter. Click Add Another Node and enter the node name and ricci password for each additional node in the cluster. Figure 4.4, "luci cluster creation with certificate fingerprint display" . shows the Create New Cluster dialog box after two nodes have been entered, showing the certificate fingerprints of the ricci hosts (Red Hat Enterprise Linux 6.9 and later). Figure 4.4. luci cluster creation with certificate fingerprint display If you do not want to upgrade the cluster software packages that are already installed on the nodes when you create the cluster, leave the Use locally installed packages option selected. If you want to upgrade all cluster software packages, select the Download Packages option. 
Note Whether you select the Use locally installed packages or the Download Packages option, if any of the base cluster components are missing ( cman , rgmanager , modcluster and all their dependencies), they will be installed. If they cannot be installed, the node creation will fail. Check Reboot nodes before joining cluster if desired. Select Enable shared storage support if clustered storage is required; this downloads the packages that support clustered storage and enables clustered LVM. You should select this only when you have access to the Resilient Storage Add-On or the Scalable File System Add-On. Click Create Cluster . Clicking Create Cluster causes the following actions: If you have selected Download Packages , the cluster software packages are downloaded onto the nodes. Cluster software is installed onto the nodes (or it is verified that the appropriate software packages are installed). The cluster configuration file is updated and propagated to each node in the cluster. The added nodes join the cluster. A message is displayed indicating that the cluster is being created. When the cluster is ready, the display shows the status of the newly created cluster, as shown in Figure 4.5, "Cluster node display" . Note that if ricci is not running on any of the nodes, the cluster creation will fail. Figure 4.5. Cluster node display After clicking Create Cluster , you can add or delete nodes from the cluster by clicking the Add or Delete function from the menu at the top of the cluster node display page. Unless you are deleting an entire cluster, nodes must be stopped before being deleted. For information on deleting a node from an existing cluster that is currently in operation, see Section 5.3.4, "Deleting a Member from a Cluster" . Warning Removing a cluster node from the cluster is a destructive operation that cannot be undone. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-creating-cluster-conga-CA |
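Because cluster creation fails if luci cannot reach the ricci agent, it can help to confirm on each node that ricci is running, enabled, and has a password set before you click Create Cluster. The following commands are a sketch using standard Red Hat Enterprise Linux 6 tooling; run them on each cluster node.
# Start the ricci agent and make it persistent across reboots
service ricci start
chkconfig ricci on
# Set the ricci password that you will enter in the luci dialog
passwd ricci
Also confirm that the ricci port (11111 by default, or the alternative port you configured) is open in each node's firewall.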
Preface | Preface As a developer of business decisions, you can use Red Hat build of Kogito to build cloud-native applications that adapt your business domain and tooling. | null | https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/getting_started_with_red_hat_build_of_kogito_in_red_hat_decision_manager/pr01 |
Chapter 10. Namespace [v1] | Chapter 10. Namespace [v1] Description Namespace provides a scope for Names. Use of multiple namespaces is optional. Type object 10.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object NamespaceSpec describes the attributes on a Namespace. status object NamespaceStatus is information about the current status of a Namespace. 10.1.1. .spec Description NamespaceSpec describes the attributes on a Namespace. Type object Property Type Description finalizers array (string) Finalizers is an opaque list of values that must be empty to permanently remove object from storage. More info: https://kubernetes.io/docs/tasks/administer-cluster/namespaces/ 10.1.2. .status Description NamespaceStatus is information about the current status of a Namespace. Type object Property Type Description conditions array Represents the latest available observations of a namespace's current state. conditions[] object NamespaceCondition contains details about state of namespace. phase string Phase is the current lifecycle phase of the namespace. More info: https://kubernetes.io/docs/tasks/administer-cluster/namespaces/ Possible enum values: - "Active" means the namespace is available for use in the system - "Terminating" means the namespace is undergoing graceful termination 10.1.3. .status.conditions Description Represents the latest available observations of a namespace's current state. Type array 10.1.4. .status.conditions[] Description NamespaceCondition contains details about state of namespace. Type object Required type status Property Type Description lastTransitionTime Time message string reason string status string Status of the condition, one of True, False, Unknown. type string Type of namespace controller condition. 10.2. API endpoints The following API endpoints are available: /api/v1/namespaces GET : list or watch objects of kind Namespace POST : create a Namespace /api/v1/watch/namespaces GET : watch individual changes to a list of Namespace. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/namespaces/{name} DELETE : delete a Namespace GET : read the specified Namespace PATCH : partially update the specified Namespace PUT : replace the specified Namespace /api/v1/watch/namespaces/{name} GET : watch changes to an object of kind Namespace. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /api/v1/namespaces/{name}/status GET : read status of the specified Namespace PATCH : partially update status of the specified Namespace PUT : replace status of the specified Namespace /api/v1/namespaces/{name}/finalize PUT : replace finalize of the specified Namespace 10.2.1. /api/v1/namespaces Table 10.1. 
Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description list or watch objects of kind Namespace Table 10.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. 
resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 10.3. HTTP responses HTTP code Reponse body 200 - OK NamespaceList schema 401 - Unauthorized Empty HTTP method POST Description create a Namespace Table 10.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.5. Body parameters Parameter Type Description body Namespace schema Table 10.6. HTTP responses HTTP code Reponse body 200 - OK Namespace schema 201 - Created Namespace schema 202 - Accepted Namespace schema 401 - Unauthorized Empty 10.2.2. /api/v1/watch/namespaces Table 10.7. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of Namespace. deprecated: use the 'watch' parameter with a list operation instead. Table 10.8. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 10.2.3. /api/v1/namespaces/{name} Table 10.9. Global path parameters Parameter Type Description name string name of the Namespace Table 10.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a Namespace Table 10.11. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 10.12. Body parameters Parameter Type Description body DeleteOptions schema Table 10.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Namespace Table 10.14. HTTP responses HTTP code Reponse body 200 - OK Namespace schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Namespace Table 10.15. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. 
The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 10.16. Body parameters Parameter Type Description body Patch schema Table 10.17. HTTP responses HTTP code Reponse body 200 - OK Namespace schema 201 - Created Namespace schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Namespace Table 10.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.19. Body parameters Parameter Type Description body Namespace schema Table 10.20. HTTP responses HTTP code Reponse body 200 - OK Namespace schema 201 - Created Namespace schema 401 - Unauthorized Empty 10.2.4. /api/v1/watch/namespaces/{name} Table 10.21. Global path parameters Parameter Type Description name string name of the Namespace Table 10.22. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. 
If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. 
The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind Namespace. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 10.23. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 10.2.5. /api/v1/namespaces/{name}/status Table 10.24. Global path parameters Parameter Type Description name string name of the Namespace Table 10.25. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified Namespace Table 10.26. HTTP responses HTTP code Reponse body 200 - OK Namespace schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Namespace Table 10.27. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. 
It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 10.28. Body parameters Parameter Type Description body Patch schema Table 10.29. HTTP responses HTTP code Reponse body 200 - OK Namespace schema 201 - Created Namespace schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Namespace Table 10.30. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.31. Body parameters Parameter Type Description body Namespace schema Table 10.32. HTTP responses HTTP code Reponse body 200 - OK Namespace schema 201 - Created Namespace schema 401 - Unauthorized Empty 10.2.6. /api/v1/namespaces/{name}/finalize Table 10.33. Global path parameters Parameter Type Description name string name of the Namespace Table 10.34. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. pretty string If 'true', then the output is pretty printed. HTTP method PUT Description replace finalize of the specified Namespace Table 10.35. Body parameters Parameter Type Description body Namespace schema Table 10.36. HTTP responses HTTP code Response body 200 - OK Namespace schema 201 - Created Namespace schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/metadata_apis/namespace-v1
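For reference, the following manifest is a minimal sketch of a Namespace object that ties together the fields documented in the specification above. The metadata name and the condition values are placeholders chosen for illustration; the status stanza is populated by the system and is shown only to indicate the shape of the documented fields, so it is not supplied when creating the object.

```yaml
# Minimal illustrative Namespace manifest (placeholder values).
apiVersion: v1
kind: Namespace
metadata:
  # Placeholder name; any valid object name may be used.
  name: example-namespace
spec:
  # Finalizers must be empty before the object can be permanently removed.
  finalizers:
    - kubernetes
# The status block below is system-managed and shown for illustration only.
status:
  # Possible phases: Active, Terminating
  phase: Active
  conditions:
    # Placeholder condition demonstrating the documented fields.
    - type: NamespaceDeletionContentFailure
      status: "False"
      reason: ContentRemoved
      message: All content was removed.
      lastTransitionTime: "2024-01-01T00:00:00Z"
```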
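The endpoints listed above can also be exercised directly over HTTPS. The following sketch assumes an API server at https://api.example.com:6443 and a bearer token stored in the TOKEN variable; both are placeholders and must be adjusted for the target cluster, and the -k flag (which skips TLS verification) is used only to keep the example self-contained. Each request maps to one of the documented paths and HTTP methods.

```bash
# Placeholder connection details; adjust for your cluster.
API_SERVER=https://api.example.com:6443
TOKEN="<bearer token with sufficient privileges>"

# List Namespaces (GET /api/v1/namespaces), limiting the result set.
curl -k -H "Authorization: Bearer ${TOKEN}" \
  "${API_SERVER}/api/v1/namespaces?limit=10"

# Create a Namespace (POST /api/v1/namespaces) with a minimal JSON body.
curl -k -X POST \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"apiVersion":"v1","kind":"Namespace","metadata":{"name":"example-namespace"}}' \
  "${API_SERVER}/api/v1/namespaces"

# Read the status subresource (GET /api/v1/namespaces/{name}/status).
curl -k -H "Authorization: Bearer ${TOKEN}" \
  "${API_SERVER}/api/v1/namespaces/example-namespace/status"

# Delete the Namespace (DELETE /api/v1/namespaces/{name}).
curl -k -X DELETE -H "Authorization: Bearer ${TOKEN}" \
  "${API_SERVER}/api/v1/namespaces/example-namespace"
```

A 401 response to any of these calls corresponds to the Unauthorized rows in the response tables above; 200, 201, and 202 indicate the success cases documented for each method.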
Chapter 2. cinder | Chapter 2. cinder The following chapter contains information about the configuration options in the cinder service. 2.1. cinder.conf This section contains options for the /etc/cinder/cinder.conf file. 2.1.1. DEFAULT The following table outlines the options available under the [DEFAULT] group in the /etc/cinder/cinder.conf file. . Configuration option = Default value Type Description allocated_capacity_weight_multiplier = -1.0 floating point value Multiplier used for weighing allocated capacity. Positive numbers mean to stack vs spread. allow_availability_zone_fallback = False boolean value If the requested Cinder availability zone is unavailable, fall back to the value of default_availability_zone, then storage_availability_zone, instead of failing. allow_compression_on_image_upload = False boolean value The strategy to use for image compression on upload. Default is disallow compression. allowed_direct_url_schemes = [] list value A list of url schemes that can be downloaded directly via the direct_url. Currently supported schemes: [file, cinder]. api_paste_config = api-paste.ini string value File name for the paste.deploy config for api service api_rate_limit = True boolean value Enables or disables rate limit of the API. as13000_ipsan_pools = ['Pool0'] list value The Storage Pools Cinder should use, a comma separated list. as13000_meta_pool = None string value The pool which is used as a meta pool when creating a volume, and it should be a replication pool at present. If not set, the driver will choose a replication pool from the value of as13000_ipsan_pools. as13000_token_available_time = 3300 integer value The effective time of token validity in seconds. auth_strategy = keystone string value The strategy to use for auth. Supports noauth or keystone. az_cache_duration = 3600 integer value Cache volume availability zones in memory for the provided duration in seconds backdoor_port = None string value Enable eventlet backdoor. Acceptable values are 0, <port>, and <start>:<end>, where 0 results in listening on a random tcp port number; <port> results in listening on the specified port number (and not enabling backdoor if that port is in use); and <start>:<end> results in listening on the smallest unused port number within the specified range of port numbers. The chosen port is displayed in the service's log file. backdoor_socket = None string value Enable eventlet backdoor, using the provided path as a unix socket that can receive connections. This option is mutually exclusive with backdoor_port in that only one should be provided. If both are provided then the existence of this option overrides the usage of that option. Inside the path {pid} will be replaced with the PID of the current process. backend_availability_zone = None string value Availability zone for this volume backend. If not set, the storage_availability_zone option value is used as the default for all backends. backend_stats_polling_interval = 60 integer value Time in seconds between requests for usage statistics from the backend. Be aware that generating usage statistics is expensive for some backends, so setting this value too low may adversely affect performance. backup_api_class = cinder.backup.api.API string value The full class name of the volume backup API class backup_ceph_chunk_size = 134217728 integer value The chunk size, in bytes, that a backup is broken into before transfer to the Ceph object store. backup_ceph_conf = /etc/ceph/ceph.conf string value Ceph configuration file to use. 
backup_ceph_image_journals = False boolean value If True, apply JOURNALING and EXCLUSIVE_LOCK feature bits to the backup RBD objects to allow mirroring backup_ceph_pool = backups string value The Ceph pool where volume backups are stored. backup_ceph_stripe_count = 0 integer value RBD stripe count to use when creating a backup image. backup_ceph_stripe_unit = 0 integer value RBD stripe unit to use when creating a backup image. backup_ceph_user = cinder string value The Ceph user to connect with. Default here is to use the same user as for Cinder volumes. If not using cephx this should be set to None. backup_compression_algorithm = zlib string value Compression algorithm ("none" to disable) backup_container = None string value Custom directory to use for backups. backup_driver = cinder.backup.drivers.swift.SwiftBackupDriver string value Driver to use for backups. backup_driver_init_check_interval = 60 integer value Time in seconds between checks to see if the backup driver has been successfully initialized, any time the driver is restarted. backup_driver_status_check_interval = 60 integer value Time in seconds between checks of the backup driver status. If does not report as working, it is restarted. backup_enable_progress_timer = True boolean value Enable or Disable the timer to send the periodic progress notifications to Ceilometer when backing up the volume to the backend storage. The default value is True to enable the timer. backup_file_size = 1999994880 integer value The maximum size in bytes of the files used to hold backups. If the volume being backed up exceeds this size, then it will be backed up into multiple files.backup_file_size must be a multiple of backup_sha_block_size_bytes. backup_gcs_block_size = 32768 integer value The size in bytes that changes are tracked for incremental backups. backup_gcs_object_size has to be multiple of backup_gcs_block_size. backup_gcs_bucket = None string value The GCS bucket to use. backup_gcs_bucket_location = US string value Location of GCS bucket. backup_gcs_credential_file = None string value Absolute path of GCS service account credential file. backup_gcs_enable_progress_timer = True boolean value Enable or Disable the timer to send the periodic progress notifications to Ceilometer when backing up the volume to the GCS backend storage. The default value is True to enable the timer. backup_gcs_num_retries = 3 integer value Number of times to retry. backup_gcs_object_size = 52428800 integer value The size in bytes of GCS backup objects. backup_gcs_project_id = None string value Owner project id for GCS bucket. backup_gcs_proxy_url = None uri value URL for http proxy access. backup_gcs_reader_chunk_size = 2097152 integer value GCS object will be downloaded in chunks of bytes. backup_gcs_retry_error_codes = ['429'] list value List of GCS error codes. backup_gcs_storage_class = NEARLINE string value Storage class of GCS bucket. backup_gcs_user_agent = gcscinder string value Http user-agent string for gcs api. backup_gcs_writer_chunk_size = 2097152 integer value GCS object will be uploaded in chunks of bytes. Pass in a value of -1 if the file is to be uploaded as a single chunk. backup_manager = cinder.backup.manager.BackupManager string value Full class name for the Manager for volume backup backup_metadata_version = 2 integer value Backup metadata version to be used when backing up volume metadata. If this number is bumped, make sure the service doing the restore supports the new version. 
backup_mount_attempts = 3 integer value The number of attempts to mount NFS shares before raising an error. backup_mount_options = None string value Mount options passed to the NFS client. See NFS man page for details. backup_mount_point_base = USDstate_path/backup_mount string value Base dir containing mount point for NFS share. backup_name_template = backup-%s string value Template string to be used to generate backup names backup_native_threads_pool_size = 60 integer value Size of the native threads pool for the backups. Most backup drivers rely heavily on this, it can be decreased for specific drivers that don't. backup_object_number_per_notification = 10 integer value The number of chunks or objects, for which one Ceilometer notification will be sent backup_posix_path = USDstate_path/backup string value Path specifying where to store backups. backup_service_inithost_offload = True boolean value Offload pending backup delete during backup service startup. If false, the backup service will remain down until all pending backups are deleted. backup_sha_block_size_bytes = 32768 integer value The size in bytes that changes are tracked for incremental backups. backup_file_size has to be multiple of backup_sha_block_size_bytes. backup_share = None string value NFS share in hostname:path, ipv4addr:path, or "[ipv6addr]:path" format. backup_swift_auth = per_user string value Swift authentication mechanism (per_user or single_user). backup_swift_auth_insecure = False boolean value Bypass verification of server certificate when making SSL connection to Swift. backup_swift_auth_url = None uri value The URL of the Keystone endpoint backup_swift_auth_version = 1 string value Swift authentication version. Specify "1" for auth 1.0, or "2" for auth 2.0 or "3" for auth 3.0 backup_swift_block_size = 32768 integer value The size in bytes that changes are tracked for incremental backups. backup_swift_object_size has to be multiple of backup_swift_block_size. backup_swift_ca_cert_file = None string value Location of the CA certificate file to use for swift client requests. backup_swift_container = volumebackups string value The default Swift container to use backup_swift_enable_progress_timer = True boolean value Enable or Disable the timer to send the periodic progress notifications to Ceilometer when backing up the volume to the Swift backend storage. The default value is True to enable the timer. backup_swift_key = None string value Swift key for authentication backup_swift_object_size = 52428800 integer value The size in bytes of Swift backup objects backup_swift_project = None string value Swift project/account name. Required when connecting to an auth 3.0 system backup_swift_project_domain = None string value Swift project domain name. Required when connecting to an auth 3.0 system backup_swift_retry_attempts = 3 integer value The number of retries to make for Swift operations backup_swift_retry_backoff = 2 integer value The backoff time in seconds between Swift retries backup_swift_tenant = None string value Swift tenant/account name. Required when connecting to an auth 2.0 system backup_swift_url = None uri value The URL of the Swift endpoint backup_swift_user = None string value Swift user name backup_swift_user_domain = None string value Swift user domain name. 
Required when connecting to an auth 3.0 system backup_timer_interval = 120 integer value Interval, in seconds, between two progress notifications reporting the backup status backup_tsm_compression = True boolean value Enable or Disable compression for backups backup_tsm_password = password string value TSM password for the running username backup_tsm_volume_prefix = backup string value Volume prefix for the backup id when backing up to TSM backup_use_same_host = False boolean value Backup services use same backend. backup_use_temp_snapshot = False boolean value If this is set to True, a temporary snapshot will be created for performing non-disruptive backups. Otherwise a temporary volume will be cloned in order to perform a backup. backup_workers = 1 integer value Number of backup processes to launch. Improves performance with concurrent backups. capacity_weight_multiplier = 1.0 floating point value Multiplier used for weighing free capacity. Negative numbers mean to stack vs spread. `chap_password = ` string value Password for specified CHAP account name. `chap_username = ` string value CHAP user name. chiscsi_conf = /etc/chelsio-iscsi/chiscsi.conf string value Chiscsi (CXT) global defaults configuration file cinder_internal_tenant_project_id = None string value ID of the project which will be used as the Cinder internal tenant. cinder_internal_tenant_user_id = None string value ID of the user to be used in volume operations as the Cinder internal tenant. client_socket_timeout = 900 integer value Timeout for client connections' socket operations. If an incoming connection is idle for this number of seconds it will be closed. A value of 0 means wait forever. clone_volume_timeout = 680 integer value Create clone volume timeout cloned_volume_same_az = True boolean value Ensure that the new volumes are the same AZ as snapshot or source volume cluster = None string value Name of this cluster. Used to group volume hosts that share the same backend configurations to work in HA Active-Active mode. Active-Active is not yet supported. compression_format = gzip string value Image compression format on image upload compute_api_class = cinder.compute.nova.API string value The full class name of the compute API class to use config-dir = ['~/.project/project.conf.d/', '~/project.conf.d/', '/etc/project/project.conf.d/', '/etc/project.conf.d/'] list value Path to a config directory to pull *.conf files from. This file set is sorted, so as to provide a predictable parse order if individual options are over-ridden. The set is parsed after the file(s) specified via --config-file, arguments hence over-ridden options in the directory take precedence. This option must be set from the command-line. config-file = ['~/.project/project.conf', '~/project.conf', '/etc/project/project.conf', '/etc/project.conf'] unknown value Path to a config file to use. Multiple config files can be specified, with values in later files taking precedence. Defaults to %(default)s. This option must be set from the command-line. config_source = [] list value Lists configuration groups that provide more details for accessing configuration settings from locations other than local files. 
conn_pool_min_size = 2 integer value The pool size limit for connections expiration policy conn_pool_ttl = 1200 integer value The time-to-live in sec of idle connections in the pool consistencygroup_api_class = cinder.consistencygroup.api.API string value The full class name of the consistencygroup API class control_exchange = openstack string value The default exchange under which topics are scoped. May be overridden by an exchange name specified in the transport_url option. db_driver = cinder.db string value Driver to use for database access debug = False boolean value If set to true, the logging level will be set to DEBUG instead of the default INFO level. default_availability_zone = None string value Default availability zone for new volumes. If not set, the storage_availability_zone option value is used as the default for new volumes. default_group_type = None string value Default group type to use default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO'] list value List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set. default_volume_type = None string value Default volume type to use driver_client_cert = None string value The path to the client certificate for verification, if the driver supports it. driver_client_cert_key = None string value The path to the client certificate key for verification, if the driver supports it. driver_data_namespace = None string value Namespace for driver private data values to be saved in. driver_ssl_cert_path = None string value Can be used to specify a non default path to a CA_BUNDLE file or directory with certificates of trusted CAs, which will be used to validate the backend driver_ssl_cert_verify = False boolean value If set to True the http client will validate the SSL certificate of the backend endpoint. driver_use_ssl = False boolean value Tell driver to use SSL for connection to backend storage if the driver supports it. dsware_isthin = False boolean value The flag of thin storage allocation. `dsware_manager = ` string value Fusionstorage manager ip addr for cinder-volume. `dsware_rest_url = ` string value The address of FusionStorage array. For example, "dsware_rest_url=xxx" `dsware_storage_pools = ` string value The list of pools on the FusionStorage array, the semicolon(;) was used to split the storage pools, "dsware_storage_pools = xxx1; xxx2; xxx3" enable_force_upload = False boolean value Enables the Force option on upload_to_image. This enables running upload_volume on in-use volumes for backends that support it. enable_new_services = True boolean value Services to be added to the available pool on create enable_unsupported_driver = False boolean value Set this to True when you want to allow an unsupported driver to start. Drivers that haven't maintained a working CI system and testing are marked as unsupported until CI is working again. This also marks a driver as deprecated and may be removed in the release. enable_v2_api = True boolean value DEPRECATED: Deploy v2 of the Cinder API. 
enable_v3_api = True boolean value Deploy v3 of the Cinder API. enabled_backends = None list value A list of backend names to use. These backend names should be backed by a unique [CONFIG] group with its options enforce_multipath_for_image_xfer = False boolean value If this is set to True, attachment of volumes for image transfer will be aborted when multipathd is not running. Otherwise, it will fallback to single path. executor_thread_pool_size = 64 integer value Size of executor thread pool when executor is threading or eventlet. fatal_deprecations = False boolean value Enables or disables fatal status of deprecations. filter_function = None string value String representation for an equation that will be used to filter hosts. Only used when the driver filter is set to be used by the Cinder scheduler. `fusionstorageagent = ` string value Fusionstorage agent ip addr range glance_api_insecure = False boolean value Allow to perform insecure SSL (https) requests to glance (https will be used but cert validation will not be performed). glance_api_servers = None list value A list of the URLs of glance API servers available to cinder ([http[s]://][hostname|ip]:port). If protocol is not specified it defaults to http. glance_api_ssl_compression = False boolean value Enables or disables negotiation of SSL layer compression. In some cases disabling compression can improve data throughput, such as when high network bandwidth is available and you use compressed image formats like qcow2. glance_ca_certificates_file = None string value Location of ca certificates file to use for glance client requests. glance_catalog_info = image:glance:publicURL string value Info to match when looking for glance in the service catalog. Format is: separated values of the form: <service_type>:<service_name>:<endpoint_type> - Only used if glance_api_servers are not provided. glance_core_properties = ['checksum', 'container_format', 'disk_format', 'image_name', 'image_id', 'min_disk', 'min_ram', 'name', 'size'] list value Default core properties of image glance_num_retries = 0 integer value Number retries when downloading an image from glance glance_request_timeout = None integer value http/https timeout value for glance operations. If no value (None) is supplied here, the glanceclient default value is used. glusterfs_backup_mount_point = USDstate_path/backup_mount string value Base dir containing mount point for gluster share. glusterfs_backup_share = None string value GlusterFS share in <hostname|ipv4addr|ipv6addr>:<gluster_vol_name> format. Eg: 1.2.3.4:backup_vol goodness_function = None string value String representation for an equation that will be used to determine the goodness of a host. Only used when using the goodness weigher is set to be used by the Cinder scheduler. graceful_shutdown_timeout = 60 integer value Specify a timeout after which a gracefully shutdown server will exit. Zero value means endless wait. group_api_class = cinder.group.api.API string value The full class name of the group API class host = <based on operating system> host address value Name of this node. This can be an opaque identifier. It is not necessarily a host name, FQDN, or IP address. 
iet_conf = /etc/iet/ietd.conf string value IET configuration file image_compress_on_upload = True boolean value When possible, compress images uploaded to the image service image_conversion_dir = USDstate_path/conversion string value Directory used for temporary storage during image conversion image_upload_use_cinder_backend = False boolean value If set to True, upload-to-image in raw format will create a cloned volume and register its location to the image service, instead of uploading the volume content. The cinder backend and locations support must be enabled in the image service. image_upload_use_internal_tenant = False boolean value If set to True, the image volume created by upload-to-image will be placed in the internal tenant. Otherwise, the image volume is created in the current context's tenant. image_volume_cache_enabled = False boolean value Enable the image volume cache for this backend. image_volume_cache_max_count = 0 integer value Max number of entries allowed in the image volume cache. 0 ⇒ unlimited. image_volume_cache_max_size_gb = 0 integer value Max size of the image volume cache for this backend in GB. 0 ⇒ unlimited. infortrend_cli_cache = False boolean value The Infortrend CLI cache. While set True, the RAID status report will use cache stored in the CLI. Never enable this unless the RAID is managed only by Openstack and only by one infortrend cinder-volume backend. Otherwise, CLI might report out-dated status to cinder and thus there might be some race condition among all backend/CLIs. infortrend_cli_max_retries = 5 integer value The maximum retry times if a command fails. infortrend_cli_path = /opt/bin/Infortrend/raidcmd_ESDS10.jar string value The Infortrend CLI absolute path. infortrend_cli_timeout = 60 integer value The timeout for CLI in seconds. infortrend_iqn_prefix = iqn.2002-10.com.infortrend string value Infortrend iqn prefix for iSCSI. `infortrend_pools_name = ` list value The Infortrend logical volumes name list. It is separated with comma. `infortrend_slots_a_channels_id = ` list value Infortrend raid channel ID list on Slot A for OpenStack usage. It is separated with comma. `infortrend_slots_b_channels_id = ` list value Infortrend raid channel ID list on Slot B for OpenStack usage. It is separated with comma. init_host_max_objects_retrieval = 0 integer value Max number of volumes and snapshots to be retrieved per batch during volume manager host initialization. Query results will be obtained in batches from the database and not in one shot to avoid extreme memory usage. Set 0 to turn off this functionality. `instance_format = [instance: %(uuid)s] ` string value The format for an instance that is passed with the log message. `instance_uuid_format = [instance: %(uuid)s] ` string value The format for an instance UUID that is passed with the log message. instorage_mcs_allow_tenant_qos = False boolean value Allow tenants to specify QOS on create instorage_mcs_iscsi_chap_enabled = True boolean value Configure CHAP authentication for iSCSI connections (Default: Enabled) instorage_mcs_localcopy_rate = 50 integer value Specifies the InStorage LocalCopy copy rate to be used when creating a full volume copy. The default is rate is 50, and the valid rates are 1-100. instorage_mcs_localcopy_timeout = 120 integer value Maximum number of seconds to wait for LocalCopy to be prepared. 
instorage_mcs_vol_autoexpand = True boolean value Storage system autoexpand parameter for volumes (True/False) instorage_mcs_vol_compression = False boolean value Storage system compression option for volumes instorage_mcs_vol_grainsize = 256 integer value Storage system grain size parameter for volumes (32/64/128/256) instorage_mcs_vol_intier = True boolean value Enable InTier for volumes instorage_mcs_vol_iogrp = 0 string value The I/O group in which to allocate volumes. It can be a comma-separated list in which case the driver will select an io_group based on least number of volumes associated with the io_group. instorage_mcs_vol_rsize = 2 integer value Storage system space-efficiency parameter for volumes (percentage) instorage_mcs_vol_warning = 0 integer value Storage system threshold for volume capacity warnings (percentage) instorage_mcs_volpool_name = ['volpool'] list value Comma separated list of storage system storage pools for volumes. instorage_san_secondary_ip = None string value Specifies secondary management IP or hostname to be used if san_ip is invalid or becomes inaccessible. iscsi_iotype = fileio string value Sets the behavior of the iSCSI target to either perform blockio or fileio optionally, auto can be set and Cinder will autodetect type of backing device iscsi_secondary_ip_addresses = [] list value The list of secondary IP addresses of the iSCSI daemon `iscsi_target_flags = ` string value Sets the target-specific flags for the iSCSI target. Only used for tgtadm to specify backing device flags using bsoflags option. The specified string is passed as is to the underlying tool. iscsi_write_cache = on string value Sets the behavior of the iSCSI target to either perform write-back(on) or write-through(off). This parameter is valid if target_helper is set to tgtadm. iser_helper = tgtadm string value The name of the iSER target user-land tool to use iser_ip_address = USDmy_ip string value The IP address that the iSER daemon is listening on iser_port = 3260 port value The port that the iSER daemon is listening on iser_target_prefix = iqn.2010-10.org.openstack: string value Prefix for iSER volumes java_path = /usr/bin/java string value The Java absolute path. keystone_catalog_info = identity:Identity Service:publicURL string value Info to match when looking for keystone in the service catalog. Format is: separated values of the form: <service_type>:<service_name>:<endpoint_type> - Only used if backup_swift_auth_url is unset log-config-append = None string value The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging configuration files, see the Python logging module documentation. Note that when logging configuration files are used then all logging configuration is set in the configuration file and other logging configuration options are ignored (for example, log-date-format). log-date-format = %Y-%m-%d %H:%M:%S string value Defines the format string for %%(asctime)s in log records. Default: %(default)s . This option is ignored if log_config_append is set. log-dir = None string value (Optional) The base directory used for relative log_file paths. This option is ignored if log_config_append is set. log-file = None string value (Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This option is ignored if log_config_append is set. 
log_options = True boolean value Enables or disables logging values of all registered options when starting a service (at DEBUG level). log_rotate_interval = 1 integer value The amount of time before the log files are rotated. This option is ignored unless log_rotation_type is set to "interval". log_rotate_interval_type = days string value Rotation interval type. The time of the last file change (or the time when the service was started) is used when scheduling the next rotation. log_rotation_type = none string value Log rotation type. logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s string value Format string to use for log messages with context. Used by oslo_log.formatters.ContextFormatter logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d string value Additional data to append to log message when logging level for the message is DEBUG. Used by oslo_log.formatters.ContextFormatter logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s string value Format string to use for log messages when context is undefined. Used by oslo_log.formatters.ContextFormatter logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s string value Prefix each line of exception output with this format. Used by oslo_log.formatters.ContextFormatter logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s string value Defines the format string for %(user_identity)s that is used in logging_context_format_string. Used by oslo_log.formatters.ContextFormatter manager_ips = {} dict value This option is used to support FSA mounting across different nodes. The parameter takes the standard dict config form, manager_ips = host1:ip1, host2:ip2... max_age = 0 integer value Number of seconds between subsequent usage refreshes max_header_line = 16384 integer value Maximum line size of message headers to be accepted. max_header_line may need to be increased when using large tokens (typically those generated when keystone is configured to use PKI tokens with big service catalogs). max_logfile_count = 30 integer value Maximum number of rotated log files. max_logfile_size_mb = 200 integer value Log file maximum size in MB. This option is ignored if "log_rotation_type" is not set to "size". max_over_subscription_ratio = 20.0 string value Representation of the over subscription ratio when thin provisioning is enabled. Default ratio is 20.0, meaning provisioned capacity can be 20 times of the total physical capacity. If the ratio is 10.5, it means provisioned capacity can be 10.5 times of the total physical capacity. A ratio of 1.0 means provisioned capacity cannot exceed the total physical capacity. If ratio is auto, Cinder will automatically calculate the ratio based on the provisioned capacity and the used space. If not set to auto, the ratio has to be a minimum of 1.0. message_reap_interval = 86400 integer value Interval between periodic task runs to clean expired messages in seconds. message_ttl = 2592000 integer value Message minimum life in seconds.
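As an illustration of the log rotation options above, a size-based rotation setup might look like the following sketch; the sizes and counts are example values only:

    [DEFAULT]
    # Rotate logs by size; max_logfile_size_mb is only honored when
    # log_rotation_type is set to "size".
    log_rotation_type = size
    max_logfile_size_mb = 100
    max_logfile_count = 10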
migration_create_volume_timeout_secs = 300 integer value Timeout for creating the volume to migrate to when performing volume migration (seconds) monkey_patch = False boolean value Enable monkey patching monkey_patch_modules = [] list value List of modules/decorators to monkey patch my_ip = <based on operating system> host address value IP address of this host no_snapshot_gb_quota = False boolean value Whether snapshots count against gigabyte quota num_iser_scan_tries = 3 integer value The maximum number of times to rescan iSER target to find volume num_shell_tries = 3 integer value Number of times to attempt to run flakey shell commands num_volume_device_scan_tries = 3 integer value The maximum number of times to rescan targets to find volume nvmet_ns_id = 10 integer value The namespace id associated with the subsystem that will be created with the path for the LVM volume. nvmet_port_id = 1 port value The port that the NVMe target is listening on. osapi_max_limit = 1000 integer value The maximum number of items that a collection resource returns in a single response osapi_volume_ext_list = [] list value Specify list of extensions to load when using osapi_volume_extension option with cinder.api.contrib.select_extensions osapi_volume_extension = ['cinder.api.contrib.standard_extensions'] multi valued osapi volume extension to load osapi_volume_listen = 0.0.0.0 string value IP address on which OpenStack Volume API listens osapi_volume_listen_port = 8776 port value Port on which OpenStack Volume API listens osapi_volume_use_ssl = False boolean value Wraps the socket in a SSL context if True is set. A certificate file and key file must be specified. osapi_volume_workers = None integer value Number of workers for OpenStack Volume API service. The default is equal to the number of CPUs available. per_volume_size_limit = -1 integer value Max size allowed per volume, in gigabytes periodic_fuzzy_delay = 60 integer value Range, in seconds, to randomly delay when starting the periodic task scheduler to reduce stampeding. (Disable by setting to 0) periodic_interval = 60 integer value Interval, in seconds, between running periodic tasks pool_id_filter = [] list value Pool id permit to use pool_type = default string value Pool type, like sata-2copy public_endpoint = None string value Public url to use for versions endpoint. The default is None, which will use the request's host_url attribute to populate the URL base. If Cinder is operating behind a proxy, you will want to change this to represent the proxy's URL. publish_errors = False boolean value Enables or disables publication of error events. quota_backup_gigabytes = 1000 integer value Total amount of storage, in gigabytes, allowed for backups per project quota_backups = 10 integer value Number of volume backups allowed per project quota_consistencygroups = 10 integer value Number of consistencygroups allowed per project quota_driver = cinder.quota.DbQuotaDriver string value Default driver to use for quota checks quota_gigabytes = 1000 integer value Total amount of storage, in gigabytes, allowed for volumes and snapshots per project quota_groups = 10 integer value Number of groups allowed per project quota_snapshots = 10 integer value Number of volume snapshots allowed per project quota_volumes = 10 integer value Number of volumes allowed per project rate_limit_burst = 0 integer value Maximum number of logged messages per rate_limit_interval. 
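For example, the per-project quota options above could be combined as follows; the numbers are illustrative assumptions, not recommendations:

    [DEFAULT]
    # Per-project defaults enforced by the configured quota driver.
    quota_volumes = 20
    quota_snapshots = 20
    quota_gigabytes = 2000
    per_volume_size_limit = 500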
rate_limit_except_level = CRITICAL string value Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or empty string. Logs with a level greater than or equal to rate_limit_except_level are not filtered. An empty string means that all levels are filtered. rate_limit_interval = 0 integer value Interval, in seconds, of log rate limiting. reinit_driver_count = 3 integer value Maximum number of times to reinitialize the driver if volume initialization fails. The retry interval backs off exponentially and will be 1s, 2s, 4s, and so on. replication_device = None dict value Multi opt of dictionaries to represent a replication target device. This option may be specified multiple times in a single config section to specify multiple replication target devices. Each entry takes the standard dict config form: replication_device = target_device_id:<required>,key1:value1,key2:value2... report_discard_supported = False boolean value Report to clients of Cinder that the backend supports discard (aka. trim/unmap). This will not actually change the behavior of the backend or the client directly, it will only notify that it can be used. report_interval = 10 integer value Interval, in seconds, between nodes reporting state to datastore reservation_clean_interval = USDreservation_expire integer value Interval between periodic task runs to clean expired reservations in seconds. reservation_expire = 86400 integer value Number of seconds until a reservation expires reserved_percentage = 0 integer value The percentage of backend capacity that is reserved resource_query_filters_file = /etc/cinder/resource_filters.json string value JSON file indicating user-visible filter parameters for list queries. restore_discard_excess_bytes = True boolean value If True, always discard excess bytes when restoring volumes i.e. pad with zeroes. rootwrap_config = /etc/cinder/rootwrap.conf string value Path to the rootwrap configuration file to use for running commands as root rpc_conn_pool_size = 30 integer value Size of RPC connection pool. rpc_response_timeout = 60 integer value Seconds to wait for a response from a call. run_external_periodic_tasks = True boolean value Some periodic tasks can be run in a separate process. Should we run them here? scheduler_default_filters = ['AvailabilityZoneFilter', 'CapacityFilter', 'CapabilitiesFilter'] list value Which filter class names to use for filtering hosts when not specified in the request. scheduler_default_weighers = ['CapacityWeigher'] list value Which weigher class names to use for weighing hosts. scheduler_driver = cinder.scheduler.filter_scheduler.FilterScheduler string value Default scheduler driver to use scheduler_driver_init_wait_time = 60 integer value Maximum time in seconds to wait for the driver to report as ready scheduler_host_manager = cinder.scheduler.host_manager.HostManager string value The scheduler host manager class to use `scheduler_json_config_location = ` string value Absolute path to scheduler configuration JSON file. scheduler_manager = cinder.scheduler.manager.SchedulerManager string value Full class name for the Manager for scheduler scheduler_max_attempts = 3 integer value Maximum number of attempts to schedule a volume scheduler_weight_handler = cinder.scheduler.weights.OrderedHostWeightHandler string value Which handler to use for selecting the host/pool after weighing scst_target_driver = iscsi string value SCST target implementation can choose from multiple SCST target drivers.
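The scheduler options above can also be set explicitly. The following sketch simply restates the documented defaults in cinder.conf syntax and is not a tuning recommendation:

    [DEFAULT]
    # Filter hosts by availability zone, capacity and capabilities,
    # then weigh the remaining hosts by free capacity.
    scheduler_default_filters = AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter
    scheduler_default_weighers = CapacityWeigher
    scheduler_max_attempts = 3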
scst_target_iqn_name = None string value Certain ISCSI targets have predefined target names, SCST target driver uses this name. service_down_time = 60 integer value Maximum time since last check-in for a service to be considered up snapshot_name_template = snapshot-%s string value Template string to be used to generate snapshot names snapshot_same_host = True boolean value Create volume from snapshot at the host where snapshot resides split_loggers = False boolean value Log requests to multiple loggers. ssh_hosts_key_file = USDstate_path/ssh_known_hosts string value File containing SSH host keys for the systems with which Cinder needs to communicate. OPTIONAL: Default=USDstate_path/ssh_known_hosts state_path = /var/lib/cinder string value Top-level directory for maintaining cinder's state storage_availability_zone = nova string value Availability zone of this node. Can be overridden per volume backend with the option "backend_availability_zone". storage_protocol = iscsi string value Protocol for transferring data between host and storage back-end. storpool_replication = 3 integer value The default StorPool chain replication value. Used when creating a volume with no specified type if storpool_template is not set. Also used for calculating the apparent free space reported in the stats. storpool_template = None string value The StorPool template for volumes with no type. strict_ssh_host_key_policy = False boolean value Option to enable strict host key checking. When set to "True" Cinder will only connect to systems with a host key present in the configured "ssh_hosts_key_file". When set to "False" the host key will be saved upon first connection and used for subsequent connections. Default=False swift_catalog_info = object-store:swift:publicURL string value Info to match when looking for swift in the service catalog. Format is: separated values of the form: <service_type>:<service_name>:<endpoint_type> - Only used if backup_swift_url is unset syslog-log-facility = LOG_USER string value Syslog facility to receive log lines. This option is ignored if log_config_append is set. target_helper = tgtadm string value Target user-land tool to use. tgtadm is default, use lioadm for LIO iSCSI support, scstadmin for SCST target support, ietadm for iSCSI Enterprise Target, iscsictl for Chelsio iSCSI Target, nvmet for NVMEoF support, spdk-nvmeof for SPDK NVMe-oF, or fake for testing. target_ip_address = USDmy_ip string value The IP address that the iSCSI daemon is listening on target_port = 3260 port value The port that the iSCSI daemon is listening on target_prefix = iqn.2010-10.org.openstack: string value Prefix for iSCSI volumes target_protocol = iscsi string value Determines the target protocol for new volumes, created with tgtadm, lioadm and nvmet target helpers. In order to enable RDMA, this parameter should be set with the value "iser". The supported iSCSI protocol values are "iscsi" and "iser", in case of nvmet target set to "nvmet_rdma". tcp_keepalive = True boolean value Sets the value of TCP_KEEPALIVE (True/False) for each server socket. tcp_keepalive_count = None integer value Sets the value of TCP_KEEPCNT for each server socket. Not supported on OS X. tcp_keepalive_interval = None integer value Sets the value of TCP_KEEPINTVL in seconds for each server socket. Not supported on OS X. tcp_keepidle = 600 integer value Sets the value of TCP_KEEPIDLE in seconds for each server socket. Not supported on OS X. 
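As a hedged example of the target_* options above, a backend exporting volumes with the LIO helper might be configured as follows; the section name and IP address are placeholders:

    [lvm-1]
    # Export volumes over iSCSI using the LIO target helper.
    target_helper = lioadm
    target_protocol = iscsi
    target_ip_address = 192.0.2.15
    target_port = 3260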
trace_flags = None list value List of options that control which trace info is written to the DEBUG log level to assist developers. Valid values are method and api. transfer_api_class = cinder.transfer.api.API string value The full class name of the volume transfer API class transport_url = rabbit:// string value The network address and optional user credentials for connecting to the messaging backend, in URL format. The expected format is: driver://[user:pass@]host:port[,[userN:passN@]hostN:portN]/virtual_host?query Example: rabbit://rabbitmq:[email protected]:5672// For full details on the fields in the URL see the documentation of oslo_messaging.TransportURL at https://docs.openstack.org/oslo.messaging/latest/reference/transport.html until_refresh = 0 integer value Count of reservations until usage is refreshed use-journal = False boolean value Enable journald for logging. If running in a systemd environment you may wish to enable journal support. Doing so will use the journal native protocol which includes structured metadata in addition to log messages.This option is ignored if log_config_append is set. use-json = False boolean value Use JSON formatting for logging. This option is ignored if log_config_append is set. use-syslog = False boolean value Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if log_config_append is set. use_chap_auth = False boolean value Option to enable/disable CHAP authentication for targets. use_default_quota_class = True boolean value Enables or disables use of default quota class with default quota. use_eventlog = False boolean value Log output to Windows Event Log. use_forwarded_for = False boolean value Treat X-Forwarded-For as the canonical remote address. Only enable this if you have a sanitizing proxy. use_multipath_for_image_xfer = False boolean value Do we attach/detach volumes in cinder using multipath for volume to image and image to volume transfers? use_stderr = False boolean value Log output to standard error. This option is ignored if log_config_append is set. verify_glance_signatures = enabled string value Enable image signature verification. Cinder uses the image signature metadata from Glance and verifies the signature of a signed image while downloading that image. There are two options here. enabled : verify when image has signature metadata. disabled : verification is turned off. If the image signature cannot be verified or if the image signature metadata is incomplete when required, then Cinder will not create the volume and update it into an error state. This provides end users with stronger assurances of the integrity of the image data they are using to create volumes. volume_api_class = cinder.volume.api.API string value The full class name of the volume API class to use volume_backend_name = None string value The backend name for a given driver implementation volume_clear = zero string value Method used to wipe old volumes volume_clear_ionice = None string value The flag to pass to ionice to alter the i/o priority of the process used to zero a volume after deletion, for example "-c3" for idle only priority. volume_clear_size = 0 integer value Size in MiB to wipe at start of old volumes. 1024 MiB at max. 0 ⇒ all volume_copy_blkio_cgroup_name = cinder-volume-copy string value The blkio cgroup name to be used to limit bandwidth of volume copy volume_copy_bps_limit = 0 integer value The upper limit of bandwidth of volume copy. 
0 ⇒ unlimited volume_dd_blocksize = 1M string value The default block size used when copying/clearing volumes volume_manager = cinder.volume.manager.VolumeManager string value Full class name for the Manager for volume volume_name_template = volume-%s string value Template string to be used to generate volume names volume_number_multiplier = -1.0 floating point value Multiplier used for weighing volume number. Negative numbers mean to spread vs stack. volume_service_inithost_offload = False boolean value Offload pending volume delete during volume service startup volume_transfer_key_length = 16 integer value The number of characters in the autogenerated auth key. volume_transfer_salt_length = 8 integer value The number of characters in the salt. volume_usage_audit_period = month string value Time period for which to generate volume usages. The options are hour, day, month, or year. volumes_dir = USDstate_path/volumes string value Volume configuration file storage directory vrts_lun_sparse = True boolean value Create sparse LUN. vrts_target_config = /etc/cinder/vrts_target.xml string value VA config file. watch-log-file = False boolean value Uses a logging handler designed to watch the file system. When the log file is moved or removed, this handler will open a new log file with the specified path instantaneously. It makes sense only if log_file option is specified and Linux platform is used. This option is ignored if log_config_append is set. wsgi_default_pool_size = 100 integer value Size of the pool of greenthreads used by wsgi wsgi_keep_alive = True boolean value If False, closes the client socket connection explicitly. wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f string value A Python format string that is used as the template to generate log lines. The following values can be formatted into it: client_ip, date_time, request_line, status_code, body_length, wall_seconds. zoning_mode = None string value FC Zoning mode configured; only fabric is supported now.

2.1.2. backend

The following table outlines the options available under the [backend] group in the /etc/cinder/cinder.conf file.

Table 2.1. backend

Configuration option = Default value Type Description

backend_host = None string value Backend override of host value.

2.1.3. backend_defaults

The following table outlines the options available under the [backend_defaults] group in the /etc/cinder/cinder.conf file.

Table 2.2. backend_defaults

Configuration option = Default value Type Description

auto_calc_max_oversubscription_ratio = False boolean value K2 driver will calculate max_oversubscription_ratio on setting this option as True. backend_availability_zone = None string value Availability zone for this volume backend. If not set, the storage_availability_zone option value is used as the default for all backends. backend_native_threads_pool_size = 20 integer value Size of the native threads pool for the backend. Increase for backends that heavily rely on this, like the RBD driver. chap = disabled string value CHAP authentication mode, effective only for iscsi (disabled|enabled) `chap_password = ` string value Password for specified CHAP account name. `chap_username = ` string value CHAP user name. check_max_pool_luns_threshold = False boolean value DEPRECATED: Report free_capacity_gb as 0 when the limit to maximum number of pool LUNs is reached. By default, the value is False.
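The [backend_defaults] group provides values that individual backend sections inherit unless they override them. A minimal sketch, assuming a hypothetical backend section named [rbd-1] and illustrative availability zone names:

    [backend_defaults]
    # Defaults shared by every enabled backend section.
    backend_availability_zone = nova
    image_volume_cache_enabled = True

    [rbd-1]
    # Per-backend override of the shared default.
    backend_availability_zone = az2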
chiscsi_conf = /etc/chelsio-iscsi/chiscsi.conf string value Chiscsi (CXT) global defaults configuration file cinder_eternus_config_file = /etc/cinder/cinder_fujitsu_eternus_dx.xml string value config file for cinder eternus_dx volume driver cinder_huawei_conf_file = /etc/cinder/cinder_huawei_conf.xml string value The configuration file for the Cinder Huawei driver. connection_type = iscsi string value Connection type to the IBM Storage Array cycle_period_seconds = 300 integer value This defines an optional cycle period that applies to Global Mirror relationships with a cycling mode of multi. A Global Mirror relationship using the multi cycling_mode performs a complete cycle at most once each period. The default is 300 seconds, and the valid seconds are 60-86400. datera_503_interval = 5 integer value Interval between 503 retries datera_503_timeout = 120 integer value Timeout for HTTP 503 retry messages datera_api_port = 7717 string value Datera API port. datera_api_version = 2 string value Datera API version. datera_debug = False boolean value True to set function arg and return logging datera_debug_replica_count_override = False boolean value ONLY FOR DEBUG/TESTING PURPOSES True to set replica_count to 1 datera_disable_profiler = False boolean value Set to True to disable profiling in the Datera driver datera_tenant_id = None string value If set to Map --> OpenStack project ID will be mapped implicitly to Datera tenant ID If set to None --> Datera tenant ID will not be used during volume provisioning If set to anything else --> Datera tenant ID will be the provided value default_timeout = 31536000 integer value Default timeout for CLI operations in minutes. For example, LUN migration is a typical long running operation, which depends on the LUN size and the load of the array. An upper bound in the specific deployment can be set to avoid unnecessary long wait. By default, it is 365 days long. deferred_deletion_delay = 0 integer value Time delay in seconds before a volume is eligible for permanent removal after being tagged for deferred deletion. deferred_deletion_purge_interval = 60 integer value Number of seconds between runs of the periodic task to purge volumes tagged for deletion. dell_api_async_rest_timeout = 15 integer value Dell SC API async call default timeout in seconds. dell_api_sync_rest_timeout = 30 integer value Dell SC API sync call default timeout in seconds. dell_sc_api_port = 3033 port value Dell API port dell_sc_server_folder = openstack string value Name of the server folder to use on the Storage Center dell_sc_ssn = 64702 integer value Storage Center System Serial Number dell_sc_verify_cert = False boolean value Enable HTTPS SC certificate verification dell_sc_volume_folder = openstack string value Name of the volume folder to use on the Storage Center dell_server_os = Red Hat Linux 6.x string value Server OS type to use when creating a new server on the Storage Center. destroy_empty_storage_group = False boolean value To destroy storage group when the last LUN is removed from it. By default, the value is False. disable_discovery = False boolean value Disabling iSCSI discovery (sendtargets) for multipath connections on K2 driver. `dpl_pool = ` string value DPL pool uuid in which DPL volumes are stored. dpl_port = 8357 port value DPL port number. driver_client_cert = None string value The path to the client certificate for verification, if the driver supports it. 
driver_client_cert_key = None string value The path to the client certificate key for verification, if the driver supports it. driver_data_namespace = None string value Namespace for driver private data values to be saved in. driver_ssl_cert_path = None string value Can be used to specify a non default path to a CA_BUNDLE file or directory with certificates of trusted CAs, which will be used to validate the backend driver_ssl_cert_verify = False boolean value If set to True the http client will validate the SSL certificate of the backend endpoint. driver_use_ssl = False boolean value Tell driver to use SSL for connection to backend storage if the driver supports it. `ds8k_devadd_unitadd_mapping = ` string value Mapping between IODevice address and unit address. ds8k_host_type = auto string value Set to zLinux if your OpenStack version is prior to Liberty and you're connecting to zLinux systems. Otherwise set to auto. Valid values for this parameter are: auto , AMDLinuxRHEL , AMDLinuxSuse , AppleOSX , Fujitsu , Hp , HpTru64 , HpVms , LinuxDT , LinuxRF , LinuxRHEL , LinuxSuse , Novell , SGI , SVC , SanFsAIX , SanFsLinux , Sun , VMWare , Win2000 , Win2003 , Win2008 , Win2012 , iLinux , nSeries , pLinux , pSeries , pSeriesPowerswap , zLinux , iSeries . ds8k_ssid_prefix = FF string value Set the first two digits of SSID. enable_deferred_deletion = False boolean value Enable deferred deletion. Upon deletion, volumes are tagged for deletion but will only be removed asynchronously at a later time. enable_unsupported_driver = False boolean value Set this to True when you want to allow an unsupported driver to start. Drivers that haven't maintained a working CI system and testing are marked as unsupported until CI is working again. This also marks a driver as deprecated and may be removed in the release. enforce_multipath_for_image_xfer = False boolean value If this is set to True, attachment of volumes for image transfer will be aborted when multipathd is not running. Otherwise, it will fallback to single path. eqlx_cli_max_retries = 5 integer value Maximum retry count for reconnection. Default is 5. eqlx_group_name = group-0 string value Group name to use for creating volumes. Defaults to "group-0". eqlx_pool = default string value Pool in which volumes will be created. Defaults to "default". excluded_domain_ip = None IP address value DEPRECATED: Fault Domain IP to be excluded from iSCSI returns. excluded_domain_ips = [] list value Comma separated Fault Domain IPs to be excluded from iSCSI returns. expiry_thres_minutes = 720 integer value This option specifies the threshold for last access time for images in the NFS image cache. When a cache cleaning cycle begins, images in the cache that have not been accessed in the last M minutes, where M is the value of this parameter, will be deleted from the cache to create free space on the NFS share. extra_capabilities = {} string value User defined capabilities, a JSON formatted string specifying key/value pairs. The key/value pairs can be used by the CapabilitiesFilter to select between backends when requests specify volume types. For example, specifying a service level or the geographical location of a backend, then creating a volume type to allow the user to select by these different properties. filter_function = None string value String representation for an equation that will be used to filter hosts. Only used when the driver filter is set to be used by the Cinder scheduler. 
flashsystem_connection_protocol = FC string value Connection protocol should be FC. (Default is FC.) flashsystem_iscsi_portid = 0 integer value Default iSCSI Port ID of FlashSystem. (Default port is 0.) flashsystem_multihostmap_enabled = True boolean value Allows vdisk to multi host mapping. (Default is True) force_delete_lun_in_storagegroup = True boolean value Delete a LUN even if it is in Storage Groups. goodness_function = None string value String representation for an equation that will be used to determine the goodness of a host. Only used when using the goodness weigher is set to be used by the Cinder scheduler. gpfs_hosts = [] list value Comma-separated list of IP address or hostnames of GPFS nodes. gpfs_hosts_key_file = USDstate_path/ssh_known_hosts string value File containing SSH host keys for the gpfs nodes with which driver needs to communicate. Default=USDstate_path/ssh_known_hosts gpfs_images_dir = None string value Specifies the path of the Image service repository in GPFS. Leave undefined if not storing images in GPFS. gpfs_images_share_mode = None string value Specifies the type of image copy to be used. Set this when the Image service repository also uses GPFS so that image files can be transferred efficiently from the Image service to the Block Storage service. There are two valid values: "copy" specifies that a full copy of the image is made; "copy_on_write" specifies that copy-on-write optimization strategy is used and unmodified blocks of the image file are shared efficiently. gpfs_max_clone_depth = 0 integer value Specifies an upper limit on the number of indirections required to reach a specific block due to snapshots or clones. A lengthy chain of copy-on-write snapshots or clones can have a negative impact on performance, but improves space utilization. 0 indicates unlimited clone depth. gpfs_mount_point_base = None string value Specifies the path of the GPFS directory where Block Storage volume and snapshot files are stored. `gpfs_private_key = ` string value Filename of private key to use for SSH authentication. gpfs_sparse_volumes = True boolean value Specifies that volumes are created as sparse files which initially consume no space. If set to False, the volume is created as a fully allocated file, in which case, creation may take a significantly longer time. gpfs_ssh_port = 22 port value SSH port to use. gpfs_storage_pool = system string value Specifies the storage pool that volumes are assigned to. By default, the system storage pool is used. gpfs_strict_host_key_policy = False boolean value Option to enable strict gpfs host key checking while connecting to gpfs nodes. Default=False gpfs_user_login = root string value Username for GPFS nodes. `gpfs_user_password = ` string value Password for GPFS node user. `hpe3par_api_url = ` string value WSAPI Server URL. This setting applies to both 3PAR and Primera. Example 1: for 3PAR, URL is: https://<3par ip>:8080/api/v1 Example 2: for Primera, URL is: https://<primera ip>:443/api/v1 hpe3par_cpg = ['OpenStack'] list value List of the 3PAR / Primera CPG(s) to use for volume creation `hpe3par_cpg_snap = ` string value The 3PAR / Primera CPG to use for snapshots of volumes. If empty the userCPG will be used. hpe3par_debug = False boolean value Enable HTTP debugging to 3PAR / Primera hpe3par_iscsi_chap_enabled = False boolean value Enable CHAP authentication for iSCSI connections. hpe3par_iscsi_ips = [] list value List of target iSCSI addresses to use. 
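The filter_function and goodness_function options above take string equations that the scheduler's driver filter and goodness weigher evaluate per request. A hedged sketch; the section name and thresholds are arbitrary placeholders:

    [backend-1]
    # Reject requests larger than 500 GB on this backend...
    filter_function = "volume.size < 500"
    # ...and give it full goodness only for requests under 100 GB.
    goodness_function = "(volume.size < 100) * 100"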
`hpe3par_password = ` string value 3PAR / Primera password for the user specified in hpe3par_username `hpe3par_snapshot_expiration = ` string value The time in hours when a snapshot expires and is deleted. This must be larger than the value of hpe3par_snapshot_retention. `hpe3par_snapshot_retention = ` string value The time in hours to retain a snapshot. You can't delete it before this expires. `hpe3par_target_nsp = ` string value The nsp of 3PAR backend to be used when: (1) multipath is not enabled in cinder.conf. (2) Fibre Channel Zone Manager is not used. (3) the 3PAR backend is prezoned with this specific nsp only. For example, if the nsp is 2 1 2, the format of the option's value is 2:1:2. `hpe3par_username = ` string value 3PAR / Primera username with the edit role hpelefthand_api_url = None uri value HPE LeftHand WSAPI Server URL like https://<LeftHand ip>:8081/lhos hpelefthand_clustername = None string value HPE LeftHand cluster name hpelefthand_debug = False boolean value Enable HTTP debugging to LeftHand hpelefthand_iscsi_chap_enabled = False boolean value Configure CHAP authentication for iSCSI connections (Default: Disabled) hpelefthand_password = None string value HPE LeftHand Super user password hpelefthand_ssh_port = 16022 port value Port number of SSH service. hpelefthand_username = None string value HPE LeftHand Super user username hpmsa_api_protocol = https string value HPMSA API interface protocol. hpmsa_iscsi_ips = [] list value List of comma-separated target iSCSI IP addresses. hpmsa_pool_name = A string value Pool or Vdisk name to use for volume creation. hpmsa_pool_type = virtual string value linear (for Vdisk) or virtual (for Pool). hpmsa_verify_certificate = False boolean value Whether to verify HPMSA array SSL certificate. hpmsa_verify_certificate_path = None string value HPMSA array SSL certificate path. hypermetro_devices = None string value The remote device hypermetro will use. iet_conf = /etc/iet/ietd.conf string value IET configuration file ignore_pool_full_threshold = False boolean value Force LUN creation even if the full threshold of pool is reached. By default, the value is False. image_upload_use_cinder_backend = False boolean value If set to True, upload-to-image in raw format will create a cloned volume and register its location to the image service, instead of uploading the volume content. The cinder backend and locations support must be enabled in the image service. image_upload_use_internal_tenant = False boolean value If set to True, the image volume created by upload-to-image will be placed in the internal tenant. Otherwise, the image volume is created in the current context's tenant. image_volume_cache_enabled = False boolean value Enable the image volume cache for this backend. image_volume_cache_max_count = 0 integer value Max number of entries allowed in the image volume cache. 0 ⇒ unlimited. image_volume_cache_max_size_gb = 0 integer value Max size of the image volume cache for this backend in GB. 0 ⇒ unlimited. infinidat_iscsi_netspaces = [] list value List of names of network spaces to use for iSCSI connectivity infinidat_pool_name = None string value Name of the pool from which volumes are allocated infinidat_storage_protocol = fc string value Protocol for transferring data between host and storage back-end. infinidat_use_compression = False boolean value Specifies whether to turn on compression for newly created volumes. initiator_auto_deregistration = False boolean value Automatically deregister initiators after the related storage group is destroyed.
By default, the value is False. initiator_auto_registration = False boolean value Automatically register initiators. By default, the value is False. initiator_check = False boolean value Use this value to enable the initiator_check. interval = 3 integer value Use this value to specify length of the interval in seconds. io_port_list = None list value Comma separated iSCSI or FC ports to be used in Nova or Cinder. iscsi_initiators = None string value Mapping between hostname and its iSCSI initiator IP addresses. iscsi_iotype = fileio string value Sets the behavior of the iSCSI target to either perform blockio or fileio optionally, auto can be set and Cinder will autodetect type of backing device iscsi_secondary_ip_addresses = [] list value The list of secondary IP addresses of the iSCSI daemon `iscsi_target_flags = ` string value Sets the target-specific flags for the iSCSI target. Only used for tgtadm to specify backing device flags using bsoflags option. The specified string is passed as is to the underlying tool. iscsi_write_cache = on string value Sets the behavior of the iSCSI target to either perform write-back(on) or write-through(off). This parameter is valid if target_helper is set to tgtadm. iser_helper = tgtadm string value The name of the iSER target user-land tool to use iser_ip_address = USDmy_ip string value The IP address that the iSER daemon is listening on iser_port = 3260 port value The port that the iSER daemon is listening on iser_target_prefix = iqn.2010-10.org.openstack: string value Prefix for iSER volumes lenovo_api_protocol = https string value Lenovo api interface protocol. lenovo_iscsi_ips = [] list value List of comma-separated target iSCSI IP addresses. lenovo_pool_name = A string value Pool or Vdisk name to use for volume creation. lenovo_pool_type = virtual string value linear (for VDisk) or virtual (for Pool). lenovo_verify_certificate = False boolean value Whether to verify Lenovo array SSL certificate. lenovo_verify_certificate_path = None string value Lenovo array SSL certificate path. linstor_controller_diskless = True boolean value True means Cinder node is a diskless LINSTOR node. linstor_default_blocksize = 4096 integer value Default Block size for Image restoration. When using iSCSI transport, this option specifies the block size linstor_default_storage_pool_name = DfltStorPool string value Default Storage Pool name for LINSTOR. linstor_default_uri = linstor://localhost string value Default storage URI for LINSTOR. linstor_default_volume_group_name = drbd-vg string value Default Volume Group name for LINSTOR. Not Cinder Volume. linstor_volume_downsize_factor = 4096 floating point value Default volume downscale size in KiB = 4 MiB. `lss_range_for_cg = ` string value Reserve LSSs for consistency group. lvm_conf_file = /etc/cinder/lvm.conf string value LVM conf file to use for the LVM driver in Cinder; this setting is ignored if the specified file does not exist (You can also specify None to not use a conf file even if one exists). lvm_mirrors = 0 integer value If >0, create LVs with multiple mirrors. Note that this requires lvm_mirrors + 2 PVs with available space lvm_suppress_fd_warnings = False boolean value Suppress leaked file descriptor warnings in LVM commands. lvm_type = auto string value Type of LVM volumes to deploy; (default, thin, or auto). Auto defaults to thin if thin is supported. macrosan_client = None list value Macrosan iscsi_clients list. You can configure multiple clients. 
You can configure it in this format: (host; client_name; sp1_iscsi_port; sp2_iscsi_port), (host; client_name; sp1_iscsi_port; sp2_iscsi_port) Important warning, Client_name has the following requirements: [a-zA-Z0-9.-_:], the maximum number of characters is 31 E.g: (controller1; device1; eth-1:0; eth-2:0), (controller2; device2; eth-1:0/eth-1:1; eth-2:0/eth-2:1), macrosan_client_default = None string value This is the default connection ports' name for iscsi. This default configuration is used when no host related information is obtained.E.g: eth-1:0/eth-1:1; eth-2:0/eth-2:1 macrosan_fc_keep_mapped_ports = True boolean value In the case of an FC connection, the configuration item associated with the port is maintained. macrosan_fc_use_sp_port_nr = 1 integer value The use_sp_port_nr parameter is the number of online FC ports used by the single-ended memory when the FC connection is established in the switch non-all-pass mode. The maximum is 4 macrosan_force_unmap_itl = True boolean value Force disconnect while deleting volume macrosan_log_timing = True boolean value Whether enable log timing macrosan_pool = None string value Pool to use for volume creation macrosan_replication_destination_ports = None list value Slave device macrosan_replication_ipaddrs = None list value MacroSAN replication devices' ip addresses macrosan_replication_password = None string value MacroSAN replication devices' password macrosan_replication_username = None string value MacroSAN replication devices' username macrosan_sdas_ipaddrs = None list value MacroSAN sdas devices' ip addresses macrosan_sdas_password = None string value MacroSAN sdas devices' password macrosan_sdas_username = None string value MacroSAN sdas devices' username macrosan_snapshot_resource_ratio = 1.0 floating point value Set snapshot's resource ratio macrosan_thin_lun_extent_size = 8 integer value Set the thin lun's extent size macrosan_thin_lun_high_watermark = 20 integer value Set the thin lun's high watermark macrosan_thin_lun_low_watermark = 5 integer value Set the thin lun's low watermark `management_ips = ` string value List of Management IP addresses (separated by commas) max_luns_per_storage_group = 255 integer value Default max number of LUNs in a storage group. By default, the value is 255. max_over_subscription_ratio = 20.0 string value Representation of the over subscription ratio when thin provisioning is enabled. Default ratio is 20.0, meaning provisioned capacity can be 20 times of the total physical capacity. If the ratio is 10.5, it means provisioned capacity can be 10.5 times of the total physical capacity. A ratio of 1.0 means provisioned capacity cannot exceed the total physical capacity. If ratio is auto , Cinder will automatically calculate the ratio based on the provisioned capacity and the used space. If not set to auto, the ratio has to be a minimum of 1.0. metro_domain_name = None string value The remote metro device domain name. metro_san_address = None string value The remote metro device request url. metro_san_password = None string value The remote metro device san password. metro_san_user = None string value The remote metro device san user. metro_storage_pools = None string value The remote metro device pool names. `nas_host = ` string value IP address or Hostname of NAS system. nas_login = admin string value User name to connect to NAS system. nas_mount_options = None string value Options used to mount the storage backend file system where Cinder volumes are stored. 
`nas_password = ` string value Password to connect to NAS system. `nas_private_key = ` string value Filename of private key to use for SSH authentication. nas_secure_file_operations = auto string value Allow network-attached storage systems to operate in a secure environment where root level access is not permitted. If set to False, access is as the root user and insecure. If set to True, access is not as root. If set to auto, a check is done to determine if this is a new installation: True is used if so, otherwise False. Default is auto. nas_secure_file_permissions = auto string value Set more secure file permissions on network-attached storage volume files to restrict broad other/world access. If set to False, volumes are created with open permissions. If set to True, volumes are created with permissions for the cinder user and group (660). If set to auto, a check is done to determine if this is a new installation: True is used if so, otherwise False. Default is auto. `nas_share_path = ` string value Path to the share to use for storing Cinder volumes. For example: "/srv/export1" for an NFS server export available at 10.0.5.10:/srv/export1 . nas_ssh_port = 22 port value SSH port to use to connect to NAS system. nas_volume_prov_type = thin string value Provisioning type that will be used when creating volumes. naviseccli_path = None string value Naviseccli Path. netapp_api_trace_pattern = (.*) string value A regular expression to limit the API tracing. This option is honored only if enabling api tracing with the trace_flags option. By default, all APIs will be traced. netapp_copyoffload_tool_path = None string value This option specifies the path of the NetApp copy offload tool binary. Ensure that the binary has execute permissions set which allow the effective user of the cinder-volume process to execute the file. netapp_host_type = None string value This option defines the type of operating system for all initiators that can access a LUN. This information is used when mapping LUNs to individual hosts or groups of hosts. netapp_login = None string value Administrative user account name used to access the storage system or proxy server. netapp_lun_ostype = None string value This option defines the type of operating system that will access a LUN exported from Data ONTAP; it is assigned to the LUN at the time it is created. netapp_lun_space_reservation = enabled string value This option determines if storage space is reserved for LUN allocation. If enabled, LUNs are thick provisioned. If space reservation is disabled, storage space is allocated on demand. netapp_password = None string value Password for the administrative user account specified in the netapp_login option. netapp_pool_name_search_pattern = (.+) string value This option is used to restrict provisioning to the specified pools. Specify the value of this option to be a regular expression which will be applied to the names of objects from the storage backend which represent pools in Cinder. This option is only utilized when the storage protocol is configured to use iSCSI or FC. netapp_replication_aggregate_map = None dict value Multi opt of dictionaries to represent the aggregate mapping between source and destination back ends when using whole back end replication. For every source aggregate associated with a cinder pool (NetApp FlexVol), you would need to specify the destination aggregate on the replication target device. A replication target device is configured with the configuration option replication_device. 
Specify this option as many times as you have replication devices. Each entry takes the standard dict config form: netapp_replication_aggregate_map = backend_id:<name_of_replication_device_section>,src_aggr_name1:dest_aggr_name1,src_aggr_name2:dest_aggr_name2,... netapp_server_hostname = None string value The hostname (or IP address) for the storage system or proxy server. netapp_server_port = None integer value The TCP port to use for communication with the storage system or proxy server. If not specified, Data ONTAP drivers will use 80 for HTTP and 443 for HTTPS. netapp_size_multiplier = 1.2 floating point value The quantity to be multiplied by the requested volume size to ensure enough space is available on the virtual storage server (Vserver) to fulfill the volume creation request. Note: this option is deprecated and will be removed in favor of "reserved_percentage" in the Mitaka release. netapp_snapmirror_quiesce_timeout = 3600 integer value The maximum time in seconds to wait for existing SnapMirror transfers to complete before aborting during a failover. netapp_storage_family = ontap_cluster string value The storage family type used on the storage system; the only valid value is ontap_cluster for using clustered Data ONTAP. netapp_storage_protocol = None string value The storage protocol to be used on the data path with the storage system. netapp_transport_type = http string value The transport protocol used when communicating with the storage system or proxy server. netapp_vserver = None string value This option specifies the virtual storage server (Vserver) name on the storage cluster on which provisioning of block storage volumes should occur. nexenta_blocksize = 4096 integer value Block size for datasets nexenta_chunksize = 32768 integer value NexentaEdge iSCSI LUN object chunk size `nexenta_client_address = ` string value NexentaEdge iSCSI Gateway client address for non-VIP service nexenta_dataset_compression = on string value Compression value for new ZFS folders. nexenta_dataset_dedup = off string value Deduplication value for new ZFS folders. `nexenta_dataset_description = ` string value Human-readable description for the folder. nexenta_encryption = False boolean value Defines whether NexentaEdge iSCSI LUN object has encryption enabled. `nexenta_folder = ` string value A folder where cinder created datasets will reside. nexenta_group_snapshot_template = group-snapshot-%s string value Template string to generate group snapshot name `nexenta_host = ` string value IP address of NexentaStor Appliance nexenta_host_group_prefix = cinder string value Prefix for iSCSI host groups on NexentaStor nexenta_iops_limit = 0 integer value NexentaEdge iSCSI LUN object IOPS limit `nexenta_iscsi_service = ` string value NexentaEdge iSCSI service name nexenta_iscsi_target_host_group = all string value Group of hosts which are allowed to access volumes `nexenta_iscsi_target_portal_groups = ` string value NexentaStor target portal groups nexenta_iscsi_target_portal_port = 3260 integer value Nexenta appliance iSCSI target portal port `nexenta_iscsi_target_portals = ` string value Comma separated list of portals for NexentaStor5, in format of IP1:port1,IP2:port2. Port is optional, default=3260. 
Example: 10.10.10.1:3267,10.10.1.2 nexenta_lu_writebackcache_disabled = False boolean value Postponed write to backing store or not `nexenta_lun_container = ` string value NexentaEdge logical path of bucket for LUNs nexenta_luns_per_target = 100 integer value Amount of LUNs per iSCSI target nexenta_mount_point_base = USDstate_path/mnt string value Base directory that contains NFS share mount points nexenta_nbd_symlinks_dir = /dev/disk/by-path string value NexentaEdge logical path of directory to store symbolic links to NBDs nexenta_nms_cache_volroot = True boolean value If set True cache NexentaStor appliance volroot option value. nexenta_ns5_blocksize = 32 integer value Block size for datasets nexenta_origin_snapshot_template = origin-snapshot-%s string value Template string to generate origin name of clone nexenta_password = nexenta string value Password to connect to NexentaStor management REST API server nexenta_qcow2_volumes = False boolean value Create volumes as QCOW2 files rather than raw files nexenta_replication_count = 3 integer value NexentaEdge iSCSI LUN object replication count. `nexenta_rest_address = ` string value IP address of NexentaStor management REST API endpoint nexenta_rest_backoff_factor = 0.5 floating point value Specifies the backoff factor to apply between connection attempts to NexentaStor management REST API server nexenta_rest_connect_timeout = 30 floating point value Specifies the time limit (in seconds), within which the connection to NexentaStor management REST API server must be established nexenta_rest_password = nexenta string value Password to connect to NexentaEdge. nexenta_rest_port = 0 integer value HTTP(S) port to connect to NexentaStor management REST API server. If it is equal zero, 8443 for HTTPS and 8080 for HTTP is used nexenta_rest_protocol = auto string value Use http or https for NexentaStor management REST API connection (default auto) nexenta_rest_read_timeout = 300 floating point value Specifies the time limit (in seconds), within which NexentaStor management REST API server must send a response nexenta_rest_retry_count = 3 integer value Specifies the number of times to repeat NexentaStor management REST API call in case of connection errors and NexentaStor appliance EBUSY or ENOENT errors nexenta_rest_user = admin string value User name to connect to NexentaEdge. nexenta_rrmgr_compression = 0 integer value Enable stream compression, level 1..9. 1 - gives best speed; 9 - gives best compression. nexenta_rrmgr_connections = 2 integer value Number of TCP connections. nexenta_rrmgr_tcp_buf_size = 4096 integer value TCP Buffer size in KiloBytes. nexenta_shares_config = /etc/cinder/nfs_shares string value File with the list of available nfs shares nexenta_sparse = False boolean value Enables or disables the creation of sparse datasets nexenta_sparsed_volumes = True boolean value Enables or disables the creation of volumes as sparsed files that take no space. If disabled (False), volume is created as a regular file, which takes a long time. 
nexenta_target_group_prefix = cinder string value Prefix for iSCSI target groups on NexentaStor nexenta_target_prefix = iqn.1986-03.com.sun:02:cinder string value iqn prefix for NexentaStor iSCSI targets nexenta_use_https = True boolean value Use HTTP secure protocol for NexentaStor management REST API connections nexenta_user = admin string value User name to connect to NexentaStor management REST API server nexenta_volume = cinder string value NexentaStor pool name that holds all volumes nexenta_volume_group = iscsi string value Volume group for NexentaStor5 iSCSI nfs_mount_attempts = 3 integer value The number of attempts to mount NFS shares before raising an error. At least one attempt will be made to mount an NFS share, regardless of the value specified. nfs_mount_options = None string value Mount options passed to the NFS client. See section of the NFS man page for details. nfs_mount_point_base = USDstate_path/mnt string value Base dir containing mount points for NFS shares. nfs_qcow2_volumes = False boolean value Create volumes as QCOW2 files rather than raw files. nfs_shares_config = /etc/cinder/nfs_shares string value File with the list of available NFS shares. nfs_snapshot_support = False boolean value Enable support for snapshots on the NFS driver. Platforms using libvirt <1.2.7 will encounter issues with this feature. nfs_sparsed_volumes = True boolean value Create volumes as sparsed files which take no space. If set to False volume is created as regular file. In such case volume creation takes a lot of time. nimble_pool_name = default string value Nimble Controller pool name nimble_subnet_label = * string value Nimble Subnet Label nimble_verify_cert_path = None string value Path to Nimble Array SSL certificate nimble_verify_certificate = False boolean value Whether to verify Nimble SSL Certificate num_iser_scan_tries = 3 integer value The maximum number of times to rescan iSER target to find volume num_shell_tries = 3 integer value Number of times to attempt to run flakey shell commands num_volume_device_scan_tries = 3 integer value The maximum number of times to rescan targets to find volume nvmet_ns_id = 10 integer value The namespace id associated with the subsystem that will be created with the path for the LVM volume. nvmet_port_id = 1 port value The port that the NVMe target is listening on. powermax_array = None string value Serial number of the array to connect to. powermax_port_groups = None list value List of port groups containing frontend ports configured prior for server connection. powermax_service_level = None string value Service level to use for provisioning storage. Setting this as an extra spec in pool_name is preferable. powermax_snapvx_unlink_limit = 3 integer value Use this value to specify the maximum number of unlinks for the temporary snapshots before a clone operation. powermax_srp = None string value Storage resource pool on array to use for provisioning. proxy = cinder.volume.drivers.ibm.ibm_storage.proxy.IBMStorageProxy string value Proxy driver that connects to the IBM Storage Array pure_api_token = None string value REST API authorization token. pure_automatic_max_oversubscription_ratio = True boolean value Automatically determine an oversubscription ratio based on the current total data reduction values. If used this calculated value will override the max_over_subscription_ratio config option. 
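Drawing on the nfs_* and nas_* options above, a generic NFS backend section might look like the following sketch. The section name is a placeholder, and the volume_driver class path is an assumption based on the standard Cinder NFS driver rather than something stated in this table:

    [nfs-1]
    volume_backend_name = nfs-1
    volume_driver = cinder.volume.drivers.nfs.NfsDriver
    # File listing the exported shares this backend may mount.
    nfs_shares_config = /etc/cinder/nfs_shares
    nfs_sparsed_volumes = True
    nas_secure_file_operations = auto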
pure_eradicate_on_delete = False boolean value When enabled, all Pure volumes, snapshots, and protection groups will be eradicated at the time of deletion in Cinder. Data will NOT be recoverable after a delete with this set to True! When disabled, volumes and snapshots will go into pending eradication state and can be recovered. pure_host_personality = None string value Determines how the Purity system tunes the protocol used between the array and the initiator. pure_iscsi_cidr = 0.0.0.0/0 string value CIDR of FlashArray iSCSI targets hosts are allowed to connect to. Default will allow connection to any IP address. pure_replica_interval_default = 3600 integer value Snapshot replication interval in seconds. pure_replica_retention_long_term_default = 7 integer value Retain snapshots per day on target for this time (in days.) pure_replica_retention_long_term_per_day_default = 3 integer value Retain how many snapshots for each day. pure_replica_retention_short_term_default = 14400 integer value Retain all snapshots on target for this time (in seconds.) pure_replication_pg_name = cinder-group string value Pure Protection Group name to use for async replication (will be created if it does not exist). pure_replication_pod_name = cinder-pod string value Pure Pod name to use for sync replication (will be created if it does not exist). qnap_management_url = None uri value The URL to management QNAP Storage. Driver does not support IPv6 address in URL. qnap_poolname = None string value The pool name in the QNAP Storage qnap_storage_protocol = iscsi string value Communication protocol to access QNAP storage quobyte_client_cfg = None string value Path to a Quobyte Client configuration file. quobyte_mount_point_base = USDstate_path/mnt string value Base dir containing the mount point for the Quobyte volume. quobyte_overlay_volumes = False boolean value Create new volumes from the volume_from_snapshot_cache by creating overlay files instead of full copies. This speeds up the creation of volumes from this cache. This feature requires the options quobyte_qcow2_volumes and quobyte_volume_from_snapshot_cache to be set to True. If one of these is set to False this option is ignored. quobyte_qcow2_volumes = True boolean value Create volumes as QCOW2 files rather than raw files. quobyte_sparsed_volumes = True boolean value Create volumes as sparse files which take no space. If set to False, volume is created as regular file. quobyte_volume_from_snapshot_cache = False boolean value Create a cache of volumes from merged snapshots to speed up creation of multiple volumes from a single snapshot. quobyte_volume_url = None string value Quobyte URL to the Quobyte volume using e.g. a DNS SRV record (preferred) or a host list (alternatively) like quobyte://<DIR host1>, <DIR host2>/<volume name> rados_connect_timeout = -1 integer value Timeout value (in seconds) used when connecting to ceph cluster. If value < 0, no timeout is set and default librados value is used. rados_connection_interval = 5 integer value Interval value (in seconds) between connection retries to ceph cluster. rados_connection_retries = 3 integer value Number of retries if connection to ceph cluster failed. `rbd_ceph_conf = ` string value Path to the ceph configuration file rbd_cluster_name = ceph string value The name of ceph cluster rbd_exclusive_cinder_pool = False boolean value Set to True if the pool is used exclusively by Cinder. 
On exclusive use driver won't query images' provisioned size as they will match the value calculated by the Cinder core code for allocated_capacity_gb. This reduces the load on the Ceph cluster as well as on the volume service. rbd_flatten_volume_from_snapshot = False boolean value Flatten volumes created from snapshots to remove dependency from volume to snapshot `rbd_keyring_conf = ` string value Path to the ceph keyring file rbd_max_clone_depth = 5 integer value Maximum number of nested volume clones that are taken before a flatten occurs. Set to 0 to disable cloning. rbd_pool = rbd string value The RADOS pool where rbd volumes are stored rbd_secret_uuid = None string value The libvirt uuid of the secret for the rbd_user volumes rbd_store_chunk_size = 4 integer value Volumes will be chunked into objects of this size (in megabytes). rbd_user = None string value The RADOS client name for accessing rbd volumes - only set when using cephx authentication remove_empty_host = False boolean value To remove the host from Unity when the last LUN is detached from it. By default, it is False. replication_connect_timeout = 5 integer value Timeout value (in seconds) used when connecting to ceph cluster to do a demotion/promotion of volumes. If value < 0, no timeout is set and default librados value is used. replication_device = None dict value Multi opt of dictionaries to represent a replication target device. This option may be specified multiple times in a single config section to specify multiple replication target devices. Each entry takes the standard dict config form: replication_device = target_device_id:<required>,key1:value1,key2:value2... report_discard_supported = False boolean value Report to clients of Cinder that the backend supports discard (aka. trim/unmap). This will not actually change the behavior of the backend or the client directly, it will only notify that it can be used. report_dynamic_total_capacity = True boolean value Set to True for driver to report total capacity as a dynamic value (used + current free) and to False to report a static value (quota max bytes if defined and global size of cluster if not). reserved_percentage = 0 integer value The percentage of backend capacity is reserved retries = 200 integer value Use this value to specify number of retries. san_api_port = None port value Port to use to access the SAN API `san_clustername = ` string value Cluster name to use for creating volumes `san_ip = ` string value IP address of SAN controller san_is_local = False boolean value Execute commands locally instead of over SSH; use if the volume service is running on the SAN device san_login = admin string value Username for SAN controller `san_password = ` string value Password for SAN controller `san_private_key = ` string value Filename of private key to use for SSH authentication san_ssh_port = 22 port value SSH port to use with SAN san_thin_provision = True boolean value Use thin provisioning for SAN volumes? scst_target_driver = iscsi string value SCST target implementation can choose from multiple SCST target drivers. scst_target_iqn_name = None string value Certain ISCSI targets have predefined target names, SCST target driver uses this name. seagate_iscsi_ips = [] list value List of comma-separated target iSCSI IP addresses. seagate_pool_name = A string value Pool or vdisk name to use for volume creation. seagate_pool_type = virtual string value linear (for vdisk) or virtual (for virtual pool). 
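To show how the rbd_* and rados_* options listed earlier in this section are typically combined, here is a minimal sketch of a Ceph RBD backend section. The backend name, pool, user, and secret UUID are placeholder values, and the driver class path is given for illustration rather than taken from this reference.

```ini
[ceph-1]
# Driver class path shown for illustration; verify it for your release.
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_pool = volumes                                     # placeholder pool name
rbd_user = cinder                                      # placeholder cephx client name
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337 # placeholder libvirt secret UUID
rbd_flatten_volume_from_snapshot = False
rbd_exclusive_cinder_pool = True                       # pool is used only by Cinder in this sketch
rados_connect_timeout = -1
```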
`secondary_san_ip = ` string value IP address of secondary DSM controller secondary_san_login = Admin string value Secondary DSM user name `secondary_san_password = ` string value Secondary DSM user password secondary_sc_api_port = 3033 port value Secondary Dell API port sf_account_prefix = None string value Create SolidFire accounts with this prefix. Any string can be used here, but the string "hostname" is special and will create a prefix using the cinder node hostname (default behavior). The default is NO prefix. sf_allow_tenant_qos = False boolean value Allow tenants to specify QOS on create sf_api_port = 443 port value SolidFire API port. Useful if the device API is behind a proxy on a different port. sf_emulate_512 = True boolean value Set 512 byte emulation on volume creation. sf_enable_vag = False boolean value Utilize volume access groups on a per-tenant basis. sf_provisioning_calc = maxProvisionedSpace string value Change how SolidFire reports used space and provisioning calculations. If this parameter is set to usedSpace, the driver will report correct values as expected by Cinder thin provisioning. sf_svip = None string value Overrides default cluster SVIP with the one specified. This is required for deployments that have implemented the use of VLANs for iSCSI networks in their cloud. sf_volume_prefix = UUID- string value Create SolidFire volumes with this prefix. Volume names are of the form <sf_volume_prefix><cinder-volume-id>. The default is to use a prefix of UUID-. sheepdog_store_address = 127.0.0.1 string value IP address of sheep daemon. sheepdog_store_port = 7000 port value Port of sheep daemon. sio_allow_non_padded_volumes = False boolean value Renamed to vxflexos_allow_non_padded_volumes. sio_max_over_subscription_ratio = 10.0 floating point value Renamed to vxflexos_max_over_subscription_ratio. sio_rest_server_port = 443 port value Renamed to vxflexos_rest_server_port. sio_round_volume_capacity = True boolean value Renamed to vxflexos_round_volume_capacity. sio_server_api_version = None string value Renamed to vxflexos_server_api_version. sio_server_certificate_path = None string value Deprecated, use driver_ssl_cert_path instead. sio_storage_pools = None string value Renamed to vxflexos_storage_pools. sio_unmap_volume_before_deletion = False boolean value Renamed to vxflexos_unmap_volume_before_deletion. sio_verify_server_certificate = False boolean value Deprecated, use driver_ssl_cert_verify instead. smbfs_default_volume_format = vhd string value Default format that will be used when creating volumes if no volume format is specified. smbfs_mount_point_base = C:\OpenStack\_mnt string value Base dir containing mount points for smbfs shares. smbfs_pool_mappings = {} dict value Mappings between share locations and pool names. If not specified, the share names will be used as pool names. Example: //addr/share:pool_name,//addr/share2:pool_name2 smbfs_shares_config = C:\OpenStack\smbfs_shares.txt string value File with the list of available smbfs shares. spdk_max_queue_depth = 64 integer value Queue depth for RDMA transport. spdk_rpc_ip = None string value The NVMe target remote configuration IP address. spdk_rpc_password = None string value The NVMe target remote configuration password. spdk_rpc_port = 8000 port value The NVMe target remote configuration port. spdk_rpc_username = None string value The NVMe target remote configuration username.
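The SolidFire options above (sf_*) are used together with the generic san_* connection options listed earlier. The following is a minimal sketch with placeholder address and credentials; the driver class path is an assumption and is not documented in this reference.

```ini
[solidfire-1]
# Assumed driver class path; confirm it for your release.
volume_driver = cinder.volume.drivers.solidfire.SolidFireDriver
san_ip = 192.0.2.10            # placeholder cluster management IP
san_login = admin              # placeholder
san_password = secret          # placeholder
sf_allow_tenant_qos = False
sf_account_prefix = hostname   # special value: prefix accounts with the cinder node hostname
sf_volume_prefix = UUID-
```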
ssh_conn_timeout = 30 integer value SSH connection timeout in seconds ssh_max_pool_conn = 5 integer value Maximum SSH connections in the pool ssh_min_pool_conn = 1 integer value Minimum SSH connections in the pool storage_protocol = iscsi string value Protocol for transferring data between host and storage back-end. storage_vnx_authentication_type = global string value VNX authentication scope type. By default, the value is global. storage_vnx_pool_names = None list value Comma-separated list of storage pool names to be used. storage_vnx_security_file_dir = None string value Directory path that contains the VNX security file. Make sure the security file is generated first. storwize_peer_pool = None string value Specifies the name of the peer pool for hyperswap volume, the peer pool must exist on the other site. storwize_preferred_host_site = {} dict value Specifies the site information for host. One WWPN or multi WWPNs used in the host can be specified. For example: storwize_preferred_host_site=site1:wwpn1,site2:wwpn2&wwpn3 or storwize_preferred_host_site=site1:iqn1,site2:iqn2 storwize_san_secondary_ip = None string value Specifies secondary management IP or hostname to be used if san_ip is invalid or becomes inaccessible. storwize_svc_allow_tenant_qos = False boolean value Allow tenants to specify QOS on create storwize_svc_flashcopy_rate = 50 integer value Specifies the Storwize FlashCopy copy rate to be used when creating a full volume copy. The default rate is 50, and the valid rates are 1-150. storwize_svc_flashcopy_timeout = 120 integer value Maximum number of seconds to wait for FlashCopy to be prepared. storwize_svc_iscsi_chap_enabled = True boolean value Configure CHAP authentication for iSCSI connections (Default: Enabled) storwize_svc_mirror_pool = None string value Specifies the name of the pool in which mirrored copy is stored. Example: "pool2" storwize_svc_multihostmap_enabled = True boolean value This option no longer has any effect. It is deprecated and will be removed in a future release. storwize_svc_multipath_enabled = False boolean value Connect with multipath (FC only; iSCSI multipath is controlled by Nova) storwize_svc_stretched_cluster_partner = None string value If operating in stretched cluster mode, specify the name of the pool in which mirrored copies are stored. Example: "pool2" storwize_svc_vol_autoexpand = True boolean value Storage system autoexpand parameter for volumes (True/False) storwize_svc_vol_compression = False boolean value Storage system compression option for volumes storwize_svc_vol_easytier = True boolean value Enable Easy Tier for volumes storwize_svc_vol_grainsize = 256 integer value Storage system grain size parameter for volumes (8/32/64/128/256) storwize_svc_vol_iogrp = 0 string value The I/O group in which to allocate volumes. It can be a comma-separated list in which case the driver will select an io_group based on the least number of volumes associated with the io_group. storwize_svc_vol_nofmtdisk = False boolean value Specifies that the volume not be formatted during creation. storwize_svc_vol_rsize = 2 integer value Storage system space-efficiency parameter for volumes (percentage) storwize_svc_vol_warning = 0 integer value Storage system threshold for volume capacity warnings (percentage) storwize_svc_volpool_name = ['volpool'] list value Comma separated list of storage system storage pools for volumes. suppress_requests_ssl_warnings = False boolean value Suppress requests library SSL certificate warnings.
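For the storwize_svc_* family above, a typical backend section combines these options with the generic san_* connection options. The following sketch uses placeholder addresses and credentials; the iSCSI driver class path is an assumption and should be checked against the IBM Storwize/SVC driver documentation.

```ini
[storwize-1]
# Assumed driver class path; confirm it for your release.
volume_driver = cinder.volume.drivers.ibm.storwize_svc.storwize_svc_iscsi.StorwizeSVCISCSIDriver
san_ip = 192.0.2.20                # placeholder management IP
san_login = superuser              # placeholder
san_password = secret              # placeholder
storwize_svc_volpool_name = volpool
storwize_svc_vol_iogrp = 0
storwize_svc_iscsi_chap_enabled = True
storwize_svc_flashcopy_timeout = 120
```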
synology_admin_port = 5000 port value Management port for Synology storage. synology_device_id = None string value Device id for skip one time password check for logging in Synology storage if OTP is enabled. synology_one_time_pass = None string value One time password of administrator for logging in Synology storage if OTP is enabled. `synology_password = ` string value Password of administrator for logging in Synology storage. `synology_pool_name = ` string value Volume on Synology storage to be used for creating lun. synology_ssl_verify = True boolean value Do certificate validation or not if USDdriver_use_ssl is True synology_username = admin string value Administrator of Synology storage. target_helper = tgtadm string value Target user-land tool to use. tgtadm is default, use lioadm for LIO iSCSI support, scstadmin for SCST target support, ietadm for iSCSI Enterprise Target, iscsictl for Chelsio iSCSI Target, nvmet for NVMEoF support, spdk-nvmeof for SPDK NVMe-oF, or fake for testing. target_ip_address = USDmy_ip string value The IP address that the iSCSI daemon is listening on target_port = 3260 port value The port that the iSCSI daemon is listening on target_prefix = iqn.2010-10.org.openstack: string value Prefix for iSCSI volumes target_protocol = iscsi string value Determines the target protocol for new volumes, created with tgtadm, lioadm and nvmet target helpers. In order to enable RDMA, this parameter should be set with the value "iser". The supported iSCSI protocol values are "iscsi" and "iser", in case of nvmet target set to "nvmet_rdma". thres_avl_size_perc_start = 20 integer value If the percentage of available space for an NFS share has dropped below the value specified by this option, the NFS image cache will be cleaned. thres_avl_size_perc_stop = 60 integer value When the percentage of available space on an NFS share has reached the percentage specified by this option, the driver will stop clearing files from the NFS image cache that have not been accessed in the last M minutes, where M is the value of the expiry_thres_minutes configuration option. trace_flags = None list value List of options that control which trace info is written to the DEBUG log level to assist developers. Valid values are method and api. u4p_failover_autofailback = True boolean value If the driver should automatically failback to the primary instance of Unisphere when a successful connection is re-established. u4p_failover_backoff_factor = 1 integer value A backoff factor to apply between attempts after the second try (most errors are resolved immediately by a second try without a delay). Retries will sleep for: {backoff factor} * (2 ^ ({number of total retries} - 1)) seconds. u4p_failover_retries = 3 integer value The maximum number of retries each connection should attempt. Note, this applies only to failed DNS lookups, socket connections and connection timeouts, never to requests where data has made it to the server. u4p_failover_target = None dict value Dictionary of Unisphere failover target info. u4p_failover_timeout = 20.0 integer value How long to wait for the server to send data before giving up. unique_fqdn_network = True boolean value Whether or not our private network has unique FQDN on each initiator or not. For example networks with QA systems usually have multiple servers/VMs with the same FQDN. When true this will create host entries on K2 using the FQDN, when false it will use the reversed IQN/WWNN. 
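The target_* options above control the iSCSI (or NVMe-oF) target service used when Cinder exports locally managed volumes. As a sketch of switching the LVM reference backend from the default tgtadm tooling to LIO, with the backend section name being a placeholder:

```ini
[lvm-1]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
# Use the LIO target tooling instead of the default tgtadm.
target_helper = lioadm
target_protocol = iscsi
target_ip_address = $my_ip
target_port = 3260
target_prefix = iqn.2010-10.org.openstack:
```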
unity_io_ports = [] list value A comma-separated list of iSCSI or FC ports to be used. Each port can be Unix-style glob expressions. unity_storage_pool_names = [] list value A comma-separated list of storage pool names to be used. use_chap_auth = False boolean value Option to enable/disable CHAP authentication for targets. use_multipath_for_image_xfer = False boolean value Do we attach/detach volumes in cinder using multipath for volume to image and image to volume transfers? vmax_array = None string value DEPRECATED: vmax_array. vmax_port_groups = None list value DEPRECATED: vmax_port_groups. vmax_service_level = None string value DEPRECATED: vmax_service_level. vmax_snapvx_unlink_limit = 3 integer value DEPRECATED: vmax_snapvc_unlink_limit. vmax_srp = None string value DEPRECATED: vmax_srp. vmax_workload = None string value Workload, setting this as an extra spec in pool_name is preferable. vmware_adapter_type = lsiLogic string value Default adapter type to be used for attaching volumes. vmware_api_retry_count = 10 integer value Number of times VMware vCenter server API must be retried upon connection related issues. vmware_ca_file = None string value CA bundle file to use in verifying the vCenter server certificate. vmware_cluster_name = None multi valued Name of a vCenter compute cluster where volumes should be created. vmware_connection_pool_size = 10 integer value Maximum number of connections in http connection pool. vmware_datastore_regex = None string value Regular expression pattern to match the name of datastores where backend volumes are created. vmware_host_ip = None string value IP address for connecting to VMware vCenter server. vmware_host_password = None string value Password for authenticating with VMware vCenter server. vmware_host_port = 443 port value Port number for connecting to VMware vCenter server. vmware_host_username = None string value Username for authenticating with VMware vCenter server. vmware_host_version = None string value Optional string specifying the VMware vCenter server version. The driver attempts to retrieve the version from VMware vCenter server. Set this configuration only if you want to override the vCenter server version. vmware_image_transfer_timeout_secs = 7200 integer value Timeout in seconds for VMDK volume transfer between Cinder and Glance. vmware_insecure = False boolean value If true, the vCenter server certificate is not verified. If false, then the default CA truststore is used for verification. This option is ignored if "vmware_ca_file" is set. vmware_lazy_create = True boolean value If true, the backend volume in vCenter server is created lazily when the volume is created without any source. The backend volume is created when the volume is attached, uploaded to image service or during backup. vmware_max_objects_retrieval = 100 integer value Max number of objects to be retrieved per batch. Query results will be obtained in batches from the server and not in one shot. Server may still limit the count to something less than the configured value. vmware_snapshot_format = template string value Volume snapshot format in vCenter server. vmware_storage_profile = None multi valued Names of storage profiles to be monitored. vmware_task_poll_interval = 2.0 floating point value The interval (in seconds) for polling remote tasks invoked on VMware vCenter server. vmware_tmp_dir = /tmp string value Directory where virtual disks are stored during volume backup and restore. 
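Several of the vmware_* options above describe the connection to a vCenter server. A minimal sketch of such a backend section follows, using placeholder host, credentials, and certificate path; the driver class path is not part of this reference and should be verified against the VMware driver documentation.

```ini
[vmware-1]
# Assumed driver class path; confirm it for your release.
volume_driver = cinder.volume.drivers.vmware.vmdk.VMwareVcVmdkDriver
vmware_host_ip = vcenter.example.com             # placeholder
vmware_host_username = cinder-svc                # placeholder
vmware_host_password = secret                    # placeholder
vmware_host_port = 443
# Verify the vCenter certificate against a CA bundle instead of disabling checks.
vmware_insecure = False
vmware_ca_file = /etc/ssl/certs/vcenter-ca.pem   # placeholder path
vmware_cluster_name = compute-cluster-1          # placeholder
```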
vmware_volume_folder = Volumes string value Name of the vCenter inventory folder that will contain Cinder volumes. This folder will be created under "OpenStack/<project_folder>", where project_folder is of format "Project (<volume_project_id>)". vmware_wsdl_location = None string value Optional VIM service WSDL Location e.g http://<server>/vimService.wsdl . Optional over-ride to default location for bug work-arounds. vnx_async_migrate = True boolean value Always use asynchronous migration during volume cloning and creating from snapshot. As described in configuration doc, async migration has some constraints. Besides using metadata, customers could use this option to disable async migration. Be aware that async_migrate in metadata overrides this option when both are set. By default, the value is True. volume_backend_name = None string value The backend name for a given driver implementation volume_clear = zero string value Method used to wipe old volumes volume_clear_ionice = None string value The flag to pass to ionice to alter the i/o priority of the process used to zero a volume after deletion, for example "-c3" for idle only priority. volume_clear_size = 0 integer value Size in MiB to wipe at start of old volumes. 1024 MiB at max. 0 ⇒ all volume_copy_blkio_cgroup_name = cinder-volume-copy string value The blkio cgroup name to be used to limit bandwidth of volume copy volume_copy_bps_limit = 0 integer value The upper limit of bandwidth of volume copy. 0 ⇒ unlimited volume_dd_blocksize = 1M string value The default block size used when copying/clearing volumes volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver string value Driver to use for volume creation volume_group = cinder-volumes string value Name for the VG that will contain exported volumes volumes_dir = USDstate_path/volumes string value Volume configuration file storage directory vxflexos_allow_non_padded_volumes = False boolean value Allow volumes to be created in Storage Pools when zero padding is disabled. This option should not be enabled if multiple tenants will utilize volumes from a shared Storage Pool. vxflexos_max_over_subscription_ratio = 10.0 floating point value max_over_subscription_ratio setting for the driver. Maximum value allowed is 10.0. vxflexos_rest_server_port = 443 port value Gateway REST server port. vxflexos_round_volume_capacity = True boolean value Round volume sizes up to 8GB boundaries. VxFlex OS/ScaleIO requires volumes to be sized in multiples of 8GB. If set to False, volume creation will fail for volumes not sized properly vxflexos_server_api_version = None string value VxFlex OS/ScaleIO API version. This value should be left as the default value unless otherwise instructed by technical support. vxflexos_storage_pools = None string value Storage Pools. Comma separated list of storage pools used to provide volumes. Each pool should be specified as a protection_domain_name:storage_pool_name value vxflexos_unmap_volume_before_deletion = False boolean value Unmap volumes before deletion. vzstorage_default_volume_format = raw string value Default format that will be used when creating volumes if no volume format is specified. vzstorage_mount_options = None list value Mount options passed to the vzstorage client. See section of the pstorage-mount man page for details. vzstorage_mount_point_base = USDstate_path/mnt string value Base dir containing mount points for vzstorage shares. vzstorage_shares_config = /etc/cinder/vzstorage_shares string value File with the list of available vzstorage shares. 
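volume_backend_name is what ties a backend section to a volume type. The sketch below shows two LVM-based backends distinguished by that option; the section names and volume group names are placeholders. A volume type carrying a volume_backend_name extra spec can then be matched by the scheduler against either section.

```ini
[DEFAULT]
enabled_backends = lvm-fast,lvm-slow    # placeholder backend names

[lvm-fast]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes-ssd       # placeholder volume group
volume_backend_name = lvm-fast

[lvm-slow]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes-hdd       # placeholder volume group
volume_backend_name = lvm-slow
```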
vzstorage_sparsed_volumes = True boolean value Create volumes as sparsed files which take no space rather than regular files when using raw format, in which case volume creation takes lot of time. vzstorage_used_ratio = 0.95 floating point value Percent of ACTUAL usage of the underlying volume before no new volumes can be allocated to the volume destination. windows_iscsi_lun_path = C:\iSCSIVirtualDisks string value Path to store VHD backed volumes xtremio_array_busy_retry_count = 5 integer value Number of retries in case array is busy xtremio_array_busy_retry_interval = 5 integer value Interval between retries in case array is busy xtremio_clean_unused_ig = False boolean value Should the driver remove initiator groups with no volumes after the last connection was terminated. Since the behavior till now was to leave the IG be, we default to False (not deleting IGs without connected volumes); setting this parameter to True will remove any IG after terminating its connection to the last volume. `xtremio_cluster_name = ` string value XMS cluster id in multi-cluster environment xtremio_volumes_per_glance_cache = 100 integer value Number of volumes created from each cached glance image zadara_access_key = None string value VPSA access key zadara_default_snap_policy = False boolean value VPSA - Attach snapshot policy for volumes zadara_password = None string value VPSA - Password zadara_ssl_cert_verify = True boolean value If set to True the http client will validate the SSL certificate of the VPSA endpoint. zadara_use_iser = True boolean value VPSA - Use ISER instead of iSCSI zadara_user = None string value VPSA - Username zadara_vol_encrypt = False boolean value VPSA - Default encryption policy for volumes zadara_vol_name_template = OS_%s string value VPSA - Default template for VPSA volume names zadara_vpsa_host = None string value VPSA - Management Host name or IP address zadara_vpsa_poolname = None string value VPSA - Storage Pool assigned for volumes zadara_vpsa_port = None port value VPSA - Port number zadara_vpsa_use_ssl = False boolean value VPSA - Use SSL connection zfssa_cache_directory = os-cinder-cache string value Name of directory inside zfssa_nfs_share where cache volumes are stored. zfssa_cache_project = os-cinder-cache string value Name of ZFSSA project where cache volumes are stored. zfssa_data_ip = None string value Data path IP address zfssa_enable_local_cache = True boolean value Flag to enable local caching: True, False. zfssa_https_port = 443 string value HTTPS port number `zfssa_initiator = ` string value iSCSI initiator IQNs. (comma separated) `zfssa_initiator_config = ` string value iSCSI initiators configuration. `zfssa_initiator_group = ` string value iSCSI initiator group. `zfssa_initiator_password = ` string value Secret of the iSCSI initiator CHAP user. `zfssa_initiator_user = ` string value iSCSI initiator CHAP user (name). zfssa_lun_compression = off string value Data compression. zfssa_lun_logbias = latency string value Synchronous write bias. zfssa_lun_sparse = False boolean value Flag to enable sparse (thin-provisioned): True, False. zfssa_lun_volblocksize = 8k string value Block size. zfssa_manage_policy = loose string value Driver policy for volume manage. `zfssa_nfs_mount_options = ` string value Options to be passed while mounting share over nfs `zfssa_nfs_pool = ` string value Storage pool name. zfssa_nfs_project = NFSProject string value Project name. zfssa_nfs_share = nfs_share string value Share name. 
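The zfssa_* options above and on the following lines configure the Oracle ZFS Storage Appliance drivers. A minimal NFS-flavoured sketch is shown below with placeholder appliance addresses and credentials; the driver class path and the use of the san_* options here are assumptions, not something documented in this reference.

```ini
[zfssa-nfs-1]
# Assumed driver class path; confirm it against the ZFSSA driver documentation.
volume_driver = cinder.volume.drivers.zfssa.zfssanfs.ZFSSANFSDriver
san_ip = 192.0.2.30            # placeholder appliance management address
san_login = cinder             # placeholder
san_password = secret          # placeholder
zfssa_data_ip = 192.0.2.31     # placeholder data path address
zfssa_nfs_pool = pool-0        # placeholder storage pool
zfssa_nfs_project = NFSProject
zfssa_nfs_share = nfs_share
zfssa_enable_local_cache = True
```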
zfssa_nfs_share_compression = off string value Data compression. zfssa_nfs_share_logbias = latency string value Synchronous write bias-latency, throughput. zfssa_pool = None string value Storage pool name. zfssa_project = None string value Project name. `zfssa_replication_ip = ` string value IP address used for replication data. (maybe the same as data ip) zfssa_rest_timeout = None integer value REST connection timeout. (seconds) zfssa_target_group = tgt-grp string value iSCSI target group name. zfssa_target_interfaces = None string value Network interfaces of iSCSI targets. (comma separated) `zfssa_target_password = ` string value Secret of the iSCSI target CHAP user. zfssa_target_portal = None string value iSCSI target portal (Data-IP:Port, w.x.y.z:3260). `zfssa_target_user = ` string value iSCSI target CHAP user (name). 2.1.4. barbican The following table outlines the options available under the [barbican] group in the /etc/cinder/cinder.conf file. Table 2.3. barbican Configuration option = Default value Type Description auth_endpoint = http://localhost/identity/v3 string value Use this endpoint to connect to Keystone barbican_api_version = None string value Version of the Barbican API, for example: "v1" barbican_endpoint = None string value Use this endpoint to connect to Barbican, for example: "http://localhost:9311/" barbican_endpoint_type = public string value Specifies the type of endpoint. Allowed values are: public, private, and admin number_of_retries = 60 integer value Number of times to retry poll for key creation completion retry_delay = 1 integer value Number of seconds to wait before retrying poll for key creation completion verify_ssl = True boolean value Specifies if insecure TLS (https) requests. If False, the server's certificate will not be validated 2.1.5. brcd_fabric_example The following table outlines the options available under the [brcd_fabric_example] group in the /etc/cinder/cinder.conf file. Table 2.4. brcd_fabric_example Configuration option = Default value Type Description `fc_fabric_address = ` string value Management IP of fabric. `fc_fabric_password = ` string value Password for user. fc_fabric_port = 22 port value Connecting port `fc_fabric_ssh_cert_path = ` string value Local SSH certificate Path. `fc_fabric_user = ` string value Fabric user ID. fc_southbound_protocol = REST_HTTP string value South bound connector for the fabric. fc_virtual_fabric_id = None string value Virtual Fabric ID. zone_activate = True boolean value Overridden zoning activation state. zone_name_prefix = openstack string value Overridden zone name prefix. zoning_policy = initiator-target string value Overridden zoning policy. 2.1.6. cisco_fabric_example The following table outlines the options available under the [cisco_fabric_example] group in the /etc/cinder/cinder.conf file. Table 2.5. cisco_fabric_example Configuration option = Default value Type Description `cisco_fc_fabric_address = ` string value Management IP of fabric `cisco_fc_fabric_password = ` string value Password for user cisco_fc_fabric_port = 22 port value Connecting port `cisco_fc_fabric_user = ` string value Fabric user ID cisco_zone_activate = True boolean value overridden zoning activation state cisco_zone_name_prefix = None string value overridden zone name prefix cisco_zoning_policy = initiator-target string value overridden zoning policy cisco_zoning_vsan = None string value VSAN of the Fabric 2.1.7. 
coordination The following table outlines the options available under the [coordination] group in the /etc/cinder/cinder.conf file. Table 2.6. coordination Configuration option = Default value Type Description backend_url = file://USDstate_path string value The backend URL to use for distributed coordination. 2.1.8. cors The following table outlines the options available under the [cors] group in the /etc/cinder/cinder.conf file. Table 2.7. cors Configuration option = Default value Type Description allow_credentials = True boolean value Indicate that the actual request can include user credentials allow_headers = ['X-Auth-Token', 'X-Identity-Status', 'X-Roles', 'X-Service-Catalog', 'X-User-Id', 'X-Tenant-Id', 'X-OpenStack-Request-ID', 'X-Trace-Info', 'X-Trace-HMAC', 'OpenStack-API-Version'] list value Indicate which header field names may be used during the actual request. allow_methods = ['GET', 'PUT', 'POST', 'DELETE', 'PATCH', 'HEAD'] list value Indicate which methods can be used during the actual request. allowed_origin = None list value Indicate whether this resource may be shared with the domain received in the requests "origin" header. Format: "<protocol>://<host>[:<port>]", no trailing slash. Example: https://horizon.example.com expose_headers = ['X-Auth-Token', 'X-Subject-Token', 'X-Service-Token', 'X-OpenStack-Request-ID', 'OpenStack-API-Version'] list value Indicate which headers are safe to expose to the API. Defaults to HTTP Simple Headers. max_age = 3600 integer value Maximum cache age of CORS preflight requests. 2.1.9. database The following table outlines the options available under the [database] group in the /etc/cinder/cinder.conf file. Table 2.8. database Configuration option = Default value Type Description backend = sqlalchemy string value The back end to use for the database. connection = None string value The SQLAlchemy connection string to use to connect to the database. connection_debug = 0 integer value Verbosity of SQL debugging information: 0=None, 100=Everything. `connection_parameters = ` string value Optional URL parameters to append onto the connection URL at connect time; specify as param1=value1¶m2=value2&... connection_recycle_time = 3600 integer value Connections which have been present in the connection pool longer than this number of seconds will be replaced with a new one the time they are checked out from the pool. connection_trace = False boolean value Add Python stack traces to SQL as comment strings. db_inc_retry_interval = True boolean value If True, increases the interval between retries of a database operation up to db_max_retry_interval. db_max_retries = 20 integer value Maximum retries in case of connection error or deadlock error before error is raised. Set to -1 to specify an infinite retry count. db_max_retry_interval = 10 integer value If db_inc_retry_interval is set, the maximum seconds between retries of a database operation. db_retry_interval = 1 integer value Seconds between retries of a database transaction. max_overflow = 50 integer value If set, use this value for max_overflow with SQLAlchemy. max_pool_size = 5 integer value Maximum number of SQL connections to keep open in a pool. Setting a value of 0 indicates no limit. max_retries = 10 integer value Maximum number of database connection retries during startup. Set to -1 to specify an infinite retry count. mysql_enable_ndb = False boolean value If True, transparently enables support for handling MySQL Cluster (NDB). 
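To illustrate how the [database] options in this table are typically set, here is a minimal sketch; the connection URL (host, credentials, and database name) is entirely a placeholder.

```ini
[database]
# Placeholder SQLAlchemy URL; substitute your own host, credentials, and database name.
connection = mysql+pymysql://cinder:secret@db.example.com/cinder
max_pool_size = 5
max_overflow = 50
max_retries = 10
connection_recycle_time = 3600
```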
mysql_sql_mode = TRADITIONAL string value The SQL mode to be used for MySQL sessions. This option, including the default, overrides any server-set SQL mode. To use whatever SQL mode is set by the server configuration, set this to no value. Example: mysql_sql_mode= pool_timeout = None integer value If set, use this value for pool_timeout with SQLAlchemy. retry_interval = 10 integer value Interval between retries of opening a SQL connection. slave_connection = None string value The SQLAlchemy connection string to use to connect to the slave database. sqlite_synchronous = True boolean value If True, SQLite uses synchronous mode. use_db_reconnect = False boolean value Enable the experimental use of database reconnect on connection loss. 2.1.10. fc-zone-manager The following table outlines the options available under the [fc-zone-manager] group in the /etc/cinder/cinder.conf file. Table 2.9. fc-zone-manager Configuration option = Default value Type Description brcd_sb_connector = HTTP string value South bound connector for zoning operation cisco_sb_connector = cinder.zonemanager.drivers.cisco.cisco_fc_zone_client_cli.CiscoFCZoneClientCLI string value Southbound connector for zoning operation enable_unsupported_driver = False boolean value Set this to True when you want to allow an unsupported zone manager driver to start. Drivers that haven't maintained a working CI system and testing are marked as unsupported until CI is working again. This also marks a driver as deprecated and may be removed in a future release. fc_fabric_names = None string value Comma separated list of Fibre Channel fabric names. This list of names is used to retrieve other SAN credentials for connecting to each SAN fabric. fc_san_lookup_service = cinder.zonemanager.drivers.brocade.brcd_fc_san_lookup_service.BrcdFCSanLookupService string value FC SAN Lookup Service zone_driver = cinder.zonemanager.drivers.brocade.brcd_fc_zone_driver.BrcdFCZoneDriver string value FC Zone Driver responsible for zone management zoning_policy = initiator-target string value Zoning policy configured by user; valid values include "initiator-target" or "initiator" 2.1.11. healthcheck The following table outlines the options available under the [healthcheck] group in the /etc/cinder/cinder.conf file. Table 2.10. healthcheck Configuration option = Default value Type Description backends = [] list value Additional backends that can perform health checks and report that information back as part of a request. detailed = False boolean value Show more detailed information as part of the response. Security note: Enabling this option may expose sensitive details about the service being monitored. Be sure to verify that it will not violate your security policies. disable_by_file_path = None string value Check the presence of a file to determine if an application is running on a port. Used by DisableByFileHealthcheck plugin. disable_by_file_paths = [] list value Check the presence of a file based on a port to determine if an application is running on a port. Expects a "port:path" list of strings. Used by DisableByFilesPortsHealthcheck plugin. path = /healthcheck string value The path to respond to healthcheck requests on. 2.1.12. key_manager The following table outlines the options available under the [key_manager] group in the /etc/cinder/cinder.conf file. Table 2.11. key_manager Configuration option = Default value Type Description auth_type = None string value The type of authentication credential to create.
Possible values are token , password , keystone_token , and keystone_password . Required if no context is passed to the credential factory. auth_url = None string value Use this endpoint to connect to Keystone. backend = barbican string value Specify the key manager implementation. Options are "barbican" and "vault". Default is "barbican". Will support the values earlier set using [key_manager]/api_class for some time. domain_id = None string value Domain ID for domain scoping. Optional for keystone_token and keystone_password auth_type. domain_name = None string value Domain name for domain scoping. Optional for keystone_token and keystone_password auth_type. fixed_key = None string value Fixed key returned by key manager, specified in hex password = None string value Password for authentication. Required for password and keystone_password auth_type. project_domain_id = None string value Project's domain ID for project. Optional for keystone_token and keystone_password auth_type. project_domain_name = None string value Project's domain name for project. Optional for keystone_token and keystone_password auth_type. project_id = None string value Project ID for project scoping. Optional for keystone_token and keystone_password auth_type. project_name = None string value Project name for project scoping. Optional for keystone_token and keystone_password auth_type. reauthenticate = True boolean value Allow fetching a new token if the current one is going to expire. Optional for keystone_token and keystone_password auth_type. token = None string value Token for authentication. Required for token and keystone_token auth_type if no context is passed to the credential factory. trust_id = None string value Trust ID for trust scoping. Optional for keystone_token and keystone_password auth_type. user_domain_id = None string value User's domain ID for authentication. Optional for keystone_token and keystone_password auth_type. user_domain_name = None string value User's domain name for authentication. Optional for keystone_token and keystone_password auth_type. user_id = None string value User ID for authentication. Optional for keystone_token and keystone_password auth_type. username = None string value Username for authentication. Required for password auth_type. Optional for the keystone_password auth_type. 2.1.13. keystone_authtoken The following table outlines the options available under the [keystone_authtoken] group in the /etc/cinder/cinder.conf file. Table 2.12. keystone_authtoken Configuration option = Default value Type Description auth_section = None string value Config Section from which to load plugin specific options auth_type = None string value Authentication type to load auth_uri = None string value Complete "public" Identity API endpoint. This endpoint should not be an "admin" endpoint, as it should be accessible by all end users. Unauthenticated clients are redirected to this endpoint to authenticate. Although this endpoint should ideally be unversioned, client support in the wild varies. If you're using a versioned v2 endpoint here, then this should not be the same endpoint the service user utilizes for validating tokens, because normal end users may not be able to reach that endpoint. This option is deprecated in favor of www_authenticate_uri and will be removed in the S release. auth_version = None string value API version of the Identity API endpoint. cache = None string value Request environment key where the Swift cache object is stored. 
When auth_token middleware is deployed with a Swift cache, use this option to have the middleware share a caching backend with swift. Otherwise, use the memcached_servers option instead. cafile = None string value A PEM encoded Certificate Authority to use when verifying HTTPs connections. Defaults to system CAs. certfile = None string value Required if identity server requires client certificate delay_auth_decision = False boolean value Do not handle authorization requests within the middleware, but delegate the authorization decision to downstream WSGI components. enforce_token_bind = permissive string value Used to control the use and type of token binding. Can be set to: "disabled" to not check token binding. "permissive" (default) to validate binding information if the bind type is of a form known to the server and ignore it if not. "strict" like "permissive" but if the bind type is unknown the token will be rejected. "required" any form of token binding is needed to be allowed. Finally the name of a binding method that must be present in tokens. http_connect_timeout = None integer value Request timeout value for communicating with Identity API server. http_request_max_retries = 3 integer value How many times are we trying to reconnect when communicating with Identity API Server. include_service_catalog = True boolean value (Optional) Indicate whether to set the X-Service-Catalog header. If False, middleware will not ask for service catalog on token validation and will not set the X-Service-Catalog header. insecure = False boolean value Verify HTTPS connections. interface = admin string value Interface to use for the Identity API endpoint. Valid values are "public", "internal" or "admin"(default). keyfile = None string value Required if identity server requires client certificate memcache_pool_conn_get_timeout = 10 integer value (Optional) Number of seconds that an operation will wait to get a memcached client connection from the pool. memcache_pool_dead_retry = 300 integer value (Optional) Number of seconds memcached server is considered dead before it is tried again. memcache_pool_maxsize = 10 integer value (Optional) Maximum total number of open connections to every memcached server. memcache_pool_socket_timeout = 3 integer value (Optional) Socket timeout in seconds for communicating with a memcached server. memcache_pool_unused_timeout = 60 integer value (Optional) Number of seconds a connection to memcached is held unused in the pool before it is closed. memcache_secret_key = None string value (Optional, mandatory if memcache_security_strategy is defined) This string is used for key derivation. memcache_security_strategy = None string value (Optional) If defined, indicate whether token data should be authenticated or authenticated and encrypted. If MAC, token data is authenticated (with HMAC) in the cache. If ENCRYPT, token data is encrypted and authenticated in the cache. If the value is not one of these options or empty, auth_token will raise an exception on initialization. memcache_use_advanced_pool = False boolean value (Optional) Use the advanced (eventlet safe) memcached client pool. The advanced pool will only work under python 2.x. memcached_servers = None list value Optionally specify a list of memcached server(s) to use for caching. If left undefined, tokens will instead be cached in-process. region_name = None string value The region in which the identity server can be found. 
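A typical [keystone_authtoken] section combines the options in this table with credentials supplied by the auth plugin selected through auth_type. The following sketch uses placeholder endpoints and credentials; the options below the auth_type line come from the password auth plugin rather than from the table above.

```ini
[keystone_authtoken]
www_authenticate_uri = http://keystone.example.com:5000   # placeholder
interface = internal
region_name = RegionOne                                   # placeholder
memcached_servers = controller1:11211                     # placeholder
auth_type = password
# Options below are provided by the 'password' auth plugin, not listed in this table.
auth_url = http://keystone.example.com:5000               # placeholder
username = cinder
password = secret                                         # placeholder
project_name = service
user_domain_name = Default
project_domain_name = Default
```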
service_token_roles = ['service'] list value A choice of roles that must be present in a service token. Service tokens are allowed to request that an expired token can be used and so this check should tightly control that only actual services should be sending this token. Roles here are applied as an ANY check so any role in this list must be present. For backwards compatibility reasons this currently only affects the allow_expired check. service_token_roles_required = False boolean value For backwards compatibility reasons we must let valid service tokens pass that don't pass the service_token_roles check as valid. Setting this true will become the default in a future release and should be enabled if possible. service_type = None string value The name or type of the service as it appears in the service catalog. This is used to validate tokens that have restricted access rules. token_cache_time = 300 integer value In order to prevent excessive effort spent validating tokens, the middleware caches previously-seen tokens for a configurable duration (in seconds). Set to -1 to disable caching completely. www_authenticate_uri = None string value Complete "public" Identity API endpoint. This endpoint should not be an "admin" endpoint, as it should be accessible by all end users. Unauthenticated clients are redirected to this endpoint to authenticate. Although this endpoint should ideally be unversioned, client support in the wild varies. If you're using a versioned v2 endpoint here, then this should not be the same endpoint the service user utilizes for validating tokens, because normal end users may not be able to reach that endpoint. 2.1.14. nova The following table outlines the options available under the [nova] group in the /etc/cinder/cinder.conf file. Table 2.13. nova Configuration option = Default value Type Description auth_section = None string value Config Section from which to load plugin specific options auth_type = None string value Authentication type to load cafile = None string value PEM encoded Certificate Authority to use when verifying HTTPs connections. certfile = None string value PEM encoded client certificate cert file collect-timing = False boolean value Collect per-API call timing information. insecure = False boolean value Verify HTTPS connections. interface = public string value Type of the nova endpoint to use. This endpoint will be looked up in the keystone catalog and should be one of public, internal or admin. keyfile = None string value PEM encoded client certificate key file region_name = None string value Name of nova region to use. Useful if keystone manages more than one region. split-loggers = False boolean value Log requests to multiple loggers. timeout = None integer value Timeout value for http requests token_auth_url = None string value The authentication URL for the nova connection when using the current users token 2.1.15. oslo_concurrency The following table outlines the options available under the [oslo_concurrency] group in the /etc/cinder/cinder.conf file. Table 2.14. oslo_concurrency Configuration option = Default value Type Description disable_process_locking = False boolean value Enables or disables inter-process locks. lock_path = None string value Directory to use for lock files. For security, the specified directory should only be writable by the user running the processes that need locking. Defaults to environment variable OSLO_LOCK_PATH. If external locks are used, a lock path must be set. 2.1.16. 
oslo_messaging_amqp The following table outlines the options available under the [oslo_messaging_amqp] group in the /etc/cinder/cinder.conf file. Table 2.15. oslo_messaging_amqp Configuration option = Default value Type Description addressing_mode = dynamic string value Indicates the addressing mode used by the driver. Permitted values: legacy - use legacy non-routable addressing routable - use routable addresses dynamic - use legacy addresses if the message bus does not support routing otherwise use routable addressing anycast_address = anycast string value Appended to the address prefix when sending to a group of consumers. Used by the message bus to identify messages that should be delivered in a round-robin fashion across consumers. broadcast_prefix = broadcast string value address prefix used when broadcasting to all servers connection_retry_backoff = 2 integer value Increase the connection_retry_interval by this many seconds after each unsuccessful failover attempt. connection_retry_interval = 1 integer value Seconds to pause before attempting to re-connect. connection_retry_interval_max = 30 integer value Maximum limit for connection_retry_interval + connection_retry_backoff container_name = None string value Name for the AMQP container. must be globally unique. Defaults to a generated UUID default_notification_exchange = None string value Exchange name used in notification addresses. Exchange name resolution precedence: Target.exchange if set else default_notification_exchange if set else control_exchange if set else notify default_notify_timeout = 30 integer value The deadline for a sent notification message delivery. Only used when caller does not provide a timeout expiry. default_reply_retry = 0 integer value The maximum number of attempts to re-send a reply message which failed due to a recoverable error. default_reply_timeout = 30 integer value The deadline for an rpc reply message delivery. default_rpc_exchange = None string value Exchange name used in RPC addresses. Exchange name resolution precedence: Target.exchange if set else default_rpc_exchange if set else control_exchange if set else rpc default_send_timeout = 30 integer value The deadline for an rpc cast or call message delivery. Only used when caller does not provide a timeout expiry. default_sender_link_timeout = 600 integer value The duration to schedule a purge of idle sender links. Detach link after expiry. group_request_prefix = unicast string value address prefix when sending to any server in group idle_timeout = 0 integer value Timeout for inactive connections (in seconds) link_retry_delay = 10 integer value Time to pause between re-connecting an AMQP 1.0 link that failed due to a recoverable error. multicast_address = multicast string value Appended to the address prefix when sending a fanout message. Used by the message bus to identify fanout messages. notify_address_prefix = openstack.org/om/notify string value Address prefix for all generated Notification addresses notify_server_credit = 100 integer value Window size for incoming Notification messages pre_settled = ['rpc-cast', 'rpc-reply'] multi valued Send messages of this type pre-settled. Pre-settled messages will not receive acknowledgement from the peer. Note well: pre-settled messages may be silently discarded if the delivery fails. 
Permitted values: rpc-call - send RPC Calls pre-settled rpc-reply - send RPC Replies pre-settled rpc-cast - Send RPC Casts pre-settled notify - Send Notifications pre-settled pseudo_vhost = True boolean value Enable virtual host support for those message buses that do not natively support virtual hosting (such as qpidd). When set to true the virtual host name will be added to all message bus addresses, effectively creating a private subnet per virtual host. Set to False if the message bus supports virtual hosting using the hostname field in the AMQP 1.0 Open performative as the name of the virtual host. reply_link_credit = 200 integer value Window size for incoming RPC Reply messages. rpc_address_prefix = openstack.org/om/rpc string value Address prefix for all generated RPC addresses rpc_server_credit = 100 integer value Window size for incoming RPC Request messages `sasl_config_dir = ` string value Path to directory that contains the SASL configuration `sasl_config_name = ` string value Name of configuration file (without .conf suffix) `sasl_default_realm = ` string value SASL realm to use if no realm present in username `sasl_mechanisms = ` string value Space separated list of acceptable SASL mechanisms server_request_prefix = exclusive string value address prefix used when sending to a specific server ssl = False boolean value Attempt to connect via SSL. If no other ssl-related parameters are given, it will use the system's CA-bundle to verify the server's certificate. `ssl_ca_file = ` string value CA certificate PEM file used to verify the server's certificate `ssl_cert_file = ` string value Self-identifying certificate PEM file for client authentication `ssl_key_file = ` string value Private key PEM file used to sign ssl_cert_file certificate (optional) ssl_key_password = None string value Password for decrypting ssl_key_file (if encrypted) ssl_verify_vhost = False boolean value By default SSL checks that the name in the server's certificate matches the hostname in the transport_url. In some configurations it may be preferable to use the virtual hostname instead, for example if the server uses the Server Name Indication TLS extension (rfc6066) to provide a certificate per virtual host. Set ssl_verify_vhost to True if the server's SSL certificate uses the virtual host name instead of the DNS name. trace = False boolean value Debug: dump AMQP frames to stdout unicast_address = unicast string value Appended to the address prefix when sending to a particular RPC/Notification server. Used by the message bus to identify messages sent to a single destination. 2.1.17. oslo_messaging_kafka The following table outlines the options available under the [oslo_messaging_kafka] group in the /etc/cinder/cinder.conf file. Table 2.16. oslo_messaging_kafka Configuration option = Default value Type Description compression_codec = none string value The compression codec for all data generated by the producer. If not set, compression will not be used. Note that the allowed values of this depend on the kafka version conn_pool_min_size = 2 integer value The pool size limit for connections expiration policy conn_pool_ttl = 1200 integer value The time-to-live in sec of idle connections in the pool consumer_group = oslo_messaging_consumer string value Group id for Kafka consumer. 
Consumers in one group will coordinate message consumption enable_auto_commit = False boolean value Enable asynchronous consumer commits kafka_consumer_timeout = 1.0 floating point value Default timeout(s) for Kafka consumers kafka_max_fetch_bytes = 1048576 integer value Max fetch bytes of Kafka consumer max_poll_records = 500 integer value The maximum number of records returned in a poll call pool_size = 10 integer value Pool Size for Kafka Consumers producer_batch_size = 16384 integer value Size of batch for the producer async send producer_batch_timeout = 0.0 floating point value Upper bound on the delay for KafkaProducer batching in seconds sasl_mechanism = PLAIN string value Mechanism when security protocol is SASL security_protocol = PLAINTEXT string value Protocol used to communicate with brokers `ssl_cafile = ` string value CA certificate PEM file used to verify the server certificate 2.1.18. oslo_messaging_notifications The following table outlines the options available under the [oslo_messaging_notifications] group in the /etc/cinder/cinder.conf file. Table 2.17. oslo_messaging_notifications Configuration option = Default value Type Description driver = [] multi valued The driver(s) to handle sending notifications. Possible values are messaging, messagingv2, routing, log, test, noop retry = -1 integer value The maximum number of attempts to re-send a notification message which failed to be delivered due to a recoverable error. 0 - No retry, -1 - indefinite topics = ['notifications'] list value AMQP topic used for OpenStack notifications. transport_url = None string value A URL representing the messaging driver to use for notifications. If not set, we fall back to the same configuration used for RPC. 2.1.19. oslo_messaging_rabbit The following table outlines the options available under the [oslo_messaging_rabbit] group in the /etc/cinder/cinder.conf file. Table 2.18. oslo_messaging_rabbit Configuration option = Default value Type Description amqp_auto_delete = False boolean value Auto-delete queues in AMQP. amqp_durable_queues = False boolean value Use durable queues in AMQP. direct_mandatory_flag = True integer value Enable/Disable the RabbitMQ mandatory flag for direct send. The direct send is used as reply, so the MessageUndeliverable exception is raised in case the client queue does not exist. heartbeat_in_pthread = False boolean value EXPERIMENTAL: Run the health check heartbeat thread through a native Python thread. By default, if this option isn't provided, the health check heartbeat will inherit the execution model from the parent process. For example, if the parent process has monkey patched the stdlib by using eventlet/greenlet, then the heartbeat will be run through a green thread. heartbeat_rate = 2 integer value How many times during the heartbeat_timeout_threshold we check the heartbeat. heartbeat_timeout_threshold = 60 integer value Number of seconds after which the Rabbit broker is considered down if heartbeat's keep-alive fails (0 disables heartbeat). kombu_compression = None string value EXPERIMENTAL: Possible values are: gzip, bz2. If not set, compression will not be used. This option may not be available in future versions. kombu_failover_strategy = round-robin string value Determines how the RabbitMQ node is chosen in case the one we are currently connected to becomes unavailable. Takes effect only if more than one RabbitMQ node is provided in config.
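As a sketch of how the notification and RabbitMQ options above are commonly combined, the values below are the documented defaults except for the notification driver, which is set to one of the permitted values:

```ini
[oslo_messaging_notifications]
driver = messagingv2
topics = notifications
# transport_url is left unset so notifications fall back to the RPC transport.

[oslo_messaging_rabbit]
heartbeat_timeout_threshold = 60
heartbeat_rate = 2
amqp_durable_queues = False
kombu_failover_strategy = round-robin
```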
kombu_missing_consumer_retry_timeout = 60 integer value How long to wait a missing client before abandoning to send it its replies. This value should not be longer than rpc_response_timeout. kombu_reconnect_delay = 1.0 floating point value How long to wait before reconnecting in response to an AMQP consumer cancel notification. rabbit_ha_queues = False boolean value Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change this option, you must wipe the RabbitMQ database. In RabbitMQ 3.0, queue mirroring is no longer controlled by the x-ha-policy argument when declaring a queue. If you just want to make sure that all queues (except those with auto-generated names) are mirrored across all nodes, run: "rabbitmqctl set_policy HA ^(?!amq\.).* {"ha-mode": "all"} " rabbit_interval_max = 30 integer value Maximum interval of RabbitMQ connection retries. Default is 30 seconds. rabbit_login_method = AMQPLAIN string value The RabbitMQ login method. rabbit_qos_prefetch_count = 0 integer value Specifies the number of messages to prefetch. Setting to zero allows unlimited messages. rabbit_retry_backoff = 2 integer value How long to backoff for between retries when connecting to RabbitMQ. rabbit_retry_interval = 1 integer value How frequently to retry connecting with RabbitMQ. rabbit_transient_queues_ttl = 1800 integer value Positive integer representing duration in seconds for queue TTL (x-expires). Queues which are unused for the duration of the TTL are automatically deleted. The parameter affects only reply and fanout queues. ssl = False boolean value Connect over SSL. `ssl_ca_file = ` string value SSL certification authority file (valid only if SSL enabled). `ssl_cert_file = ` string value SSL cert file (valid only if SSL enabled). `ssl_key_file = ` string value SSL key file (valid only if SSL enabled). `ssl_version = ` string value SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some distributions. 2.1.20. oslo_middleware The following table outlines the options available under the [oslo_middleware] group in the /etc/cinder/cinder.conf file. Table 2.19. oslo_middleware Configuration option = Default value Type Description enable_proxy_headers_parsing = False boolean value Whether the application is behind a proxy or not. This determines if the middleware should parse the headers or not. max_request_body_size = 114688 integer value The maximum body size for each request, in bytes. secure_proxy_ssl_header = X-Forwarded-Proto string value The HTTP Header that will be used to determine what the original request protocol scheme was, even if it was hidden by a SSL termination proxy. 2.1.21. oslo_policy The following table outlines the options available under the [oslo_policy] group in the /etc/cinder/cinder.conf file. Table 2.20. oslo_policy Configuration option = Default value Type Description enforce_scope = False boolean value This option controls whether or not to enforce scope when evaluating policies. If True , the scope of the token used in the request is compared to the scope_types of the policy being enforced. If the scopes do not match, an InvalidScope exception will be raised. If False , a message will be logged informing operators that policies are being invoked with mismatching scope. policy_default_rule = default string value Default rule. Enforced when a requested rule is not found. policy_dirs = ['policy.d'] multi valued Directories where policy configuration files are stored. 
They can be relative to any directory in the search path defined by the config_dir option, or absolute paths. The file defined by policy_file must exist for these directories to be searched. Missing or empty directories are ignored. policy_file = policy.yaml string value The relative or absolute path of a file that maps roles to permissions for a given service. Relative paths must be specified in relation to the configuration file setting this option. remote_content_type = application/x-www-form-urlencoded string value Content Type to send and receive data for REST based policy check remote_ssl_ca_crt_file = None string value Absolute path to ca cert file for REST based policy check remote_ssl_client_crt_file = None string value Absolute path to client cert for REST based policy check remote_ssl_client_key_file = None string value Absolute path client key file REST based policy check remote_ssl_verify_server_crt = False boolean value server identity verification for REST based policy check 2.1.22. oslo_reports The following table outlines the options available under the [oslo_reports] group in the /etc/cinder/cinder.conf file. Table 2.21. oslo_reports Configuration option = Default value Type Description file_event_handler = None string value The path to a file to watch for changes to trigger the reports, instead of signals. Setting this option disables the signal trigger for the reports. If application is running as a WSGI application it is recommended to use this instead of signals. file_event_handler_interval = 1 integer value How many seconds to wait between polls when file_event_handler is set log_dir = None string value Path to a log directory where to create a file 2.1.23. oslo_versionedobjects The following table outlines the options available under the [oslo_versionedobjects] group in the /etc/cinder/cinder.conf file. Table 2.22. oslo_versionedobjects Configuration option = Default value Type Description fatal_exception_format_errors = False boolean value Make exception message format errors fatal 2.1.24. privsep The following table outlines the options available under the [privsep] group in the /etc/cinder/cinder.conf file. Table 2.23. privsep Configuration option = Default value Type Description capabilities = [] list value List of Linux capabilities retained by the privsep daemon. group = None string value Group that the privsep daemon should run as. helper_command = None string value Command to invoke to start the privsep daemon if not using the "fork" method. If not specified, a default is generated using "sudo privsep-helper" and arguments designed to recreate the current configuration. This command must accept suitable --privsep_context and --privsep_sock_path arguments. thread_pool_size = <based on operating system> integer value The number of threads available for privsep to concurrently run processes. Defaults to the number of CPU cores in the system. user = None string value User that the privsep daemon should run as. 2.1.25. profiler The following table outlines the options available under the [profiler] group in the /etc/cinder/cinder.conf file. Table 2.24. profiler Configuration option = Default value Type Description connection_string = messaging:// string value Connection string for a notifier backend. Default value is messaging:// which sets the notifier to oslo_messaging. Examples of possible values: messaging:// - use oslo_messaging driver for sending spans. redis://127.0.0.1:6379 - use redis driver for sending spans. 
mongodb://127.0.0.1:27017 - use mongodb driver for sending spans. elasticsearch://127.0.0.1:9200 - use elasticsearch driver for sending spans. jaeger://127.0.0.1:6831 - use jaeger tracing as driver for sending spans. enabled = False boolean value Enable the profiling for all services on this node. Default value is False (fully disable the profiling feature). Possible values: True: Enables the feature False: Disables the feature. The profiling cannot be started via this project's operations. If the profiling is triggered by another project, this project part will be empty. es_doc_type = notification string value Document type for notification indexing in elasticsearch. es_scroll_size = 10000 integer value Elasticsearch splits large requests in batches. This parameter defines maximum size of each batch (for example: es_scroll_size=10000). es_scroll_time = 2m string value This parameter is a time value parameter (for example: es_scroll_time=2m), indicating for how long the nodes that participate in the search will maintain relevant resources in order to continue and support it. filter_error_trace = False boolean value Enable filter traces that contain error/exception to a separated place. Default value is set to False. Possible values: True: Enable filter traces that contain error/exception. False: Disable the filter. hmac_keys = SECRET_KEY string value Secret key(s) to use for encrypting context data for performance profiling. This string value should have the following format: <key1>[,<key2>,... <keyn>], where each key is some random string. A user who triggers the profiling via the REST API has to set one of these keys in the headers of the REST API call to include profiling results of this node for this particular project. Both "enabled" flag and "hmac_keys" config options should be set to enable profiling. Also, to generate correct profiling information across all services at least one key needs to be consistent between OpenStack projects. This ensures it can be used from client side to generate the trace, containing information from all possible resources. sentinel_service_name = mymaster string value Redis sentinel uses a service name to identify a master redis service. This parameter defines the name (for example: sentinel_service_name=mymaster ). socket_timeout = 0.1 floating point value Redis sentinel provides a timeout option on the connections. This parameter defines that timeout (for example: socket_timeout=0.1). trace_sqlalchemy = False boolean value Enable SQL requests profiling in services. Default value is False (SQL requests won't be traced). Possible values: True: Enables SQL requests profiling. Each SQL query will be part of the trace and can then be analyzed by how much time was spent for that. False: Disables SQL requests profiling. The spent time is only shown on a higher level of operations. Single SQL queries cannot be analyzed this way. 2.1.26. sample_castellan_source The following table outlines the options available under the [sample_castellan_source] group in the /etc/cinder/cinder.conf file. Table 2.25. sample_castellan_source Configuration option = Default value Type Description config_file = None string value The path to a castellan configuration file. driver = None string value The name of the driver that can load this configuration source. mapping_file = None string value The path to a configuration/castellan_id mapping file. 2.1.27.
sample_remote_file_source The following table outlines the options available under the [sample_remote_file_source] group in the /etc/cinder/cinder.conf file. Table 2.26. sample_remote_file_source Configuration option = Default value Type Description ca_path = None string value The path to a CA_BUNDLE file or directory with certificates of trusted CAs. client_cert = None string value Client side certificate, as a single file path containing either the certificate only or the private key and the certificate. client_key = None string value Client side private key, in case client_cert is specified but does not includes the private key. driver = None string value The name of the driver that can load this configuration source. uri = None uri value Required option with the URI of the extra configuration file's location. 2.1.28. service_user The following table outlines the options available under the [service_user] group in the /etc/cinder/cinder.conf file. Table 2.27. service_user Configuration option = Default value Type Description auth-url = None string value Authentication URL cafile = None string value PEM encoded Certificate Authority to use when verifying HTTPs connections. certfile = None string value PEM encoded client certificate cert file collect-timing = False boolean value Collect per-API call timing information. domain-id = None string value Domain ID to scope to domain-name = None string value Domain name to scope to insecure = False boolean value Verify HTTPS connections. keyfile = None string value PEM encoded client certificate key file password = None string value User's password project-domain-id = None string value Domain ID containing project project-domain-name = None string value Domain name containing project project-id = None string value Project ID to scope to project-name = None string value Project name to scope to send_service_user_token = False boolean value When True, if sending a user token to an REST API, also send a service token. split-loggers = False boolean value Log requests to multiple loggers. system-scope = None string value Scope for system operations timeout = None integer value Timeout value for http requests trust-id = None string value Trust ID user-domain-id = None string value User's domain id user-domain-name = None string value User's domain name user-id = None string value User ID username = None string value Username 2.1.29. ssl The following table outlines the options available under the [ssl] group in the /etc/cinder/cinder.conf file. Table 2.28. ssl Configuration option = Default value Type Description ca_file = None string value CA certificate file to use to verify connecting clients. cert_file = None string value Certificate file to use when starting the server securely. ciphers = None string value Sets the list of available ciphers. value should be a string in the OpenSSL cipher list format. key_file = None string value Private key file to use when starting the server securely. version = None string value SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some distributions. 2.1.30. vault The following table outlines the options available under the [vault] group in the /etc/cinder/cinder.conf file. Table 2.29. 
vault Configuration option = Default value Type Description approle_role_id = None string value AppRole role_id for authentication with vault approle_secret_id = None string value AppRole secret_id for authentication with vault kv_mountpoint = secret string value Mountpoint of KV store in Vault to use, for example: secret root_token_id = None string value root token for vault ssl_ca_crt_file = None string value Absolute path to ca cert file use_ssl = False boolean value SSL Enabled/Disabled vault_url = http://127.0.0.1:8200 string value Use this endpoint to connect to Vault, for example: "http://127.0.0.1:8200"
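As a convenience when reading the [vault] options above, the following is a minimal sketch of a [vault] section in /etc/cinder/cinder.conf that uses AppRole authentication over TLS. The host name, role ID, secret ID, and CA file path are illustrative placeholders rather than defaults:
[vault]
vault_url = https://vault.example.com:8200
use_ssl = True
ssl_ca_crt_file = /etc/pki/tls/certs/vault-ca.pem
approle_role_id = <role_id>
approle_secret_id = <secret_id>
kv_mountpoint = secret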
Chapter 13. Using Cruise Control for cluster rebalancing | Chapter 13. Using Cruise Control for cluster rebalancing Cruise Control is an open source system for automating Kafka operations, such as monitoring cluster workload, rebalancing a cluster based on predefined constraints, and detecting and fixing anomalies. It consists of four main components- the Load Monitor, the Analyzer, the Anomaly Detector, and the Executor- and a REST API for client interactions. You can use Cruise Control to rebalance a Kafka cluster. Cruise Control for Streams for Apache Kafka on Red Hat Enterprise Linux is provided as a separate zipped distribution. Streams for Apache Kafka utilizes the REST API to support the following Cruise Control features: Generating optimization proposals from optimization goals. Rebalancing a Kafka cluster based on an optimization proposal. Optimization goals An optimization goal describes a specific objective to achieve from a rebalance. For example, a goal might be to distribute topic replicas across brokers more evenly. You can change what goals to include through configuration. A goal is defined as a hard goal or soft goal. You can add hard goals through Cruise Control deployment configuration. You also have main, default, and user-provided goals that fit into each of these categories. Hard goals are preset and must be satisfied for an optimization proposal to be successful. Soft goals do not need to be satisfied for an optimization proposal to be successful. They can be set aside if it means that all hard goals are met. Main goals are inherited from Cruise Control. Some are preset as hard goals. Main goals are used in optimization proposals by default. Default goals are the same as the main goals by default. You can specify your own set of default goals. User-provided goals are a subset of default goals that are configured for generating a specific optimization proposal. Optimization proposals Optimization proposals comprise the goals you want to achieve from a rebalance. You generate an optimization proposal to create a summary of proposed changes and the results that are possible with the rebalance. The goals are assessed in a specific order of priority. You can then choose to approve or reject the proposal. You can reject the proposal to run it again with an adjusted set of goals. You can generate and approve an optimization proposal by making a request to one of the following API endpoints. /rebalance endpoint to run a full rebalance. /add_broker endpoint to rebalance after adding brokers when scaling up a Kafka cluster. /remove_broker endpoint to rebalance before removing brokers when scaling down a Kafka cluster. You configure optimization goals through a configuration properties file. Streams for Apache Kafka provides example properties files for Cruise Control. 13.1. Cruise Control components and features Cruise Control consists of four main components- the Load Monitor, the Analyzer, the Anomaly Detector, and the Executor- and a REST API for client interactions. Streams for Apache Kafka utilizes the REST API to support the following Cruise Control features: Generating optimization proposals from optimization goals. Rebalancing a Kafka cluster based on an optimization proposal. Optimization goals An optimization goal describes a specific objective to achieve from a rebalance. For example, a goal might be to distribute topic replicas across brokers more evenly. You can change what goals to include through configuration. A goal is defined as a hard goal or soft goal. 
You can add hard goals through Cruise Control deployment configuration. You also have main, default, and user-provided goals that fit into each of these categories. Hard goals are preset and must be satisfied for an optimization proposal to be successful. Soft goals do not need to be satisfied for an optimization proposal to be successful. They can be set aside if it means that all hard goals are met. Main goals are inherited from Cruise Control. Some are preset as hard goals. Main goals are used in optimization proposals by default. Default goals are the same as the main goals by default. You can specify your own set of default goals. User-provided goals are a subset of default goals that are configured for generating a specific optimization proposal. Optimization proposals Optimization proposals comprise the goals you want to achieve from a rebalance. You generate an optimization proposal to create a summary of proposed changes and the results that are possible with the rebalance. The goals are assessed in a specific order of priority. You can then choose to approve or reject the proposal. You can reject the proposal to run it again with an adjusted set of goals. You can generate an optimization proposal in one of three modes. full is the default mode and runs a full rebalance. add-brokers is the mode you use after adding brokers when scaling up a Kafka cluster. remove-brokers is the mode you use before removing brokers when scaling down a Kafka cluster. Other Cruise Control features are not currently supported, including self healing, notifications, write-your-own goals, and changing the topic replication factor. Additional resources Cruise Control documentation 13.2. Downloading Cruise Control A ZIP file distribution of Cruise Control is available for download from the Red Hat website. You can download the latest version of Red Hat Streams for Apache Kafka from the Streams for Apache Kafka software downloads page . Procedure Download the latest version of the Red Hat Streams for Apache Kafka Cruise Control archive from the Red Hat Customer Portal . Create the /opt/cruise-control directory: sudo mkdir /opt/cruise-control Extract the contents of the Cruise Control ZIP file to the new directory: unzip amq-streams-<version>-cruise-control-bin.zip -d /opt/cruise-control Change the ownership of the /opt/cruise-control directory to the kafka user: sudo chown -R kafka:kafka /opt/cruise-control 13.3. Deploying the Cruise Control Metrics Reporter Before starting Cruise Control, you must configure the Kafka brokers to use the provided Cruise Control Metrics Reporter. The file for the Metrics Reporter is supplied with the Streams for Apache Kafka installation artifacts. When loaded at runtime, the Metrics Reporter sends metrics to the __CruiseControlMetrics topic, one of three auto-created topics . Cruise Control uses these metrics to create and update the workload model and to calculate optimization proposals. Prerequisites Streams for Apache Kafka is installed on each host , and the configuration files are available. You are logged in to Red Hat Enterprise Linux as the kafka user. Procedure For each broker in the Kafka cluster and one at a time: Stop the Kafka broker: /opt/kafka/bin/kafka-server-stop.sh Edit the Kafka configuration properties file to configure the Cruise Control Metrics Reporter. Add the CruiseControlMetricsReporter class to the metric.reporters configuration option. Do not remove any existing Metrics Reporters. 
metric.reporters=com.linkedin.kafka.cruisecontrol.metricsreporter.CruiseControlMetricsReporter Add the following configuration options and values: cruise.control.metrics.topic.auto.create=true cruise.control.metrics.topic.num.partitions=1 cruise.control.metrics.topic.replication.factor=1 These options enable the Cruise Control Metrics Reporter to create the __CruiseControlMetrics topic with a log cleanup policy of DELETE . For more information, see Auto-created topics and Log cleanup policy for Cruise Control Metrics topic . Configure SSL, if required. In the Kafka configuration properties file, configure SSL between the Cruise Control Metrics Reporter and the Kafka broker by setting the relevant client configuration properties. The Metrics Reporter accepts all standard producer-specific configuration properties with the cruise.control.metrics.reporter prefix. For example: cruise.control.metrics.reporter.ssl.truststore.password . In the Cruise Control properties file ( /opt/cruise-control/config/cruisecontrol.properties ), configure SSL between the Kafka broker and the Cruise Control server by setting the relevant client configuration properties. Cruise Control inherits SSL client property options from Kafka and uses those properties for all Cruise Control server clients. Restart the Kafka broker: /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties For information on restarting brokers in a multi-node cluster, see Section 4.3, "Performing a graceful rolling restart of Kafka brokers" . Repeat steps 1-5 for the remaining brokers. 13.4. Configuring and starting Cruise Control Configure the properties used by Cruise Control and then start the Cruise Control server using the kafka-cruise-control-start.sh script. The server is hosted on a single machine for the whole Kafka cluster. Three topics are auto-created when Cruise Control starts. For more information, see Auto-created topics . Prerequisites You are logged in to Red Hat Enterprise Linux as the kafka user. You have downloaded Cruise Control . You have deployed the Cruise Control Metrics Reporter . Procedure Edit the Cruise Control properties file ( /opt/cruise-control/config/cruisecontrol.properties ). Configure the properties shown in the following example configuration: # The Kafka cluster to control. bootstrap.servers=localhost:9092 1 # The replication factor of Kafka metric sample store topic sample.store.topic.replication.factor=2 2 # The configuration for the BrokerCapacityConfigFileResolver (supports JBOD, non-JBOD, and heterogeneous CPU core capacities) #capacity.config.file=config/capacity.json #capacity.config.file=config/capacityCores.json capacity.config.file=config/capacityJBOD.json 3 # The list of goals to optimize the Kafka cluster for with pre-computed proposals default.goals={List of default optimization goals} 4 # The list of supported goals goals={list of main optimization goals} 5 # The list of supported hard goals hard.goals={List of hard goals} 6 # How often should the cached proposal be expired and recalculated if necessary proposal.expiration.ms=60000 7 # The zookeeper connect of the Kafka cluster zookeeper.connect=localhost:2181 8 1 Host and port numbers of the Kafka broker (always port 9092). 2 Replication factor of the Kafka metric sample store topic. If you are evaluating Cruise Control in a single-node Kafka and ZooKeeper cluster, set this property to 1. For production use, set this property to 2 or more. 3 The configuration file that sets the maximum capacity limits for broker resources. Use the file that applies to your Kafka deployment configuration. For more information, see Capacity configuration . 4 Comma-separated list of default optimization goals, using fully-qualified domain names (FQDNs).
A number of main optimization goals (see 5) are already set as default optimization goals; you can add or remove goals if desired. For more information, see Section 13.5, "Optimization goals overview" . 5 Comma-separated list of main optimization goals, using FQDNs. To completely exclude goals from being used to generate optimization proposals, remove them from the list. For more information, see Section 13.5, "Optimization goals overview" . 6 Comma-separated list of hard goals, using FQDNs. Seven of the main optimization goals are already set as hard goals; you can add or remove goals if desired. For more information, see Section 13.5, "Optimization goals overview" . 7 The interval, in milliseconds, for refreshing the cached optimization proposal that is generated from the default optimization goals. For more information, see Section 13.6, "Optimization proposals overview" . 8 Host and port numbers of the ZooKeeper connection (always port 2181). Start the Cruise Control server. The server starts on port 9090 by default; optionally, specify a different port. cd /opt/cruise-control/ ./kafka-cruise-control-start.sh config/cruisecontrol.properties <port_number> To verify that Cruise Control is running, send a GET request to the /state endpoint of the Cruise Control server: curl -X GET 'http://<cc_host>:<cc_port>/kafkacruisecontrol/state' Auto-created topics The following table shows the three topics that are automatically created when Cruise Control starts. These topics are required for Cruise Control to work properly and must not be deleted or changed. Table 13.1. Auto-created topics Auto-created topic Created by Function __CruiseControlMetrics Cruise Control Metrics Reporter Stores the raw metrics from the Metrics Reporter in each Kafka broker. __KafkaCruiseControlPartitionMetricSamples Cruise Control Stores the derived metrics for each partition. These are created by the Metric Sample Aggregator . __KafkaCruiseControlModelTrainingSamples Cruise Control Stores the metrics samples used to create the Cluster Workload Model . To ensure that log compaction is disabled in the auto-created topics, make sure that you configure the Cruise Control Metrics Reporter as described in Section 13.3, "Deploying the Cruise Control Metrics Reporter" . Log compaction can remove records that are needed by Cruise Control and prevent it from working properly. Additional resources Log cleanup policy for Cruise Control Metrics topic 13.5. Optimization goals overview Optimization goals are constraints on workload redistribution and resource utilization across a Kafka cluster. To rebalance a Kafka cluster, Cruise Control uses optimization goals to generate optimization proposals . 13.5.1. Goals order of priority Streams for Apache Kafka on Red Hat Enterprise Linux supports all the optimization goals developed in the Cruise Control project.
The supported goals, in the default descending order of priority, are as follows: Rack-awareness Minimum number of leader replicas per broker for a set of topics Replica capacity Capacity: Disk capacity, Network inbound capacity, Network outbound capacity CPU capacity Replica distribution Potential network output Resource distribution: Disk utilization distribution, Network inbound utilization distribution, Network outbound utilization distribution Leader bytes-in rate distribution Topic replica distribution CPU usage distribution Leader replica distribution Preferred leader election Kafka Assigner disk usage distribution Intra-broker disk capacity Intra-broker disk usage For more information on each optimization goal, see Goals in the Cruise Control Wiki . 13.5.2. Goals configuration in the Cruise Control properties file You configure optimization goals in the cruisecontrol.properties file in the cruise-control/config/ directory. Cruise Control has configurations for hard optimization goals that must be satisfied, as well as main, default, and user-provided optimization goals. You can specify the following types of optimization goal in the following configuration: Main goals - cruisecontrol.properties file Hard goals - cruisecontrol.properties file Default goals - cruisecontrol.properties file User-provided goals - runtime parameters Optionally, user-provided optimization goals are set at runtime as parameters in requests to the /rebalance endpoint. Optimization goals are subject to any capacity limits on broker resources. 13.5.3. Hard and soft optimization goals Hard goals are goals that must be satisfied in optimization proposals. Goals that are not configured as hard goals are known as soft goals . You can think of soft goals as best effort goals: they do not need to be satisfied in optimization proposals, but are included in optimization calculations. Cruise Control will calculate optimization proposals that satisfy all the hard goals and as many soft goals as possible (in their priority order). An optimization proposal that does not satisfy all the hard goals is rejected by the Analyzer and is not sent to the user. Note For example, you might have a soft goal to distribute a topic's replicas evenly across the cluster (the topic replica distribution goal). Cruise Control will ignore this goal if doing so enables all the configured hard goals to be met. In Cruise Control, the following main optimization goals are preset as hard goals: To change the hard goals, edit the hard.goals property of the cruisecontrol.properties file and specify the goals using their fully-qualified domain names. Increasing the number of hard goals reduces the likelihood that Cruise Control will calculate and generate valid optimization proposals. 13.5.4. Main optimization goals The main optimization goals are available to all users. Goals that are not listed in the main optimization goals are not available for use in Cruise Control operations. The following main optimization goals are preset in the goals property of the cruisecontrol.properties file in descending priority order: To reduce complexity, we recommend that you do not change the preset main optimization goals, unless you need to completely exclude one or more goals from being used to generate optimization proposals. The priority order of the main optimization goals can be modified, if desired, in the configuration for default optimization goals. 
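For illustration only, the following sketch shows the general shape of the goal lists in cruisecontrol.properties and how they must relate to one another; the paragraphs that follow describe how to modify each list. The goal class names and the com.linkedin.kafka.cruisecontrol.analyzer.goals package prefix are assumed examples, not the full preset lists shipped with Cruise Control:
# Main optimization goals: every goal Cruise Control may use
goals=com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareGoal,com.linkedin.kafka.cruisecontrol.analyzer.goals.ReplicaCapacityGoal,com.linkedin.kafka.cruisecontrol.analyzer.goals.ReplicaDistributionGoal
# Hard goals: a subset of the main goals that every proposal must satisfy
hard.goals=com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareGoal,com.linkedin.kafka.cruisecontrol.analyzer.goals.ReplicaCapacityGoal
# Default goals: a subset of the main goals, used to generate the cached proposal
default.goals=com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareGoal,com.linkedin.kafka.cruisecontrol.analyzer.goals.ReplicaCapacityGoal,com.linkedin.kafka.cruisecontrol.analyzer.goals.ReplicaDistributionGoal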
To modify the preset main optimization goals, specify a list of goals in the goals property in descending priority order. Use fully-qualified domain names as shown in the cruisecontrol.properties file. You must specify at least one main goal, or Cruise Control will crash. Note If you change the preset main optimization goals, you must ensure that the configured hard.goals are a subset of the main optimization goals that you configured. Otherwise, errors will occur when generating optimization proposals. 13.5.5. Default optimization goals Cruise Control uses the default optimization goals list to generate the cached optimization proposal . For more information, see Section 13.6, "Optimization proposals overview" . You can override the default optimization goals at runtime by setting user-provided optimization goals . The following default optimization goals are preset in the default.goals property of the cruisecontrol.properties file in descending priority order: You must specify at least one default goal, or Cruise Control will crash. To modify the default optimization goals, specify a list of goals in the default.goals property in descending priority order. Default goals must be a subset of the main optimization goals; use fully-qualified domain names. 13.5.6. User-provided optimization goals User-provided optimization goals narrow down the configured default goals for a particular optimization proposal. You can set them, as required, as parameters in HTTP requests to the /rebalance endpoint. For more information, see Section 13.9, "Generating optimization proposals" . User-provided optimization goals can generate optimization proposals for different scenarios. For example, you might want to optimize leader replica distribution across the Kafka cluster without considering disk capacity or disk utilization. So, you send a request to the /rebalance endpoint containing a single goal for leader replica distribution. User-provided optimization goals must: Include all configured hard goals , or an error occurs Be a subset of the main optimization goals To ignore the configured hard goals in an optimization proposal, add the skip_hard_goals_check=true parameter to the request. Additional resources Cruise Control configuration Configurations in the Cruise Control Wiki 13.6. Optimization proposals overview An optimization proposal is a summary of proposed changes that would produce a more balanced Kafka cluster, with partition workloads distributed more evenly among the brokers. Each optimization proposal is based on the set of optimization goals that was used to generate it, subject to any configured capacity limits on broker resources. All optimization proposals are estimates of the impact of a proposed rebalance. You can approve or reject a proposal. You cannot approve a cluster rebalance without first generating the optimization proposal. You can run the optimization proposal using one of the following endpoints: /rebalance /add_broker /remove_broker 13.6.1. Rebalancing endpoints You specify a rebalancing endpoint when you send a POST request to generate an optimization proposal. /rebalance The /rebalance endpoint runs a full rebalance by moving replicas across all the brokers in the cluster. /add_broker The add_broker endpoint is used after scaling up a Kafka cluster by adding one or more brokers. Normally, after scaling up a Kafka cluster, new brokers are used to host only the partitions of newly created topics. 
If no new topics are created, the newly added brokers are not used and the existing brokers remain under the same load. By using the add_broker endpoint immediately after adding brokers to the cluster, the rebalancing operation moves replicas from existing brokers to the newly added brokers. You specify the new brokers as a brokerid list in the POST request. /remove_broker The /remove_broker endpoint is used before scaling down a Kafka cluster by removing one or more brokers. If you scale down a Kafka cluster, brokers are shut down even if they host replicas. This can lead to under-replicated partitions and possibly result in some partitions being under their minimum ISR (in-sync replicas). To avoid this potential problem, the /remove_broker endpoint moves replicas off the brokers that are going to be removed. When these brokers are not hosting replicas anymore, you can safely run the scaling down operation. You specify the brokers you're removing as a brokerid list in the POST request. In general, use the /rebalance endpoint to rebalance a Kafka cluster by spreading the load across brokers. Use the /add-broker endpoint and /remove_broker endpoint only if you want to scale your cluster up or down and rebalance the replicas accordingly. The procedure to run a rebalance is actually the same across the three different endpoints. The only difference is with listing brokers that have been added or will be removed to the request. 13.6.2. Approving or rejecting an optimization proposal An optimization proposal summary shows the proposed scope of changes. The summary is returned in a response to a HTTP request through the Cruise Control API. When you make a POST request to the /rebalance endpoint, an optimization proposal summary is returned in the response. Returning an optimization proposal summary curl -v -X POST 'cruise-control-server:9090/kafkacruisecontrol/rebalance' Use the summary to decide whether to approve or reject an optimization proposal. Approving an optimization proposal You approve the optimization proposal by making a POST request to the /rebalance endpoint and setting the dryrun parameter to false (default true ). Cruise Control applies the proposal to the Kafka cluster and starts a cluster rebalance operation. Rejecting an optimization proposal If you choose not to approve an optimization proposal, you can change the optimization goals or update any of the rebalance performance tuning options , and then generate another proposal. You can resend a request without the dryrun parameter to generate a new optimization proposal. Use the optimization proposal to assess the movements required for a rebalance. For example, a summary describes inter-broker and intra-broker movements. Inter-broker rebalancing moves data between separate brokers. Intra-broker rebalancing moves data between disks on the same broker when you are using a JBOD storage configuration. Such information can be useful even if you don't go ahead and approve the proposal. You might reject an optimization proposal, or delay its approval, because of the additional load on a Kafka cluster when rebalancing. In the following example, the proposal suggests the rebalancing of data between separate brokers. The rebalance involves the movement of 55 partition replicas, totaling 12MB of data, across the brokers. Though the inter-broker movement of partition replicas has a high impact on performance, the total amount of data is not large. 
If the total data was much larger, you could reject the proposal, or time when to approve the rebalance to limit the impact on the performance of the Kafka cluster. Rebalance performance tuning options can help reduce the impact of data movement. If you can extend the rebalance period, you can divide the rebalance into smaller batches. Fewer data movements at a single time reduces the load on the cluster. Example optimization proposal summary Optimization has 55 inter-broker replica (12 MB) moves, 0 intra-broker replica (0 MB) moves and 24 leadership moves with a cluster model of 5 recent windows and 100.000% of the partitions covered. Excluded Topics: []. Excluded Brokers For Leadership: []. Excluded Brokers For Replica Move: []. Counts: 3 brokers 343 replicas 7 topics. On-demand Balancedness Score Before (78.012) After (82.912). Provision Status: RIGHT_SIZED. The proposal will also move 24 partition leaders to different brokers, which has a low impact on performance. The balancedness scores are measurements of the overall balance of the Kafka Cluster before and after the optimization proposal is approved. A balancedness score is based on optimization goals. If all goals are satisfied, the score is 100. The score is reduced for each goal that will not be met. Compare the balancedness scores to see whether the Kafka cluster is less balanced than it could be following a rebalance. The provision status indicates whether the current cluster configuration supports the optimization goals. Check the provision status to see if you should add or remove brokers. Table 13.2. Optimization proposal provision status Status Description RIGHT_SIZED The cluster has an appropriate number of brokers to satisfy the optimization goals. UNDER_PROVISIONED The cluster is under-provisioned and requires more brokers to satisfy the optimization goals. OVER_PROVISIONED The cluster is over-provisioned and requires fewer brokers to satisfy the optimization goals. UNDECIDED The status is not relevant or it has not yet been decided. 13.6.3. Optimization proposal summary properties The following table describes the properties contained in an optimization proposal. Table 13.3. Properties contained in an optimization proposal summary Property Description n inter-broker replica (y MB) moves n : The number of partition replicas that will be moved between separate brokers. Performance impact during rebalance operation : Relatively high. y MB : The sum of the size of each partition replica that will be moved to a separate broker. Performance impact during rebalance operation : Variable. The larger the number of MBs, the longer the cluster rebalance will take to complete. n intra-broker replica (y MB) moves n : The total number of partition replicas that will be transferred between the disks of the cluster's brokers. Performance impact during rebalance operation : Relatively high, but less than inter-broker replica moves . y MB : The sum of the size of each partition replica that will be moved between disks on the same broker. Performance impact during rebalance operation : Variable. The larger the number, the longer the cluster rebalance will take to complete. Moving a large amount of data between disks on the same broker has less impact than between separate brokers (see inter-broker replica moves ). n excluded topics The number of topics excluded from the calculation of partition replica/leader movements in the optimization proposal. 
You can exclude topics in one of the following ways: In the cruisecontrol.properties file, specify a regular expression in the topics.excluded.from.partition.movement property. In a POST request to the /rebalance endpoint, specify a regular expression in the excluded_topics parameter. Topics that match the regular expression are listed in the response and will be excluded from the cluster rebalance. n leadership moves n : The number of partitions whose leaders will be switched to different replicas. Performance impact during rebalance operation : Relatively low. n recent windows n : The number of metrics windows upon which the optimization proposal is based. n% of the partitions covered n% : The percentage of partitions in the Kafka cluster covered by the optimization proposal. On-demand Balancedness Score Before (nn.yyy) After (nn.yyy) Measurements of the overall balance of a Kafka Cluster. Cruise Control assigns a Balancedness Score to every optimization goal based on several factors, including priority (the goal's position in the list of default.goals or user-provided goals). The On-demand Balancedness Score is calculated by subtracting the sum of the Balancedness Score of each violated soft goal from 100. The Before score is based on the current configuration of the Kafka cluster. The After score is based on the generated optimization proposal. 13.6.4. Cached optimization proposal Cruise Control maintains a cached optimization proposal based on the configured default optimization goals . Generated from the workload model, the cached optimization proposal is updated every 15 minutes to reflect the current state of the Kafka cluster. The most recent cached optimization proposal is returned when the following goal configurations are used: The default optimization goals User-provided optimization goals that can be met by the current cached proposal To change the cached optimization proposal refresh interval, edit the proposal.expiration.ms setting in the cruisecontrol.properties file. Consider a shorter interval for fast changing clusters, although this increases the load on the Cruise Control server. Additional resources Optimization goals overview Generating optimization proposals Initiating a cluster rebalance 13.7. Rebalance performance tuning overview You can adjust several performance tuning options for cluster rebalances. These options control how partition replicas and leadership movements in a rebalance are executed, as well as the bandwidth that is allocated to a rebalance operation. Partition reassignment commands Optimization proposals are composed of separate partition reassignment commands. When you initiate a proposal, the Cruise Control server applies these commands to the Kafka cluster. A partition reassignment command consists of either of the following types of operations: Partition movement : Involves transferring the partition replica and its data to a new location. Partition movements can take one of two forms: Inter-broker movement: The partition replica is moved to a log directory on a different broker. Intra-broker movement: The partition replica is moved to a different log directory on the same broker. Leadership movement : Involves switching the leader of the partition's replicas. Cruise Control issues partition reassignment commands to the Kafka cluster in batches. The performance of the cluster during the rebalance is affected by the number of each type of movement contained in each batch. 
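Because batch sizes and bandwidth can also be tuned for a single request, the following is an illustrative sketch of a dry-run /rebalance request that limits partition and leadership movements per batch and throttles replication. The parameter names are the rebalance tuning parameters summarized in the table later in this section; the host name and values are examples only:
curl -v -X POST 'cruise-control-server:9090/kafkacruisecontrol/rebalance?concurrent_partition_movements_per_broker=2&concurrent_leader_movements=250&replication_throttle=10000000'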
To configure partition reassignment commands, see Rebalance tuning options . Replica movement strategies Cluster rebalance performance is also influenced by the replica movement strategy that is applied to the batches of partition reassignment commands. By default, Cruise Control uses the BaseReplicaMovementStrategy , which applies the commands in the order in which they were generated. However, if there are some very large partition reassignments early in the proposal, this strategy can slow down the application of the other reassignments. Cruise Control provides three alternative replica movement strategies that can be applied to optimization proposals: PrioritizeSmallReplicaMovementStrategy : Order reassignments in ascending size. PrioritizeLargeReplicaMovementStrategy : Order reassignments in descending size. PostponeUrpReplicaMovementStrategy : Prioritize reassignments for replicas of partitions which have no out-of-sync replicas. These strategies can be configured as a sequence. The first strategy attempts to compare two partition reassignments using its internal logic. If the reassignments are equivalent, then it passes them to the strategy in the sequence to decide the order, and so on. To configure replica movement strategies, see Rebalance tuning options . Rebalance tuning options Cruise Control provides several configuration options for tuning rebalance parameters. These options are set in the following ways: As properties, in the default Cruise Control configuration, in the cruisecontrol.properties file As parameters in POST requests to the /rebalance endpoint The relevant configurations for both methods are summarized in the following table. Table 13.4. Rebalance performance tuning configuration Cruise Control properties KafkaRebalance parameters Default Description num.concurrent.partition.movements.per.broker concurrent_partition_movements_per_broker 5 The maximum number of inter-broker partition movements in each partition reassignment batch num.concurrent.intra.broker.partition.movements concurrent_intra_broker_partition_movements 2 The maximum number of intra-broker partition movements in each partition reassignment batch num.concurrent.leader.movements concurrent_leader_movements 1000 The maximum number of partition leadership changes in each partition reassignment batch default.replication.throttle replication_throttle Null (no limit) The bandwidth (in bytes per second) to assign to partition reassignment default.replica.movement.strategies replica_movement_strategies BaseReplicaMovementStrategy The list of strategies (in priority order) used to determine the order in which partition reassignment commands are executed for generated proposals. There are three strategies: PrioritizeSmallReplicaMovementStrategy , PrioritizeLargeReplicaMovementStrategy , and PostponeUrpReplicaMovementStrategy . For the server setting, use a comma-separated list with the fully qualified names of the strategy class (add com.linkedin.kafka.cruisecontrol.executor.strategy. to the start of each class name). For the rebalance parameters, use a comma-separated list of the class names of the replica movement strategies. Changing the default settings affects the length of time that the rebalance takes to complete, as well as the load placed on the Kafka cluster during the rebalance. Using lower values reduces the load but increases the amount of time taken, and vice versa. Additional resources Configurations in the Cruise Control Wiki REST APIs in the Cruise Control Wiki 13.8. 
Cruise Control configuration The config/cruisecontrol.properties file contains the configuration for Cruise Control. The file consists of properties in one of the following types: String Number Boolean You can specify and configure all the properties listed in the Configurations section of the Cruise Control Wiki. Capacity configuration Cruise Control uses capacity limits to determine if certain resource-based optimization goals are being broken. An attempted optimization fails if one or more of these resource-based goals is set as a hard goal and then broken. This prevents the optimization from being used to generate an optimization proposal. You specify capacity limits for Kafka broker resources in one of the following three .json files in cruise-control/config : capacityJBOD.json : For use in JBOD Kafka deployments (the default file). capacity.json : For use in non-JBOD Kafka deployments where each broker has the same number of CPU cores. capacityCores.json : For use in non-JBOD Kafka deployments where each broker has varying numbers of CPU cores. Set the file in the capacity.config.file property in cruisecontrol.properties . The selected file will be used for broker capacity resolution. For example: capacity.config.file=config/capacityJBOD.json Capacity limits can be set for the following broker resources in the described units: DISK : Disk storage in MB CPU : CPU utilization as a percentage (0-100) or as a number of cores NW_IN : Inbound network throughput in KB per second NW_OUT : Outbound network throughput in KB per second To apply the same capacity limits to every broker monitored by Cruise Control, set capacity limits for broker ID -1 . To set different capacity limits for individual brokers, specify each broker ID and its capacity configuration. Example capacity limits configuration { "brokerCapacities":[ { "brokerId": "-1", "capacity": { "DISK": "100000", "CPU": "100", "NW_IN": "10000", "NW_OUT": "10000" }, "doc": "This is the default capacity. Capacity unit used for disk is in MB, cpu is in percentage, network throughput is in KB." }, { "brokerId": "0", "capacity": { "DISK": "500000", "CPU": "100", "NW_IN": "50000", "NW_OUT": "50000" }, "doc": "This overrides the capacity for broker 0." } ] } For more information, see Populating the Capacity Configuration File in the Cruise Control Wiki. Log cleanup policy for Cruise Control Metrics topic It is important that the auto-created __CruiseControlMetrics topic (see auto-created topics ) has a log cleanup policy of DELETE rather than COMPACT . Otherwise, records that are needed by Cruise Control might be removed. As described in Section 13.3, "Deploying the Cruise Control Metrics Reporter" , setting the following options in the Kafka configuration file ensures that the DELETE log cleanup policy is correctly set: cruise.control.metrics.topic.auto.create=true cruise.control.metrics.topic.num.partitions=1 cruise.control.metrics.topic.replication.factor=1 If topic auto-creation is disabled in the Cruise Control Metrics Reporter ( cruise.control.metrics.topic.auto.create=false ), but enabled in the Kafka cluster, then the __CruiseControlMetrics topic is still automatically created by the broker. In this case, you must change the log cleanup policy of the __CruiseControlMetrics topic to DELETE using the kafka-configs.sh tool.
Get the current configuration of the __CruiseControlMetrics topic: opt/kafka/bin/kafka-configs.sh --bootstrap-server <broker_address> --entity-type topics --entity-name __CruiseControlMetrics --describe Change the log cleanup policy in the topic configuration: /opt/kafka/bin/kafka-configs.sh --bootstrap-server <broker_address> --entity-type topics --entity-name __CruiseControlMetrics --alter --add-config cleanup.policy=delete If topic auto-creation is disabled in both the Cruise Control Metrics Reporter and the Kafka cluster, you must create the __CruiseControlMetrics topic manually and then configure it to use the DELETE log cleanup policy using the kafka-configs.sh tool. For more information, see Section 7.9, "Modifying a topic configuration" . Logging configuration Cruise Control uses log4j1 for all server logging. To change the default configuration, edit the log4j.properties file in /opt/cruise-control/config/log4j.properties . You must restart the Cruise Control server before the changes take effect. 13.9. Generating optimization proposals When you make a POST request to the /rebalance endpoint, Cruise Control generates an optimization proposal to rebalance the Kafka cluster based on the optimization goals provided. You can use the results of the optimization proposal to rebalance your Kafka cluster. You can run the optimization proposal using one of the following endpoints: /rebalance /add_broker /remove_broker The endpoint you use depends on whether you are rebalancing across all the brokers already running in the Kafka cluster; or you want to rebalance after scaling up or before scaling down your Kafka cluster. For more information, see Rebalancing endpoints with broker scaling . The optimization proposal is generated as a dry run , unless the dryrun parameter is supplied and set to false . In "dry run mode", Cruise Control generates the optimization proposal and the estimated result, but doesn't initiate the proposal by rebalancing the cluster. You can analyze the information returned in the optimization proposal and decide whether to approve it. Use the following parameters to make requests to the endpoints: dryrun type: boolean, default: true Informs Cruise Control whether you want to generate an optimization proposal only ( true ), or generate an optimization proposal and perform a cluster rebalance ( false ). When dryrun=true (the default), you can also pass the verbose parameter to return more detailed information about the state of the Kafka cluster. This includes metrics for the load on each Kafka broker before and after the optimization proposal is applied, and the differences between the before and after values. excluded_topics type: regex A regular expression that matches the topics to exclude from the calculation of the optimization proposal. goals type: list of strings, default: the configured default.goals list List of user-provided optimization goals to use to prepare the optimization proposal. If goals are not supplied, the configured default.goals list in the cruisecontrol.properties file is used. skip_hard_goals_check type: boolean, default: false By default, Cruise Control checks that the user-provided optimization goals (in the goals parameter) contain all the configured hard goals (in hard.goals ). A request fails if you supply goals that are not a subset of the configured hard.goals . Set skip_hard_goals_check to true if you want to generate an optimization proposal with user-provided optimization goals that do not include all the configured hard.goals . 
json type: boolean, default: false Controls the type of response returned by the Cruise Control server. If not supplied, or set to false , then Cruise Control returns text formatted for display on the command line. If you want to extract elements of the returned information programmatically, set json=true . This will return JSON formatted text that can be piped to tools such as jq , or parsed in scripts and programs. verbose type: boolean, default: false Controls the level of detail in responses that are returned by the Cruise Control server. Can be used with dryrun=true . Note Other parameters are available. For more information, see REST APIs in the Cruise Control Wiki. Prerequisites Kafka is running. You have configured Cruise Control . (Optional for scaling up) You have installed new brokers on hosts to include in the rebalance. Procedure Generate an optimization proposal using a POST request to the /rebalance , /add_broker , or /remove_broker endpoint. Example request to /rebalance using default goals curl -v -X POST 'cruise-control-server:9090/kafkacruisecontrol/rebalance' The cached optimization proposal is immediately returned. Note If NotEnoughValidWindows is returned, Cruise Control has not yet recorded enough metrics data to generate an optimization proposal. Wait a few minutes and then resend the request. Example request to /rebalance using specified goals curl -v -X POST 'cruise-control-server:9090/kafkacruisecontrol/rebalance?goals=RackAwareGoal,ReplicaCapacityGoal' If the request satisfies the supplied goals, the cached optimization proposal is immediately returned. Otherwise, a new optimization proposal is generated using the supplied goals; this takes longer to calculate. You can enforce this behavior by adding the ignore_proposal_cache=true parameter to the request. Example request to /rebalance using specified goals without hard goals curl -v -X POST 'cruise-control-server:9090/kafkacruisecontrol/rebalance?goals=RackAwareGoal,ReplicaCapacityGoal,ReplicaDistributionGoal&skip_hard_goal_check=true' Example request to /add_broker that includes specified brokers curl -v -X POST 'cruise-control-server:9090/kafkacruisecontrol/add_broker?brokerid=3,4' The request includes the IDs of the new brokers only. For example, this request adds brokers with the IDs 3 and 4 . Replicas are moved to the new brokers from existing brokers when rebalancing. Example request to /remove_broker that excludes specified brokers curl -v -X POST 'cruise-control-server:9090/kafkacruisecontrol/remove_broker?brokerid=3,4' The request includes the IDs of the brokers being excluded only. For example, this request excludes brokers with the IDs 3 and 4 . Replicas are moved from the brokers being removed to other existing brokers when rebalancing. Note If a broker that is being removed has excluded topics, replicas are still moved. Review the optimization proposal contained in the response. The properties describe the pending cluster rebalance operation. The proposal contains a high level summary of the proposed optimization, followed by summaries for each default optimization goal, and the expected cluster state after the proposal has executed. Pay particular attention to the following information: The Cluster load after rebalance summary. If it meets your requirements, you should assess the impact of the proposed changes using the high level summary. n inter-broker replica (y MB) moves indicates how much data will be moved across the network between brokers. 
The higher the value, the greater the potential performance impact on the Kafka cluster during the rebalance. n intra-broker replica (y MB) moves indicates how much data will be moved within the brokers themselves (between disks). The higher the value, the greater the potential performance impact on individual brokers (although less than that of n inter-broker replica (y MB) moves ). The number of leadership moves. This has a negligible impact on the performance of the cluster during the rebalance. Asynchronous responses The Cruise Control REST API endpoints timeout after 10 seconds by default, although proposal generation continues on the server. A timeout might occur if the most recent cached optimization proposal is not ready, or if user-provided optimization goals were specified with ignore_proposal_cache=true . To allow you to retrieve the optimization proposal at a later time, take note of the request's unique identifier, which is given in the header of responses from the /rebalance endpoint. To obtain the response using curl , specify the verbose ( -v ) option: Here is an example header: * Connected to cruise-control-server (::1) port 9090 (#0) > POST /kafkacruisecontrol/rebalance HTTP/1.1 > Host: cc-host:9090 > User-Agent: curl/7.70.0 > Accept: / > * Mark bundle as not supporting multiuse < HTTP/1.1 200 OK < Date: Mon, 01 Jun 2023 15:19:26 GMT < Set-Cookie: JSESSIONID=node01wk6vjzjj12go13m81o7no5p7h9.node0; Path=/ < Expires: Thu, 01 Jan 1970 00:00:00 GMT < User-Task-ID: 274b8095-d739-4840-85b9-f4cfaaf5c201 < Content-Type: text/plain;charset=utf-8 < Cruise-Control-Version: 2.0.103.redhat-00002 < Cruise-Control-Commit_Id: 58975c9d5d0a78dd33cd67d4bcb497c9fd42ae7c < Content-Length: 12368 < Server: Jetty(9.4.26.v20200117-redhat-00001) If an optimization proposal is not ready within the timeout, you can re-submit the POST request, this time including the User-Task-ID of the original request in the header: curl -v -X POST -H 'User-Task-ID: 274b8095-d739-4840-85b9-f4cfaaf5c201' 'cruise-control-server:9090/kafkacruisecontrol/rebalance' What to do Section 13.10, "Approving an optimization proposal" 13.10. Approving an optimization proposal If you are satisfied with your most recently generated optimization proposal, you can instruct Cruise Control to initiate a cluster rebalance and begin reassigning partitions. Leave as little time as possible between generating an optimization proposal and initiating the cluster rebalance. If some time has passed since you generated the original optimization proposal, the cluster state might have changed. Therefore, the cluster rebalance that is initiated might be different to the one you reviewed. If in doubt, first generate a new optimization proposal. Only one cluster rebalance, with a status of "Active", can be in progress at a time. Prerequisites You have generated an optimization proposal from Cruise Control. Procedure Send a POST request to the /rebalance , /add_broker , or /remove_broker endpoint with the dryrun=false parameter: If you used the /add_broker or /remove_broker endpoint to generate a proposal that included or excluded brokers, use the same endpoint to perform the rebalance with or without the specified brokers. 
Example request to /rebalance curl -X POST 'cruise-control-server:9090/kafkacruisecontrol/rebalance?dryrun=false' Example request to /add_broker curl -v -X POST 'cruise-control-server:9090/kafkacruisecontrol/add_broker?dryrun=false&brokerid=3,4' Example request to /remove_broker curl -v -X POST 'cruise-control-server:9090/kafkacruisecontrol/remove_broker?dryrun=false&brokerid=3,4' Cruise Control initiates the cluster rebalance and returns the optimization proposal. Check the changes that are summarized in the optimization proposal. If the changes are not what you expect, you can stop the rebalance. Check the progress of the cluster rebalance using the /user_tasks endpoint. The cluster rebalance in progress has a status of "Active". To view all cluster rebalance tasks executed on the Cruise Control server: curl 'cruise-control-server:9090/kafkacruisecontrol/user_tasks' USER TASK ID CLIENT ADDRESS START TIME STATUS REQUEST URL c459316f-9eb5-482f-9d2d-97b5a4cd294d 0:0:0:0:0:0:0:1 2020-06-01_16:10:29 UTC Active POST /kafkacruisecontrol/rebalance?dryrun=false 445e2fc3-6531-4243-b0a6-36ef7c5059b4 0:0:0:0:0:0:0:1 2020-06-01_14:21:26 UTC Completed GET /kafkacruisecontrol/state?json=true 05c37737-16d1-4e33-8e2b-800dee9f1b01 0:0:0:0:0:0:0:1 2020-06-01_14:36:11 UTC Completed GET /kafkacruisecontrol/state?json=true aebae987-985d-4871-8cfb-6134ecd504ab 0:0:0:0:0:0:0:1 2020-06-01_16:10:04 UTC To view the status of a particular cluster rebalance task, supply the user-task-ids parameter and the task ID: (Optional) Removing brokers when scaling down After a successful rebalance, you can stop any brokers that you excluded in order to scale down the Kafka cluster. Check that each broker being removed does not have any live partitions in its log ( log.dirs ). ls -l <LogDir> | grep -E '^d' | grep -vE '[a-zA-Z0-9.-]+\.[a-z0-9]+-delete$' If a log directory does not match the regular expression \.[a-z0-9]+-delete$ , active partitions are still present. If you have active partitions, check that the rebalance has finished, or check the configuration for the optimization proposal. You can run the proposal again. Make sure that there are no active partitions before moving on to the next step. Stop the broker. su - kafka /opt/kafka/bin/kafka-server-stop.sh Confirm that the broker has stopped. jcmd | grep kafka 13.11. Stopping an active cluster rebalance You can stop the cluster rebalance that is currently in progress. This instructs Cruise Control to finish the current batch of partition reassignments and then stop the rebalance. When the rebalance has stopped, completed partition reassignments have already been applied; therefore, the state of the Kafka cluster is different when compared to before the start of the rebalance operation. If further rebalancing is required, you should generate a new optimization proposal. Note The performance of the Kafka cluster in the intermediate (stopped) state might be worse than in the initial state. Prerequisites A cluster rebalance is in progress (indicated by a status of "Active"). Procedure Send a POST request to the /stop_proposal_execution endpoint: Additional resources Generating optimization proposals | [
"sudo mkdir /opt/cruise-control",
"unzip amq-streams-<version>-cruise-control-bin.zip -d /opt/cruise-control",
"sudo chown -R kafka:kafka /opt/cruise-control",
"/opt/kafka/bin/kafka-server-stop.sh",
"metric.reporters=com.linkedin.kafka.cruisecontrol.metricsreporter.CruiseControlMetricsReporter",
"cruise.control.metrics.topic.auto.create=true cruise.control.metrics.topic.num.partitions=1 cruise.control.metrics.topic.replication.factor=1",
"/opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties",
"The Kafka cluster to control. bootstrap.servers=localhost:9092 1 The replication factor of Kafka metric sample store topic sample.store.topic.replication.factor=2 2 The configuration for the BrokerCapacityConfigFileResolver (supports JBOD, non-JBOD, and heterogeneous CPU core capacities) #capacity.config.file=config/capacity.json #capacity.config.file=config/capacityCores.json capacity.config.file=config/capacityJBOD.json 3 The list of goals to optimize the Kafka cluster for with pre-computed proposals default.goals={List of default optimization goals} 4 The list of supported goals goals={list of main optimization goals} 5 The list of supported hard goals hard.goals={List of hard goals} 6 How often should the cached proposal be expired and recalculated if necessary proposal.expiration.ms=60000 7 The zookeeper connect of the Kafka cluster zookeeper.connect=localhost:2181 8",
"cd /opt/cruise-control/ ./kafka-cruise-control-start.sh config/cruisecontrol.properties <port_number>",
"curl -X GET 'http://<cc_host>:<cc_port>/kafkacruisecontrol/state'",
"RackAwareGoal; MinTopicLeadersPerBrokerGoal; ReplicaCapacityGoal; DiskCapacityGoal; NetworkInboundCapacityGoal; NetworkOutboundCapacityGoal; CpuCapacityGoal",
"RackAwareGoal; MinTopicLeadersPerBrokerGoal; ReplicaCapacityGoal; DiskCapacityGoal; NetworkInboundCapacityGoal; NetworkOutboundCapacityGoal; ReplicaDistributionGoal; PotentialNwOutGoal; DiskUsageDistributionGoal; NetworkInboundUsageDistributionGoal; NetworkOutboundUsageDistributionGoal; CpuUsageDistributionGoal; TopicReplicaDistributionGoal; LeaderReplicaDistributionGoal; LeaderBytesInDistributionGoal; PreferredLeaderElectionGoal",
"RackAwareGoal; MinTopicLeadersPerBrokerGoal; ReplicaCapacityGoal; DiskCapacityGoal; NetworkInboundCapacityGoal; NetworkOutboundCapacityGoal; CpuCapacityGoal; ReplicaDistributionGoal; PotentialNwOutGoal; DiskUsageDistributionGoal; NetworkInboundUsageDistributionGoal; NetworkOutboundUsageDistributionGoal; CpuUsageDistributionGoal; TopicReplicaDistributionGoal; LeaderReplicaDistributionGoal; LeaderBytesInDistributionGoal",
"curl -v -X POST 'cruise-control-server:9090/kafkacruisecontrol/rebalance'",
"Optimization has 55 inter-broker replica (12 MB) moves, 0 intra-broker replica (0 MB) moves and 24 leadership moves with a cluster model of 5 recent windows and 100.000% of the partitions covered. Excluded Topics: []. Excluded Brokers For Leadership: []. Excluded Brokers For Replica Move: []. Counts: 3 brokers 343 replicas 7 topics. On-demand Balancedness Score Before (78.012) After (82.912). Provision Status: RIGHT_SIZED.",
"capacity.config.file=config/capacityJBOD.json",
"{ \"brokerCapacities\":[ { \"brokerId\": \"-1\", \"capacity\": { \"DISK\": \"100000\", \"CPU\": \"100\", \"NW_IN\": \"10000\", \"NW_OUT\": \"10000\" }, \"doc\": \"This is the default capacity. Capacity unit used for disk is in MB, cpu is in percentage, network throughput is in KB.\" }, { \"brokerId\": \"0\", \"capacity\": { \"DISK\": \"500000\", \"CPU\": \"100\", \"NW_IN\": \"50000\", \"NW_OUT\": \"50000\" }, \"doc\": \"This overrides the capacity for broker 0.\" } ] }",
"opt/kafka/bin/kafka-configs.sh --bootstrap-server <broker_address> --entity-type topics --entity-name __CruiseControlMetrics --describe",
"/opt/kafka/bin/kafka-configs.sh --bootstrap-server <broker_address> --entity-type topics --entity-name __CruiseControlMetrics --alter --add-config cleanup.policy=delete",
"curl -v -X POST 'cruise-control-server:9090/kafkacruisecontrol/rebalance'",
"curl -v -X POST 'cruise-control-server:9090/kafkacruisecontrol/rebalance?goals=RackAwareGoal,ReplicaCapacityGoal'",
"curl -v -X POST 'cruise-control-server:9090/kafkacruisecontrol/rebalance?goals=RackAwareGoal,ReplicaCapacityGoal,ReplicaDistributionGoal&skip_hard_goal_check=true'",
"curl -v -X POST 'cruise-control-server:9090/kafkacruisecontrol/add_broker?brokerid=3,4'",
"curl -v -X POST 'cruise-control-server:9090/kafkacruisecontrol/remove_broker?brokerid=3,4'",
"curl -v -X POST 'cruise-control-server:9090/kafkacruisecontrol/rebalance'",
"* Connected to cruise-control-server (::1) port 9090 (#0) > POST /kafkacruisecontrol/rebalance HTTP/1.1 > Host: cc-host:9090 > User-Agent: curl/7.70.0 > Accept: / > * Mark bundle as not supporting multiuse < HTTP/1.1 200 OK < Date: Mon, 01 Jun 2023 15:19:26 GMT < Set-Cookie: JSESSIONID=node01wk6vjzjj12go13m81o7no5p7h9.node0; Path=/ < Expires: Thu, 01 Jan 1970 00:00:00 GMT < User-Task-ID: 274b8095-d739-4840-85b9-f4cfaaf5c201 < Content-Type: text/plain;charset=utf-8 < Cruise-Control-Version: 2.0.103.redhat-00002 < Cruise-Control-Commit_Id: 58975c9d5d0a78dd33cd67d4bcb497c9fd42ae7c < Content-Length: 12368 < Server: Jetty(9.4.26.v20200117-redhat-00001)",
"curl -v -X POST -H 'User-Task-ID: 274b8095-d739-4840-85b9-f4cfaaf5c201' 'cruise-control-server:9090/kafkacruisecontrol/rebalance'",
"curl -X POST 'cruise-control-server:9090/kafkacruisecontrol/rebalance?dryrun=false'",
"curl -v -X POST 'cruise-control-server:9090/kafkacruisecontrol/add_broker?dryrun=false&brokerid=3,4'",
"curl -v -X POST 'cruise-control-server:9090/kafkacruisecontrol/remove_broker?dryrun=false&brokerid=3,4'",
"curl 'cruise-control-server:9090/kafkacruisecontrol/user_tasks' USER TASK ID CLIENT ADDRESS START TIME STATUS REQUEST URL c459316f-9eb5-482f-9d2d-97b5a4cd294d 0:0:0:0:0:0:0:1 2020-06-01_16:10:29 UTC Active POST /kafkacruisecontrol/rebalance?dryrun=false 445e2fc3-6531-4243-b0a6-36ef7c5059b4 0:0:0:0:0:0:0:1 2020-06-01_14:21:26 UTC Completed GET /kafkacruisecontrol/state?json=true 05c37737-16d1-4e33-8e2b-800dee9f1b01 0:0:0:0:0:0:0:1 2020-06-01_14:36:11 UTC Completed GET /kafkacruisecontrol/state?json=true aebae987-985d-4871-8cfb-6134ecd504ab 0:0:0:0:0:0:0:1 2020-06-01_16:10:04 UTC",
"curl 'cruise-control-server:9090/kafkacruisecontrol/user_tasks?user_task_ids=c459316f-9eb5-482f-9d2d-97b5a4cd294d'",
"ls -l <LogDir> | grep -E '^d' | grep -vE '[a-zA-Z0-9.-]+\\.[a-z0-9]+-deleteUSD'",
"su - kafka /opt/kafka/bin/kafka-server-stop.sh",
"jcmd | grep kafka",
"curl -X POST 'cruise-control-server:9090/kafkacruisecontrol/stop_proposal_execution'"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/using_streams_for_apache_kafka_on_rhel_with_zookeeper/cruise-control-concepts-str |
Appendix A. Revision History | Appendix A. Revision History Revision History Revision 3.8-7 Wed Dec 20 2023 Lenka Spackova Fixed broken links and anchors. Revision 3.8-6 Mon Jul 03 2023 Lenka Spackova The rh-mariadb103 Software Collection is EOL. Revision 3.8-5 Tue May 23 2023 Lenka Spackova Added information about Red Hat Developer Toolset 12.1. Revision 3.8-4 Tue Nov 22 2022 Lenka Spackova Added information about Red Hat Developer Toolset 12.0. Revision 3.8-3 Fri Nov 18 2022 Lenka Spackova Updated retired components. Updated component versions available with asynchronous updates. Revision 3.8-2 Thu Sep 29 2022 Lenka Spackova Added Section 1.3.7, "Changes in Apache httpd (asynchronous update)" . Revision 3.8-1 Mon Nov 15 2021 Lenka Spackova Release of Red Hat Software Collections 3.8 Release Notes. Revision 3.8-0 Mon Oct 11 2021 Lenka Spackova Release of Red Hat Software Collections 3.8 Beta Release Notes. | null | https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/3.8_release_notes/appe-documentation-3.8_release_notes-revision_history |
Chapter 4. Installing a user-provisioned bare metal cluster on a restricted network | Chapter 4. Installing a user-provisioned bare metal cluster on a restricted network In OpenShift Container Platform 4.15, you can install a cluster on bare metal infrastructure that you provision in a restricted network. Important While you might be able to follow this procedure to deploy a cluster on virtualized or cloud environments, you must be aware of additional considerations for non-bare metal platforms. Review the information in the guidelines for deploying OpenShift Container Platform on non-tested platforms before you attempt to install an OpenShift Container Platform cluster in such an environment. 4.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You created a registry on your mirror host and obtained the imageContentSources data for your version of OpenShift Container Platform. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. You provisioned persistent storage for your cluster. To deploy a private image registry, your storage must provide ReadWriteMany access modes. If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 4.2. About installations in restricted networks In OpenShift Container Platform 4.15, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. Important Because of the complexity of the configuration for user-provisioned installations, consider completing a standard user-provisioned infrastructure installation before you attempt a restricted network installation using user-provisioned infrastructure. Completing this test installation might make it easier to isolate and troubleshoot any issues that might arise during your installation in a restricted network. 4.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 4.3. 
Internet access for OpenShift Container Platform In OpenShift Container Platform 4.15, you require access to the internet to obtain the images that are necessary to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. 4.4. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 4.4.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 4.1. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Note As an exception, you can run zero compute machines in a bare metal cluster that consists of three control plane machines only. This provides smaller, more resource efficient clusters for cluster administrators and developers to use for testing, development, and production. Running one compute machine is not supported. Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can choose between Red Hat Enterprise Linux CoreOS (RHCOS), Red Hat Enterprise Linux (RHEL) 8.6 and later. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 9.2 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 4.4.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 4.2. Minimum resource requirements Machine Operating System CPU [1] RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One CPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = CPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. 
As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 4.4.3. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. Additional resources See Configuring a three-node cluster for details about deploying three-node clusters in bare metal environments. See Approving the certificate signing requests for your machines for more information about approving cluster certificate signing requests after installation. 4.4.4. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. During the initial boot, the machines require an IP address configuration that is set either through a DHCP server or statically by providing the required boot options. After a network connection is established, the machines download their Ignition config files from an HTTP or HTTPS server. The Ignition config files are then used to set the exact state of each machine. The Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation. It is recommended to use a DHCP server for long-term management of the cluster machines. Ensure that the DHCP server is configured to provide persistent IP addresses, DNS server information, and hostnames to the cluster machines. Note If a DHCP service is not available for your user-provisioned infrastructure, you can instead provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. The Kubernetes API server must be able to resolve the node names of the cluster machines. 
If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. 4.4.4.1. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 4.4.4.2. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Table 4.3. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If an external NTP time server is configured, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 4.4. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 4.5. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports NTP configuration for user-provisioned infrastructure OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service . If a DHCP server provides NTP server information, the chrony time service on the Red Hat Enterprise Linux CoreOS (RHCOS) machines read the information and can sync the clock with the NTP servers. Additional resources Configuring chrony time service 4.4.5. User-provisioned DNS requirements In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard The bootstrap, control plane, and compute machines Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. 
DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. Note It is recommended to use a DHCP server to provide the hostnames to each cluster node. See the DHCP recommendations for user-provisioned infrastructure section for more information. The following DNS records are required for a user-provisioned OpenShift Container Platform cluster and they must be in place before installation. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 4.6. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the hostnames that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Bootstrap machine bootstrap.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster. Control plane machines <control_plane><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster. Compute machines <compute><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. Note In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration. Tip You can use the dig command to verify name and reverse name resolution. See the section on Validating DNS resolution for user-provisioned infrastructure for detailed validation steps. 4.4.5.1. Example DNS configuration for user-provisioned clusters This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 
The samples are not meant to provide advice for choosing one DNS solution over another. In the examples, the cluster name is ocp4 and the base domain is example.com . Example DNS A record configuration for a user-provisioned cluster The following example is a BIND zone file that shows sample A records for name resolution in a user-provisioned cluster. Example 4.1. Sample DNS zone database USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF 1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer. 2 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications. 3 Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 4 Provides name resolution for the bootstrap machine. 5 6 7 Provides name resolution for the control plane machines. 8 9 Provides name resolution for the compute machines. Example DNS PTR record configuration for a user-provisioned cluster The following example BIND zone file shows sample PTR records for reverse name resolution in a user-provisioned cluster. Example 4.2. Sample DNS zone database for reverse records USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF 1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer. 2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications. 3 Provides reverse DNS resolution for the bootstrap machine. 
4 5 6 Provides reverse DNS resolution for the control plane machines. 7 8 Provides reverse DNS resolution for the compute machines. Note A PTR record is not required for the OpenShift Container Platform application wildcard. Additional resources Validating DNS resolution for user-provisioned infrastructure 4.4.6. Load balancing requirements for user-provisioned infrastructure Before you install OpenShift Container Platform, you must provision the API and application Ingress load balancing infrastructure. In production scenarios, you can deploy the API and application Ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. Configuring session persistence for a Kubernetes API server might cause performance issues from excess application traffic for your OpenShift Container Platform cluster and the Kubernetes API that runs inside the cluster. Configure the following ports on both the front and back of the load balancers: Table 4.7. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Tip If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 4.8. 
Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. 4.4.6.1. Example load balancer configuration for user-provisioned clusters This section provides an example API and application Ingress load balancer configuration that meets the load balancing requirements for user-provisioned clusters. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . Example 4.3. Sample API and application Ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s 1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 2 4 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete. 3 Port 22623 handles the machine config server traffic and points to the control plane machines. 
5 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 6 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. 4.5. Preparing the user-provisioned infrastructure Before you install OpenShift Container Platform on user-provisioned infrastructure, you must prepare the underlying infrastructure. This section provides details about the high-level steps required to set up your cluster infrastructure in preparation for an OpenShift Container Platform installation. This includes configuring IP networking and network connectivity for your cluster nodes, enabling the required ports through your firewall, and setting up the required DNS and load balancing infrastructure. After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements for a cluster with user-provisioned infrastructure section. Prerequisites You have reviewed the OpenShift Container Platform 4.x Tested Integrations page. You have reviewed the infrastructure requirements detailed in the Requirements for a cluster with user-provisioned infrastructure section. Procedure If you are using DHCP to provide the IP networking configuration to your cluster nodes, configure your DHCP service. Add persistent IP addresses for the nodes to your DHCP server configuration. In your configuration, match the MAC address of the relevant network interface to the intended IP address for each node. When you use DHCP to configure IP addressing for the cluster machines, the machines also obtain the DNS server information through DHCP. Define the persistent DNS server address that is used by the cluster nodes through your DHCP server configuration. Note If you are not using a DHCP service, you must provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. Define the hostnames of your cluster nodes in your DHCP server configuration. See the Setting the cluster node hostnames through DHCP section for details about hostname considerations. Note If you are not using a DHCP service, the cluster nodes obtain their hostname through a reverse DNS lookup. Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the Networking requirements for user-provisioned infrastructure section for details about the requirements. Configure your firewall to enable the ports required for the OpenShift Container Platform cluster components to communicate. See Networking requirements for user-provisioned infrastructure section for details about the ports that are required. 
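The exact commands for this step depend on the firewall product that sits between your machines; the port tables in the Networking requirements for user-provisioned infrastructure section are the authoritative list. Purely as an illustrative sketch, if the host enforcing the rules happens to use firewalld (an assumption, not a requirement of OpenShift Container Platform), opening a representative subset of the required ports might look like the following; adapt the port list, zones, and traffic direction to your own topology.
# Illustrative firewalld sketch: open a representative subset of the required ports
sudo firewall-cmd --permanent --add-port=6443/tcp                      # Kubernetes API server
sudo firewall-cmd --permanent --add-port=22623/tcp                     # Machine config server (internal traffic only)
sudo firewall-cmd --permanent --add-port=443/tcp --add-port=80/tcp     # Application ingress (HTTPS and HTTP)
sudo firewall-cmd --permanent --add-port=2379-2380/tcp                 # etcd server and peer ports
sudo firewall-cmd --permanent --add-port=10250-10259/tcp --add-port=30000-32767/tcp
sudo firewall-cmd --permanent --add-port=4789/udp --add-port=6081/udp  # VXLAN and Geneve
sudo firewall-cmd --reload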
Important By default, port 1936 is accessible for an OpenShift Container Platform cluster, because each control plane node needs access to this port. Avoid using the Ingress load balancer to expose this port, because doing so might result in the exposure of sensitive information, such as statistics and metrics, related to Ingress Controllers. Setup the required DNS infrastructure for your cluster. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines. Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. See the User-provisioned DNS requirements section for more information about the OpenShift Container Platform DNS requirements. Validate your DNS configuration. From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses in the responses correspond to the correct components. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names in the responses correspond to the correct components. See the Validating DNS resolution for user-provisioned infrastructure section for detailed DNS validation steps. Provision the required API and application ingress load balancing infrastructure. See the Load balancing requirements for user-provisioned infrastructure section for more information about the requirements. Note Some load balancing solutions require the DNS name resolution for the cluster nodes to be in place before the load balancing is initialized. Additional resources Requirements for a cluster with user-provisioned infrastructure Installing RHCOS and starting the OpenShift Container Platform bootstrap process Setting the cluster node hostnames through DHCP Advanced RHCOS installation configuration Networking requirements for user-provisioned infrastructure User-provisioned DNS requirements Validating DNS resolution for user-provisioned infrastructure Load balancing requirements for user-provisioned infrastructure 4.6. Validating DNS resolution for user-provisioned infrastructure You can validate your DNS configuration before installing OpenShift Container Platform on user-provisioned infrastructure. Important The validation steps detailed in this section must succeed before you install your cluster. Prerequisites You have configured the required DNS records for your user-provisioned infrastructure. Procedure From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses contained in the responses correspond to the correct components. Perform a lookup against the Kubernetes API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1 1 Replace <nameserver_ip> with the IP address of the nameserver, <cluster_name> with your cluster name, and <base_domain> with your base domain name. Example output api.ocp4.example.com. 604800 IN A 192.168.1.5 Perform a lookup against the Kubernetes internal API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain> Example output api-int.ocp4.example.com. 
604800 IN A 192.168.1.5 Test an example *.apps.<cluster_name>.<base_domain> DNS wildcard lookup. All of the application wildcard lookups must resolve to the IP address of the application ingress load balancer: USD dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain> Example output random.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Note In the example outputs, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. You can replace random with another wildcard value. For example, you can query the route to the OpenShift Container Platform console: USD dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain> Example output console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Run a lookup against the bootstrap DNS record name. Check that the result points to the IP address of the bootstrap node: USD dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain> Example output bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96 Use this method to perform lookups against the DNS record names for the control plane and compute nodes. Check that the results correspond to the IP addresses of each node. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names contained in the responses correspond to the correct components. Perform a reverse lookup against the IP address of the API load balancer. Check that the response includes the record names for the Kubernetes API and the Kubernetes internal API: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.5 Example output 5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2 1 Provides the record name for the Kubernetes internal API. 2 Provides the record name for the Kubernetes API. Note A PTR record is not required for the OpenShift Container Platform application wildcard. No validation step is needed for reverse DNS resolution against the IP address of the application ingress load balancer. Perform a reverse lookup against the IP address of the bootstrap node. Check that the result points to the DNS record name of the bootstrap node: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.96 Example output 96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com. Use this method to perform reverse lookups against the IP addresses for the control plane and compute nodes. Check that the results correspond to the DNS record names of each node. Additional resources User-provisioned DNS requirements Load balancing requirements for user-provisioned infrastructure 4.7. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . 
To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs. Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: $ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure that your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: $ cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: $ cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: $ eval "$(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : $ ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 . Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. If you install a cluster on infrastructure that you provision, you must provide the key to the installation program. Additional resources Verifying node health 4.8. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain the imageContentSources section from the output of the command to mirror the repository. Obtain the contents of the certificate for your mirror registry.
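One preparatory detail that is easy to get wrong when you fill in the install-config.yaml file in the following procedure: the <credentials> portion of the pull secret is the base64 encoding of the mirror registry user name and password joined by a colon. A minimal sketch, with placeholder credentials that you must replace with your own values, is shown below; confirm the expected format against your mirror registry documentation.
# Produce the base64 value used for the "auth" field of the pull secret
# (the user name and password here are placeholders)
echo -n '<username>:<password>' | base64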
Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Unless you use a registry that RHCOS trusts by default, such as docker.io , you must provide the contents of the certificate for your mirror repository in the additionalTrustBundle section. In most cases, you must provide the certificate for your mirror. You must include the imageContentSources section from the output of the command to mirror the repository. Important The ImageContentSourcePolicy file is generated as an output of oc mirror after the mirroring process is finished. The oc mirror command generates an ImageContentSourcePolicy file which contains the YAML needed to define ImageContentSourcePolicy . Copy the text from this file and paste it into your install-config.yaml file. You must run the 'oc mirror' command twice. The first time you run the oc mirror command, you get a full ImageContentSourcePolicy file. The second time you run the oc mirror command, you only get the difference between the first and second run. Because of this behavior, you must always keep a backup of these files in case you need to merge them into one complete ImageContentSourcePolicy file. Keeping a backup of these two output files ensures that you have a complete ImageContentSourcePolicy file. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. Additional resources Installation configuration parameters for bare metal 4.8.1. Sample install-config.yaml file for bare metal You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 additionalTrustBundle: | 17 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 18 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 
2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 6 Specifies whether to enable or disable simultaneous multithreading (SMT), or hyperthreading. By default, SMT is enabled to increase the performance of the cores in your machines. You can disable it by setting the parameter value to Disabled . If you disable SMT, you must disable it in all cluster machines; this includes both control plane and compute machines. Note Simultaneous multithreading (SMT) is enabled by default. If SMT is not enabled in your BIOS settings, the hyperthreading parameter has no effect. Important If you disable hyperthreading , whether in the BIOS or in the install-config.yaml file, ensure that your capacity planning accounts for the dramatically decreased machine performance. 4 You must set this value to 0 when you install OpenShift Container Platform on user-provisioned infrastructure. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. In user-provisioned installations, you must manually deploy the compute machines before you finish installing the cluster. Note If you are installing a three-node cluster, do not deploy any compute machines when you install the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 7 The number of control plane machines that you add to the cluster. Because the cluster uses these values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 8 The cluster name that you specified in your DNS records. 9 A block of IP addresses from which pod IP addresses are allocated. This block must not overlap with existing physical networks. These IP addresses are used for the pod network. If you need to access the pods from an external network, you must configure load balancers and routers to manage the traffic. Note Class E CIDR range is reserved for a future use. To use the Class E CIDR range, you must ensure your networking environment accepts the IP addresses within the Class E CIDR range. 10 The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 , then each node is assigned a /23 subnet out of the given cidr , which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic. 11 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 12 The IP address pool to use for service IP addresses. You can enter only one IP address pool. This block must not overlap with existing physical networks. If you need to access the services from an external network, configure load balancers and routers to manage the traffic. 13 You must set the platform to none . You cannot provide additional platform configuration variables for your platform. Important Clusters that are installed with the platform type none are unable to use some features, such as managing compute machines with the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that would normally support the feature. 
This parameter cannot be changed after installation. 14 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 15 For <local_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example, registry.example.com or registry.example.com:5000 . For <credentials> , specify the base64-encoded user name and password for your mirror registry. 16 The SSH public key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 17 Provide the contents of the certificate file that you used for your mirror registry. 18 Provide the imageContentSources section according to the output of the command that you used to mirror the repository. Important When using the oc adm release mirror command, use the output from the imageContentSources section. When using oc mirror command, use the repositoryDigestMirrors section of the ImageContentSourcePolicy file that results from running the command. ImageContentSourcePolicy is deprecated. For more information see Configuring image registry repository mirroring . Additional resources See Load balancing requirements for user-provisioned infrastructure for more information on the API and application ingress load balancing requirements. 4.8.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Note For bare metal installations, if you do not assign node IP addresses from the range that is specified in the networking.machineNetwork[].cidr field in the install-config.yaml file, you must include them in the proxy.noProxy field. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. 
For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 4.8.3. Configuring a three-node cluster Optionally, you can deploy zero compute machines in a bare metal cluster that consists of three control plane machines only. This provides smaller, more resource efficient clusters for cluster administrators and developers to use for testing, development, and production. In three-node OpenShift Container Platform environments, the three control plane machines are schedulable, which means that your application workloads are scheduled to run on them. Prerequisites You have an existing install-config.yaml file. 
Procedure Ensure that the number of compute replicas is set to 0 in your install-config.yaml file, as shown in the following compute stanza: compute: - name: worker platform: {} replicas: 0 Note You must set the value of the replicas parameter for the compute machines to 0 when you install OpenShift Container Platform on user-provisioned infrastructure, regardless of the number of compute machines you are deploying. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. This does not apply to user-provisioned installations, where the compute machines are deployed manually. For three-node cluster installations, follow these steps: If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. See the Load balancing requirements for user-provisioned infrastructure section for more information. When you create the Kubernetes manifest files in the following procedure, ensure that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml file is set to true . This enables your application workloads to run on the control plane nodes. Do not deploy any compute nodes when you create the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 4.9. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites You obtained the OpenShift Container Platform installation program. For a restricted network installation, these files are on your mirror host. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. 
Warning If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable. Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become compute nodes. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: Additional resources See Recovering from expired control plane certificates for more information about recovering kubelet certificates. 4.10. Configuring chrony time service You must set the time server and related settings used by the chrony time service ( chronyd ) by modifying the contents of the chrony.conf file and passing those contents to your nodes as a machine config. Procedure Create a Butane config including the contents of the chrony.conf file. For example, to configure chrony on worker nodes, create a 99-worker-chrony.bu file. Note See "Creating machine configs with Butane" for information about Butane. variant: openshift version: 4.15.0 metadata: name: 99-worker-chrony 1 labels: machineconfiguration.openshift.io/role: worker 2 storage: files: - path: /etc/chrony.conf mode: 0644 3 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst 4 driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony 1 2 On control plane nodes, substitute master for worker in both of these locations. 3 Specify an octal value mode for the mode field in the machine config file. After creating the file and applying the changes, the mode is converted to a decimal value. You can check the YAML file with the command oc get mc <mc-name> -o yaml . 4 Specify any valid, reachable time source, such as the one provided by your DHCP server. Note For all-machine to all-machine communication, the Network Time Protocol (NTP) on UDP is port 123 . If an external NTP time server is configured, you must open UDP port 123 . Use Butane to generate a MachineConfig object file, 99-worker-chrony.yaml , containing the configuration to be delivered to the nodes: USD butane 99-worker-chrony.bu -o 99-worker-chrony.yaml Apply the configurations in one of two ways: If the cluster is not running yet, after you generate manifest files, add the MachineConfig object file to the <installation_directory>/openshift directory, and then continue to create the cluster. If the cluster is already running, apply the file: USD oc apply -f ./99-worker-chrony.yaml 4.11. 
Installing RHCOS and starting the OpenShift Container Platform bootstrap process To install OpenShift Container Platform on bare metal infrastructure that you provision, you must install Red Hat Enterprise Linux CoreOS (RHCOS) on the machines. When you install RHCOS, you must provide the Ignition config file that was generated by the OpenShift Container Platform installation program for the type of machine you are installing. If you have configured suitable networking, DNS, and load balancing infrastructure, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS machines have rebooted. To install RHCOS on the machines, follow either the steps to use an ISO image or network PXE booting. Note The compute node deployment steps included in this installation document are RHCOS-specific. If you choose instead to deploy RHEL-based compute nodes, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Only RHEL 8 compute machines are supported. You can configure RHCOS during ISO and PXE installations by using the following methods: Kernel arguments: You can use kernel arguments to provide installation-specific information. For example, you can specify the locations of the RHCOS installation files that you uploaded to your HTTP server and the location of the Ignition config file for the type of node you are installing. For a PXE installation, you can use the APPEND parameter to pass the arguments to the kernel of the live installer. For an ISO installation, you can interrupt the live installation boot process to add the kernel arguments. In both installation cases, you can use special coreos.inst.* arguments to direct the live installer, as well as standard installation boot arguments for turning standard kernel services on or off. Ignition configs: OpenShift Container Platform Ignition config files ( *.ign ) are specific to the type of node you are installing. You pass the location of a bootstrap, control plane, or compute node Ignition config file during the RHCOS installation so that it takes effect on first boot. In special cases, you can create a separate, limited Ignition config to pass to the live system. That Ignition config could do a certain set of tasks, such as reporting success to a provisioning system after completing installation. This special Ignition config is consumed by the coreos-installer to be applied on first boot of the installed system. Do not provide the standard control plane and compute node Ignition configs to the live ISO directly. coreos-installer : You can boot the live ISO installer to a shell prompt, which allows you to prepare the permanent system in a variety of ways before first boot. In particular, you can run the coreos-installer command to identify various artifacts to include, work with disk partitions, and set up networking. In some cases, you can configure features on the live system and copy them to the installed system. Whether to use an ISO or PXE install depends on your situation. A PXE install requires an available DHCP service and more preparation, but can make the installation process more automated. An ISO install is a more manual process and can be inconvenient if you are setting up more than a few machines. 4.11.1. Installing RHCOS by using an ISO image You can use an ISO image to install RHCOS on the machines. Prerequisites You have created the Ignition config files for your cluster. 
You have configured suitable network, DNS, and load balancing infrastructure. You have an HTTP server that can be accessed from your computer, and from the machines that you create. You have reviewed the Advanced RHCOS installation configuration section for different ways to configure features, such as networking and disk partitioning. Procedure Obtain the SHA512 digest for each of your Ignition config files. For example, you can use the following on a system running Linux to get the SHA512 digest for your bootstrap.ign Ignition config file: USD sha512sum <installation_directory>/bootstrap.ign The digests are provided to the coreos-installer in a later step to validate the authenticity of the Ignition config files on the cluster nodes. Upload the bootstrap, control plane, and compute node Ignition config files that the installation program created to your HTTP server. Note the URLs of these files. Important You can add or change configuration settings in your Ignition configs before saving them to your HTTP server. If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. From the installation host, validate that the Ignition config files are available on the URLs. The following example gets the Ignition config file for the bootstrap node: USD curl -k http://<HTTP_server>/bootstrap.ign 1 Example output % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{"ignition":{"version":"3.2.0"},"passwd":{"users":[{"name":"core","sshAuthorizedKeys":["ssh-rsa... Replace bootstrap.ign with master.ign or worker.ign in the command to validate that the Ignition config files for the control plane and compute nodes are also available. Although it is possible to obtain the RHCOS images that are required for your preferred method of installing operating system instances from the RHCOS image mirror page, the recommended way to obtain the correct version of your RHCOS images is from the output of the openshift-install command: USD openshift-install coreos print-stream-json | grep '\.iso[^.]' Example output "location": "<url>/art/storage/releases/rhcos-4.15-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso", "location": "<url>/art/storage/releases/rhcos-4.15-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso", "location": "<url>/art/storage/releases/rhcos-4.15-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso", "location": "<url>/art/storage/releases/rhcos-4.15/<release>/x86_64/rhcos-<release>-live.x86_64.iso", Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. Use only ISO images for this procedure. RHCOS qcow2 images are not supported for this installation type. ISO file names resemble the following example: rhcos-<version>-live.<architecture>.iso Use the ISO to start the RHCOS installation. Use one of the following installation options: Burn the ISO image to a disk and boot it directly. Use ISO redirection by using a lights-out management (LOM) interface. Boot the RHCOS ISO image without specifying any options or interrupting the live boot sequence. Wait for the installer to boot into a shell prompt in the RHCOS live environment.
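Before you run coreos-installer in the next step, you can extract just the digest value from the earlier sha512sum output so that it is ready to paste into the --ignition-hash=sha512-<digest> option. The following is a minimal sketch, not part of the official procedure; it assumes that the cut utility is available on the installation host and uses the same placeholder path:
USD sha512sum <installation_directory>/bootstrap.ign | cut -d ' ' -f 1
The command prints only the digest field. Prepend sha512- to the printed value when you pass it to the --ignition-hash option.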
Note It is possible to interrupt the RHCOS installation boot process to add kernel arguments. However, for this ISO procedure you should use the coreos-installer command as outlined in the following steps, instead of adding kernel arguments. Run the coreos-installer command and specify the options that meet your installation requirements. At a minimum, you must specify the URL that points to the Ignition config file for the node type, and the device that you are installing to: USD sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2 1 1 You must run the coreos-installer command by using sudo , because the core user does not have the required root privileges to perform the installation. 2 The --ignition-hash option is required when the Ignition config file is obtained through an HTTP URL to validate the authenticity of the Ignition config file on the cluster node. <digest> is the Ignition config file SHA512 digest obtained in a preceding step. Note If you want to provide your Ignition config files through an HTTPS server that uses TLS, you can add the internal certificate authority (CA) to the system trust store before running coreos-installer . The following example initializes a bootstrap node installation to the /dev/sda device. The Ignition config file for the bootstrap node is obtained from an HTTP web server with the IP address 192.168.1.2: USD sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b Monitor the progress of the RHCOS installation on the console of the machine. Important Be sure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise. After RHCOS installs, you must reboot the system. During the system reboot, it applies the Ignition config file that you specified. Check the console output to verify that Ignition ran. Example command Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied Continue to create the other machines for your cluster. Important You must create the bootstrap and control plane machines at this time. If the control plane machines are not made schedulable, also create at least two compute machines before you install OpenShift Container Platform. If the required network, DNS, and load balancer infrastructure are in place, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS nodes have rebooted. Note RHCOS nodes do not include a default password for the core user. You can access the nodes by running ssh core@<node>.<cluster_name>.<base_domain> as a user with access to the SSH private key that is paired to the public key that you specified in your install_config.yaml file. OpenShift Container Platform 4 cluster nodes running RHCOS are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, when investigating installation issues, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on a target node, SSH access might be required for debugging or disaster recovery. 4.11.2. 
Installing RHCOS by using PXE or iPXE booting You can use PXE or iPXE booting to install RHCOS on the machines. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS, and load balancing infrastructure. You have configured suitable PXE or iPXE infrastructure. You have an HTTP server that can be accessed from your computer, and from the machines that you create. You have reviewed the Advanced RHCOS installation configuration section for different ways to configure features, such as networking and disk partitioning. Procedure Upload the bootstrap, control plane, and compute node Ignition config files that the installation program created to your HTTP server. Note the URLs of these files. Important You can add or change configuration settings in your Ignition configs before saving them to your HTTP server. If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. From the installation host, validate that the Ignition config files are available on the URLs. The following example gets the Ignition config file for the bootstrap node: USD curl -k http://<HTTP_server>/bootstrap.ign 1 Example output % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{"ignition":{"version":"3.2.0"},"passwd":{"users":[{"name":"core","sshAuthorizedKeys":["ssh-rsa... Replace bootstrap.ign with master.ign or worker.ign in the command to validate that the Ignition config files for the control plane and compute nodes are also available. Although it is possible to obtain the RHCOS kernel , initramfs , and rootfs files that are required for your preferred method of installing operating system instances from the RHCOS image mirror page, the recommended way to obtain the correct version of your RHCOS files is from the output of the openshift-install command: USD openshift-install coreos print-stream-json | grep -Eo '"https.*(kernel-|initramfs.|rootfs.)\w+(\.img)?"' Example output "<url>/art/storage/releases/rhcos-4.15-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64" "<url>/art/storage/releases/rhcos-4.15-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img" "<url>/art/storage/releases/rhcos-4.15-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img" "<url>/art/storage/releases/rhcos-4.15-ppc64le/<release>/ppc64le/rhcos-<release>-live-kernel-ppc64le" "<url>/art/storage/releases/rhcos-4.15-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img" "<url>/art/storage/releases/rhcos-4.15-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img" "<url>/art/storage/releases/rhcos-4.15-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x" "<url>/art/storage/releases/rhcos-4.15-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img" "<url>/art/storage/releases/rhcos-4.15-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img" "<url>/art/storage/releases/rhcos-4.15/<release>/x86_64/rhcos-<release>-live-kernel-x86_64" "<url>/art/storage/releases/rhcos-4.15/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img" "<url>/art/storage/releases/rhcos-4.15/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img" Important The RHCOS artifacts might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install.
Only use the appropriate kernel , initramfs , and rootfs artifacts described below for this procedure. RHCOS QCOW2 images are not supported for this installation type. The file names contain the OpenShift Container Platform version number. They resemble the following examples: kernel : rhcos-<version>-live-kernel-<architecture> initramfs : rhcos-<version>-live-initramfs.<architecture>.img rootfs : rhcos-<version>-live-rootfs.<architecture>.img Upload the rootfs , kernel , and initramfs files to your HTTP server. Important If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. Configure the network boot infrastructure so that the machines boot from their local disks after RHCOS is installed on them. Configure PXE or iPXE installation for the RHCOS images and begin the installation. Modify one of the following example menu entries for your environment and verify that the image and Ignition files are properly accessible: For PXE ( x86_64 ): 1 1 Specify the location of the live kernel file that you uploaded to your HTTP server. The URL must be HTTP, TFTP, or FTP; HTTPS and NFS are not supported. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the locations of the RHCOS files that you uploaded to your HTTP server. The initrd parameter value is the location of the initramfs file, the coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the bootstrap Ignition config file. You can also add more kernel arguments to the APPEND line to configure networking or other boot options. Note This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the APPEND line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? and "Enabling the serial console for PXE and ISO installation" in the "Advanced RHCOS installation configuration" section. For iPXE ( x86_64 + aarch64 ): 1 Specify the locations of the RHCOS files that you uploaded to your HTTP server. The kernel parameter value is the location of the kernel file, the initrd=main argument is needed for booting on UEFI systems, the coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the bootstrap Ignition config file. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the location of the initramfs file that you uploaded to your HTTP server. Note This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the kernel line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? and "Enabling the serial console for PXE and ISO installation" in the "Advanced RHCOS installation configuration" section. 
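For reference, the PXE and iPXE menu entries that the preceding callouts describe typically resemble the following sketches. They are illustrative only: the HTTP server address, the file names, and the coreos.inst.install_dev device are placeholders that you must replace with the values for your environment.
Example PXE menu entry (x86_64):
DEFAULT pxeboot
TIMEOUT 20
PROMPT 0
LABEL pxeboot
    KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture>
    APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign
Example iPXE script (x86_64 + aarch64):
kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign
initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img
boot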
Note To network boot the CoreOS kernel on aarch64 architecture, you need to use a version of iPXE build with the IMAGE_GZIP option enabled. See IMAGE_GZIP option in iPXE . For PXE (with UEFI and Grub as second stage) on aarch64 : 1 Specify the locations of the RHCOS files that you uploaded to your HTTP/TFTP server. The kernel parameter value is the location of the kernel file on your TFTP server. The coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the bootstrap Ignition config file on your HTTP Server. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the location of the initramfs file that you uploaded to your TFTP server. Monitor the progress of the RHCOS installation on the console of the machine. Important Be sure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise. After RHCOS installs, the system reboots. During reboot, the system applies the Ignition config file that you specified. Check the console output to verify that Ignition ran. Example command Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied Continue to create the machines for your cluster. Important You must create the bootstrap and control plane machines at this time. If the control plane machines are not made schedulable, also create at least two compute machines before you install the cluster. If the required network, DNS, and load balancer infrastructure are in place, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS nodes have rebooted. Note RHCOS nodes do not include a default password for the core user. You can access the nodes by running ssh core@<node>.<cluster_name>.<base_domain> as a user with access to the SSH private key that is paired to the public key that you specified in your install_config.yaml file. OpenShift Container Platform 4 cluster nodes running RHCOS are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, when investigating installation issues, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on a target node, SSH access might be required for debugging or disaster recovery. 4.11.3. Advanced RHCOS installation configuration A key benefit for manually provisioning the Red Hat Enterprise Linux CoreOS (RHCOS) nodes for OpenShift Container Platform is to be able to do configuration that is not available through default OpenShift Container Platform installation methods. This section describes some of the configurations that you can do using techniques that include: Passing kernel arguments to the live installer Running coreos-installer manually from the live system Customizing a live ISO or PXE boot image The advanced configuration topics for manual Red Hat Enterprise Linux CoreOS (RHCOS) installations detailed in this section relate to disk partitioning, networking, and using Ignition configs in different ways. 4.11.3.1. Using advanced networking options for PXE and ISO installations Networking for OpenShift Container Platform nodes uses DHCP by default to gather all necessary configuration settings. 
To set up static IP addresses or configure special settings, such as bonding, you can do one of the following: Pass special kernel parameters when you boot the live installer. Use a machine config to copy networking files to the installed system. Configure networking from a live installer shell prompt, then copy those settings to the installed system so that they take effect when the installed system first boots. To configure a PXE or iPXE installation, use one of the following options: See the "Advanced RHCOS installation reference" tables. Use a machine config to copy networking files to the installed system. To configure an ISO installation, use the following procedure. Procedure Boot the ISO installer. From the live system shell prompt, configure networking for the live system using available RHEL tools, such as nmcli or nmtui . Run the coreos-installer command to install the system, adding the --copy-network option to copy networking configuration. For example: USD sudo coreos-installer install --copy-network \ --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number> Important The --copy-network option only copies networking configuration found under /etc/NetworkManager/system-connections . In particular, it does not copy the system hostname. Reboot into the installed system. Additional resources See Getting started with nmcli and Getting started with nmtui in the RHEL 8 documentation for more information about the nmcli and nmtui tools. 4.11.3.2. Disk partitioning Disk partitions are created on OpenShift Container Platform cluster nodes during the Red Hat Enterprise Linux CoreOS (RHCOS) installation. Each RHCOS node of a particular architecture uses the same partition layout, unless you override the default partitioning configuration. During the RHCOS installation, the size of the root file system is increased to use any remaining available space on the target device. Important The use of a custom partition scheme on your node might result in OpenShift Container Platform not monitoring or alerting on some node partitions. If you override the default partitioning, see Understanding OpenShift File System Monitoring (eviction conditions) for more information about how OpenShift Container Platform monitors your host file systems. OpenShift Container Platform monitors the following two filesystem identifiers: nodefs , which is the filesystem that contains /var/lib/kubelet imagefs , which is the filesystem that contains /var/lib/containers For the default partition scheme, nodefs and imagefs monitor the same root filesystem, / . To override the default partitioning when installing RHCOS on an OpenShift Container Platform cluster node, you must create separate partitions. Consider a situation where you want to add a separate storage partition for your containers and container images. For example, by mounting /var/lib/containers in a separate partition, the kubelet separately monitors /var/lib/containers as the imagefs directory and the root file system as the nodefs directory. Important If you have resized your disk size to host a larger file system, consider creating a separate /var/lib/containers partition. Consider resizing a disk that has an xfs format to reduce CPU time issues caused by a high number of allocation groups. 4.11.3.2.1. Creating a separate /var partition In general, you should use the default disk partitioning that is created during the RHCOS installation. 
However, there are cases where you might want to create a separate partition for a directory that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var directory or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. Important For disk sizes larger than 100GB, and especially larger than 1TB, create a separate /var partition. Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. The use of a separate partition for the /var directory or a subdirectory of /var also prevents data growth in the partitioned directory from filling up the root file system. The following procedure sets up a separate /var partition by adding a machine config manifest that is wrapped into the Ignition config file for a node type during the preparation phase of an installation. Procedure On your installation host, change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD openshift-install create manifests --dir <installation_directory> Create a Butane config that configures the additional partition. For example, name the file USDHOME/clusterconfig/98-var-partition.bu , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition: variant: openshift version: 4.15.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true 1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum offset value of 25000 mebibytes is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no offset value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for compute nodes, if the different instance types do not have the same device name. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. 
For example, run the following command: USD butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml Create the Ignition config files: USD openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The files in the <installation_directory>/manifests and <installation_directory>/openshift directories are wrapped into the Ignition config files, including the file that contains the 98-var-partition custom MachineConfig object. Next steps You can apply the custom disk partitioning by referencing the Ignition config files during the RHCOS installations. 4.11.3.2.2. Retaining existing partitions For an ISO installation, you can add options to the coreos-installer command that cause the installer to maintain one or more existing partitions. For a PXE installation, you can add coreos.inst.* options to the APPEND parameter to preserve partitions. Saved partitions might be data partitions from an existing OpenShift Container Platform system. You can identify the disk partitions you want to keep either by partition label or by number. Note If you save existing partitions, and those partitions do not leave enough space for RHCOS, the installation will fail without damaging the saved partitions. Retaining existing partitions during an ISO installation This example preserves any partition in which the partition label begins with data ( data* ): # coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign \ --save-partlabel 'data*' /dev/disk/by-id/scsi-<serial_number> The following example illustrates running the coreos-installer in a way that preserves the sixth (6) partition on the disk: # coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign \ --save-partindex 6 /dev/disk/by-id/scsi-<serial_number> This example preserves partitions 5 and higher: # coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 5- /dev/disk/by-id/scsi-<serial_number> In the examples where partition saving is used, coreos-installer preserves the specified partitions and does not overwrite them during the installation. Retaining existing partitions during a PXE installation This APPEND option preserves any partition in which the partition label begins with 'data' ('data*'): coreos.inst.save_partlabel=data* This APPEND option preserves partitions 5 and higher: coreos.inst.save_partindex=5- This APPEND option preserves partition 6: coreos.inst.save_partindex=6 4.11.3.3. Identifying Ignition configs When doing an RHCOS manual installation, there are two types of Ignition configs that you can provide, with different reasons for providing each one: Permanent install Ignition config : Every manual RHCOS installation needs to pass one of the Ignition config files generated by openshift-installer , such as bootstrap.ign , master.ign , and worker.ign , to carry out the installation. Important It is not recommended to modify these Ignition config files directly. You can update the manifest files that are wrapped into the Ignition config files, as outlined in examples in the preceding sections. For PXE installations, you pass the Ignition configs on the APPEND line using the coreos.inst.ignition_url= option. For ISO installations, after the ISO boots to the shell prompt, you identify the Ignition config on the coreos-installer command line with the --ignition-url= option.
In both cases, only HTTP and HTTPS protocols are supported. Live install Ignition config : This type can be created by using the coreos-installer customize subcommand and its various options. With this method, the Ignition config passes to the live install medium, runs immediately upon booting, and performs setup tasks before or after the RHCOS system installs to disk. This method should only be used for performing tasks that must be done once and not applied again later, such as with advanced partitioning that cannot be done using a machine config. For PXE or ISO boots, you can create the Ignition config and APPEND the ignition.config.url= option to identify the location of the Ignition config. You also need to append ignition.firstboot ignition.platform.id=metal , or the ignition.config.url option will be ignored. 4.11.3.4. Default console configuration Red Hat Enterprise Linux CoreOS (RHCOS) nodes installed from an OpenShift Container Platform 4.15 boot image use a default console that is meant to accommodate most virtualized and bare metal setups. Different cloud and virtualization platforms may use different default settings depending on the chosen architecture. Bare metal installations use the kernel default settings, which typically means that the graphical console is the primary console and the serial console is disabled. The default consoles may not match your specific hardware configuration, or you might have specific needs that require you to adjust the default console. For example: You want to access the emergency shell on the console for debugging purposes. Your cloud platform does not provide interactive access to the graphical console, but provides a serial console. You want to enable multiple consoles. Console configuration is inherited from the boot image. This means that new nodes in existing clusters are unaffected by changes to the default console. You can configure the console for bare metal installations in the following ways: Using coreos-installer manually on the command line. Using the coreos-installer iso customize or coreos-installer pxe customize subcommands with the --dest-console option to create a custom image that automates the process. Note For advanced customization, perform console configuration using the coreos-installer iso or coreos-installer pxe subcommands, and not kernel arguments. 4.11.3.5. Enabling the serial console for PXE and ISO installations By default, the Red Hat Enterprise Linux CoreOS (RHCOS) serial console is disabled and all output is written to the graphical console. You can enable the serial console for an ISO installation and reconfigure the bootloader so that output is sent to both the serial console and the graphical console. Procedure Boot the ISO installer. Run the coreos-installer command to install the system, adding the --console option once to specify the graphical console, and a second time to specify the serial console: USD coreos-installer install \ --console=tty0 \ 1 --console=ttyS0,<options> \ 2 --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number> 1 The desired secondary console. In this case, the graphical console. Omitting this option will disable the graphical console. 2 The desired primary console. In this case, the serial console. The options field defines the baud rate and other settings. A common value for this field is 115200n8 . If no options are provided, the default kernel value of 9600n8 is used. For more information on the format of this option, see the Linux kernel serial console documentation.
Reboot into the installed system. Note A similar outcome can be obtained by using the coreos-installer install --append-karg option, and specifying the console with console= . However, this will only set the console for the kernel and not the bootloader. To configure a PXE installation, make sure the coreos.inst.install_dev kernel command line option is omitted, and use the shell prompt to run coreos-installer manually using the above ISO installation procedure. 4.11.3.6. Customizing a live RHCOS ISO or PXE install You can use the live ISO image or PXE environment to install RHCOS by injecting an Ignition config file directly into the image. This creates a customized image that you can use to provision your system. For an ISO image, the mechanism to do this is the coreos-installer iso customize subcommand, which modifies the .iso file with your configuration. Similarly, the mechanism for a PXE environment is the coreos-installer pxe customize subcommand, which creates a new initramfs file that includes your customizations. The customize subcommand is a general purpose tool that can embed other types of customizations as well. The following tasks are examples of some of the more common customizations: Inject custom CA certificates for when corporate security policy requires their use. Configure network settings without the need for kernel arguments. Embed arbitrary preinstall and post-install scripts or binaries. 4.11.3.7. Customizing a live RHCOS ISO image You can customize a live RHCOS ISO image directly with the coreos-installer iso customize subcommand. When you boot the ISO image, the customizations are applied automatically. You can use this feature to configure the ISO image to automatically install RHCOS. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS ISO image from the RHCOS image mirror page and the Ignition config file, and then run the following command to inject the Ignition config directly into the ISO image: USD coreos-installer iso customize rhcos-<version>-live.x86_64.iso \ --dest-ignition bootstrap.ign \ 1 --dest-device /dev/disk/by-id/scsi-<serial_number> 2 1 The Ignition config file that is generated from the openshift-installer installation program. 2 When you specify this option, the ISO image automatically runs an installation. Otherwise, the image remains configured for installation, but does not install automatically unless you specify the coreos.inst.install_dev kernel argument. Optional: To remove the ISO image customizations and return the image to its pristine state, run: USD coreos-installer iso reset rhcos-<version>-live.x86_64.iso You can now re-customize the live ISO image or use it in its pristine state. Applying your customizations affects every subsequent boot of RHCOS. 4.11.3.7.1. Modifying a live install ISO image to enable the serial console On clusters installed with OpenShift Container Platform 4.12 and above, the serial console is disabled by default and all output is written to the graphical console. You can enable the serial console with the following procedure. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. 
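For example, on an x86_64 installation host, you might download the binary and make it executable as follows. This is a sketch only; the mirror URL and file name are assumptions, so confirm them against the coreos-installer image mirror page:
USD curl -L -o coreos-installer https://mirror.openshift.com/pub/openshift-v4/clients/coreos-installer/latest/coreos-installer
USD chmod +x coreos-installer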
Retrieve the RHCOS ISO image from the RHCOS image mirror page and run the following command to customize the ISO image to enable the serial console to receive output: USD coreos-installer iso customize rhcos-<version>-live.x86_64.iso \ --dest-ignition <path> \ 1 --dest-console tty0 \ 2 --dest-console ttyS0,<options> \ 3 --dest-device /dev/disk/by-id/scsi-<serial_number> 4 1 The location of the Ignition config to install. 2 The desired secondary console. In this case, the graphical console. Omitting this option will disable the graphical console. 3 The desired primary console. In this case, the serial console. The options field defines the baud rate and other settings. A common value for this field is 115200n8 . If no options are provided, the default kernel value of 9600n8 is used. For more information on the format of this option, see the Linux kernel serial console documentation. 4 The specified disk to install to. If you omit this option, the ISO image automatically runs the installation program which will fail unless you also specify the coreos.inst.install_dev kernel argument. Note The --dest-console option affects the installed system and not the live ISO system. To modify the console for a live ISO system, use the --live-karg-append option and specify the console with console= . Your customizations are applied and affect every subsequent boot of the ISO image. Optional: To remove the ISO image customizations and return the image to its original state, run the following command: USD coreos-installer iso reset rhcos-<version>-live.x86_64.iso You can now recustomize the live ISO image or use it in its original state. 4.11.3.7.2. Modifying a live install ISO image to use a custom certificate authority You can provide certificate authority (CA) certificates to Ignition with the --ignition-ca flag of the customize subcommand. You can use the CA certificates during both the installation boot and when provisioning the installed system. Note Custom CA certificates affect how Ignition fetches remote resources but they do not affect the certificates installed onto the system. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS ISO image from the RHCOS image mirror page and run the following command to customize the ISO image for use with a custom CA: USD coreos-installer iso customize rhcos-<version>-live.x86_64.iso --ignition-ca cert.pem Important The coreos.inst.ignition_url kernel parameter does not work with the --ignition-ca flag. You must use the --dest-ignition flag to create a customized image for each cluster. Applying your custom CA certificate affects every subsequent boot of RHCOS. 4.11.3.7.3. Modifying a live install ISO image with customized network settings You can embed a NetworkManager keyfile into the live ISO image and pass it through to the installed system with the --network-keyfile flag of the customize subcommand. Warning When creating a connection profile, you must use a .nmconnection filename extension in the filename of the connection profile. If you do not use a .nmconnection filename extension, the cluster will apply the connection profile to the live environment, but it will not apply the configuration when the cluster first boots up the nodes, resulting in a setup that does not work. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Create a connection profile for a bonded interface. 
For example, create the bond0.nmconnection file in your local directory with the following content: [connection] id=bond0 type=bond interface-name=bond0 multi-connect=1 [bond] miimon=100 mode=active-backup [ipv4] method=auto [ipv6] method=auto Create a connection profile for a secondary interface to add to the bond. For example, create the bond0-proxy-em1.nmconnection file in your local directory with the following content: [connection] id=em1 type=ethernet interface-name=em1 master=bond0 multi-connect=1 slave-type=bond Create a connection profile for a secondary interface to add to the bond. For example, create the bond0-proxy-em2.nmconnection file in your local directory with the following content: [connection] id=em2 type=ethernet interface-name=em2 master=bond0 multi-connect=1 slave-type=bond Retrieve the RHCOS ISO image from the RHCOS image mirror page and run the following command to customize the ISO image with your configured networking: USD coreos-installer iso customize rhcos-<version>-live.x86_64.iso \ --network-keyfile bond0.nmconnection \ --network-keyfile bond0-proxy-em1.nmconnection \ --network-keyfile bond0-proxy-em2.nmconnection Network settings are applied to the live system and are carried over to the destination system. 4.11.3.8. Customizing a live RHCOS PXE environment You can customize a live RHCOS PXE environment directly with the coreos-installer pxe customize subcommand. When you boot the PXE environment, the customizations are applied automatically. You can use this feature to configure the PXE environment to automatically install RHCOS. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS kernel , initramfs and rootfs files from the RHCOS image mirror page and the Ignition config file, and then run the following command to create a new initramfs file that contains the customizations from your Ignition config: USD coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img \ --dest-ignition bootstrap.ign \ 1 --dest-device /dev/disk/by-id/scsi-<serial_number> \ 2 -o rhcos-<version>-custom-initramfs.x86_64.img 3 1 The Ignition config file that is generated from openshift-installer . 2 When you specify this option, the PXE environment automatically runs an install. Otherwise, the image remains configured for installing, but does not do so automatically unless you specify the coreos.inst.install_dev kernel argument. 3 Use the customized initramfs file in your PXE configuration. Add the ignition.firstboot and ignition.platform.id=metal kernel arguments if they are not already present. Applying your customizations affects every subsequent boot of RHCOS. 4.11.3.8.1. Modifying a live install PXE environment to enable the serial console On clusters installed with OpenShift Container Platform 4.12 and above, the serial console is disabled by default and all output is written to the graphical console. You can enable the serial console with the following procedure. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. 
Retrieve the RHCOS kernel , initramfs and rootfs files from the RHCOS image mirror page and the Ignition config file, and then run the following command to create a new customized initramfs file that enables the serial console to receive output: USD coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img \ --dest-ignition <path> \ 1 --dest-console tty0 \ 2 --dest-console ttyS0,<options> \ 3 --dest-device /dev/disk/by-id/scsi-<serial_number> \ 4 -o rhcos-<version>-custom-initramfs.x86_64.img 5 1 The location of the Ignition config to install. 2 The desired secondary console. In this case, the graphical console. Omitting this option will disable the graphical console. 3 The desired primary console. In this case, the serial console. The options field defines the baud rate and other settings. A common value for this field is 115200n8 . If no options are provided, the default kernel value of 9600n8 is used. For more information on the format of this option, see the Linux kernel serial console documentation. 4 The specified disk to install to. If you omit this option, the PXE environment automatically runs the installer which will fail unless you also specify the coreos.inst.install_dev kernel argument. 5 Use the customized initramfs file in your PXE configuration. Add the ignition.firstboot and ignition.platform.id=metal kernel arguments if they are not already present. Your customizations are applied and affect every subsequent boot of the PXE environment. 4.11.3.8.2. Modifying a live install PXE environment to use a custom certificate authority You can provide certificate authority (CA) certificates to Ignition with the --ignition-ca flag of the customize subcommand. You can use the CA certificates during both the installation boot and when provisioning the installed system. Note Custom CA certificates affect how Ignition fetches remote resources but they do not affect the certificates installed onto the system. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS kernel , initramfs and rootfs files from the RHCOS image mirror page and run the following command to create a new customized initramfs file for use with a custom CA: USD coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img \ --ignition-ca cert.pem \ -o rhcos-<version>-custom-initramfs.x86_64.img Use the customized initramfs file in your PXE configuration. Add the ignition.firstboot and ignition.platform.id=metal kernel arguments if they are not already present. Important The coreos.inst.ignition_url kernel parameter does not work with the --ignition-ca flag. You must use the --dest-ignition flag to create a customized image for each cluster. Applying your custom CA certificate affects every subsequent boot of RHCOS. 4.11.3.8.3. Modifying a live install PXE environment with customized network settings You can embed a NetworkManager keyfile into the live PXE environment and pass it through to the installed system with the --network-keyfile flag of the customize subcommand. Warning When creating a connection profile, you must use a .nmconnection filename extension in the filename of the connection profile. If you do not use a .nmconnection filename extension, the cluster will apply the connection profile to the live environment, but it will not apply the configuration when the cluster first boots up the nodes, resulting in a setup that does not work. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. 
Create a connection profile for a bonded interface. For example, create the bond0.nmconnection file in your local directory with the following content: [connection] id=bond0 type=bond interface-name=bond0 multi-connect=1 [bond] miimon=100 mode=active-backup [ipv4] method=auto [ipv6] method=auto Create a connection profile for a secondary interface to add to the bond. For example, create the bond0-proxy-em1.nmconnection file in your local directory with the following content: [connection] id=em1 type=ethernet interface-name=em1 master=bond0 multi-connect=1 slave-type=bond Create a connection profile for a secondary interface to add to the bond. For example, create the bond0-proxy-em2.nmconnection file in your local directory with the following content: [connection] id=em2 type=ethernet interface-name=em2 master=bond0 multi-connect=1 slave-type=bond Retrieve the RHCOS kernel , initramfs and rootfs files from the RHCOS image mirror page and run the following command to create a new customized initramfs file that contains your configured networking: USD coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img \ --network-keyfile bond0.nmconnection \ --network-keyfile bond0-proxy-em1.nmconnection \ --network-keyfile bond0-proxy-em2.nmconnection \ -o rhcos-<version>-custom-initramfs.x86_64.img Use the customized initramfs file in your PXE configuration. Add the ignition.firstboot and ignition.platform.id=metal kernel arguments if they are not already present. Network settings are applied to the live system and are carried over to the destination system. 4.11.3.9. Advanced RHCOS installation reference This section illustrates the networking configuration and other advanced options that allow you to modify the Red Hat Enterprise Linux CoreOS (RHCOS) manual installation process. The following tables describe the kernel arguments and command-line options you can use with the RHCOS live installer and the coreos-installer command. 4.11.3.9.1. Networking and bonding options for ISO installations If you install RHCOS from an ISO image, you can add kernel arguments manually when you boot the image to configure networking for a node. If no networking arguments are specified, DHCP is activated in the initramfs when RHCOS detects that networking is required to fetch the Ignition config file. Important When adding networking arguments manually, you must also add the rd.neednet=1 kernel argument to bring the network up in the initramfs. The following information provides examples for configuring networking and bonding on your RHCOS nodes for ISO installations. The examples describe how to use the ip= , nameserver= , and bond= kernel arguments. Note Ordering is important when adding the kernel arguments: ip= , nameserver= , and then bond= . The networking options are passed to the dracut tool during system boot. For more information about the networking options supported by dracut , see the dracut.cmdline manual page . The following examples are the networking options for ISO installation. Configuring DHCP or static IP addresses To configure an IP address, either use DHCP ( ip=dhcp ) or set an individual static IP address ( ip=<host_ip> ). If setting a static IP, you must then identify the DNS server IP address ( nameserver=<dns_ip> ) on each node. The following example sets: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The hostname to core0.example.com The DNS server address to 4.4.4.41 The auto-configuration value to none . 
No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41 Note When you use DHCP to configure IP addressing for the RHCOS machines, the machines also obtain the DNS server information through DHCP. For DHCP-based deployments, you can define the DNS server address that is used by the RHCOS nodes through your DHCP server configuration. Configuring an IP address without a static hostname You can configure an IP address without assigning a static hostname. If a static hostname is not set by the user, it will be picked up and automatically set by a reverse DNS lookup. To configure an IP address without a static hostname refer to the following example: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41 Specifying multiple network interfaces You can specify multiple network interfaces by setting multiple ip= entries. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring default gateway and route Optional: You can configure routes to additional networks by setting an rd.route= value. Note When you configure one or multiple networks, one default gateway is required. If the additional network gateway is different from the primary network gateway, the default gateway must be the primary network gateway. Run the following command to configure the default gateway: ip=::10.10.10.254:::: Enter the following command to configure the route for the additional network: rd.route=20.20.20.0/24:20.20.20.254:enp2s0 Disabling DHCP on a single interface You can disable DHCP on a single interface, such as when there are two or more network interfaces and only one interface is being used. In the example, the enp1s0 interface has a static networking configuration and DHCP is disabled for enp2s0 , which is not used: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none Combining DHCP and static IP configurations You can combine DHCP and static IP configurations on systems with multiple network interfaces, for example: ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring VLANs on individual interfaces Optional: You can configure VLANs on individual interfaces by using the vlan= parameter. To configure a VLAN on a network interface and use a static IP address, run the following command: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0 To configure a VLAN on a network interface and to use DHCP, run the following command: ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0 Providing multiple DNS servers You can provide multiple DNS servers by adding a nameserver= entry for each server, for example: nameserver=1.1.1.1 nameserver=8.8.8.8 Bonding multiple network interfaces to a single interface Optional: You can bond multiple network interfaces to a single interface by using the bond= option. 
Refer to the following examples: The syntax for configuring a bonded interface is: bond=<name>[:<network_interfaces>][:options] <name> is the bonding device name ( bond0 ), <network_interfaces> represents a comma-separated list of physical (ethernet) interfaces ( em1,em2 ), and options is a comma-separated list of bonding options. Enter modinfo bonding to see available options. When you create a bonded interface using bond= , you must specify how the IP address is assigned and other information for the bonded interface. To configure the bonded interface to use DHCP, set the bond's IP address to dhcp . For example: bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp To configure the bonded interface to use a static IP address, enter the specific IP address you want and related information. For example: bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none Bonding multiple SR-IOV network interfaces to a dual port NIC interface Important Support for Day 1 operations associated with enabling NIC partitioning for SR-IOV devices is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Optional: You can bond multiple SR-IOV network interfaces to a dual port NIC interface by using the bond= option. On each node, you must perform the following tasks: Create the SR-IOV virtual functions (VFs) following the guidance in Managing SR-IOV devices . Follow the procedure in the "Attaching SR-IOV networking devices to virtual machines" section. Create the bond, attach the desired VFs to the bond and set the bond link state up following the guidance in Configuring network bonding . Follow any of the described procedures to create the bond. The following examples illustrate the syntax you must use: The syntax for configuring a bonded interface is bond=<name>[:<network_interfaces>][:options] . <name> is the bonding device name ( bond0 ), <network_interfaces> represents the virtual functions (VFs) by their known name in the kernel and shown in the output of the ip link command( eno1f0 , eno2f0 ), and options is a comma-separated list of bonding options. Enter modinfo bonding to see available options. When you create a bonded interface using bond= , you must specify how the IP address is assigned and other information for the bonded interface. To configure the bonded interface to use DHCP, set the bond's IP address to dhcp . For example: bond=bond0:eno1f0,eno2f0:mode=active-backup ip=bond0:dhcp To configure the bonded interface to use a static IP address, enter the specific IP address you want and related information. For example: bond=bond0:eno1f0,eno2f0:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none Using network teaming Optional: You can use a network teaming as an alternative to bonding by using the team= parameter: The syntax for configuring a team interface is: team=name[:network_interfaces] name is the team device name ( team0 ) and network_interfaces represents a comma-separated list of physical (ethernet) interfaces ( em1, em2 ). 
Note Teaming is planned to be deprecated when RHCOS switches to an upcoming version of RHEL. For more information, see this Red Hat Knowledgebase Article . Use the following example to configure a network team: team=team0:em1,em2 ip=team0:dhcp 4.11.3.9.2. coreos-installer options for ISO and PXE installations You can install RHCOS by running coreos-installer install <options> <device> at the command prompt, after booting into the RHCOS live environment from an ISO image. The following table shows the subcommands, options, and arguments you can pass to the coreos-installer command. Table 4.9. coreos-installer subcommands, command-line options, and arguments coreos-installer install subcommand Subcommand Description USD coreos-installer install <options> <device> Embed an Ignition config in an ISO image. coreos-installer install subcommand options Option Description -u , --image-url <url> Specify the image URL manually. -f , --image-file <path> Specify a local image file manually. Used for debugging. -i, --ignition-file <path> Embed an Ignition config from a file. -I , --ignition-url <URL> Embed an Ignition config from a URL. --ignition-hash <digest> Digest type-value of the Ignition config. -p , --platform <name> Override the Ignition platform ID for the installed system. --console <spec> Set the kernel and bootloader console for the installed system. For more information about the format of <spec> , see the Linux kernel serial console documentation. --append-karg <arg>... Append a default kernel argument to the installed system. --delete-karg <arg>... Delete a default kernel argument from the installed system. -n , --copy-network Copy the network configuration from the install environment. Important The --copy-network option only copies networking configuration found under /etc/NetworkManager/system-connections . In particular, it does not copy the system hostname. --network-dir <path> For use with -n . Default is /etc/NetworkManager/system-connections/ . --save-partlabel <lx>.. Save partitions with this label glob. --save-partindex <id>... Save partitions with this number or range. --insecure Skip RHCOS image signature verification. --insecure-ignition Allow Ignition URL without HTTPS or hash. --architecture <name> Target CPU architecture. Valid values are x86_64 and aarch64 . --preserve-on-error Do not clear partition table on error. -h , --help Print help information. coreos-installer install subcommand argument Argument Description <device> The destination device. coreos-installer ISO subcommands Subcommand Description USD coreos-installer iso customize <options> <ISO_image> Customize a RHCOS live ISO image. coreos-installer iso reset <options> <ISO_image> Restore a RHCOS live ISO image to default settings. coreos-installer iso ignition remove <options> <ISO_image> Remove the embedded Ignition config from an ISO image. coreos-installer ISO customize subcommand options Option Description --dest-ignition <path> Merge the specified Ignition config file into a new configuration fragment for the destination system. --dest-console <spec> Specify the kernel and bootloader console for the destination system. --dest-device <path> Install and overwrite the specified destination device. --dest-karg-append <arg> Add a kernel argument to each boot of the destination system. --dest-karg-delete <arg> Delete a kernel argument from each boot of the destination system. --network-keyfile <path> Configure networking by using the specified NetworkManager keyfile for live and destination systems. 
--ignition-ca <path> Specify an additional TLS certificate authority to be trusted by Ignition. --pre-install <path> Run the specified script before installation. --post-install <path> Run the specified script after installation. --installer-config <path> Apply the specified installer configuration file. --live-ignition <path> Merge the specified Ignition config file into a new configuration fragment for the live environment. --live-karg-append <arg> Add a kernel argument to each boot of the live environment. --live-karg-delete <arg> Delete a kernel argument from each boot of the live environment. --live-karg-replace <k=o=n> Replace a kernel argument in each boot of the live environment, in the form key=old=new . -f , --force Overwrite an existing Ignition config. -o , --output <path> Write the ISO to a new output file. -h , --help Print help information. coreos-installer PXE subcommands Subcommand Description Note that not all of these options are accepted by all subcommands. coreos-installer pxe customize <options> <path> Customize a RHCOS live PXE boot config. coreos-installer pxe ignition wrap <options> Wrap an Ignition config in an image. coreos-installer pxe ignition unwrap <options> <image_name> Show the wrapped Ignition config in an image. coreos-installer PXE customize subcommand options Option Description Note that not all of these options are accepted by all subcommands. --dest-ignition <path> Merge the specified Ignition config file into a new configuration fragment for the destination system. --dest-console <spec> Specify the kernel and bootloader console for the destination system. --dest-device <path> Install and overwrite the specified destination device. --network-keyfile <path> Configure networking by using the specified NetworkManager keyfile for live and destination systems. --ignition-ca <path> Specify an additional TLS certificate authority to be trusted by Ignition. --pre-install <path> Run the specified script before installation. post-install <path> Run the specified script after installation. --installer-config <path> Apply the specified installer configuration file. --live-ignition <path> Merge the specified Ignition config file into a new configuration fragment for the live environment. -o, --output <path> Write the initramfs to a new output file. Note This option is required for PXE environments. -h , --help Print help information. 4.11.3.9.3. coreos.inst boot options for ISO or PXE installations You can automatically invoke coreos-installer options at boot time by passing coreos.inst boot arguments to the RHCOS live installer. These are provided in addition to the standard boot arguments. For ISO installations, the coreos.inst options can be added by interrupting the automatic boot at the bootloader menu. You can interrupt the automatic boot by pressing TAB while the RHEL CoreOS (Live) menu option is highlighted. For PXE or iPXE installations, the coreos.inst options must be added to the APPEND line before the RHCOS live installer is booted. The following table shows the RHCOS live installer coreos.inst boot options for ISO and PXE installations. Table 4.10. coreos.inst boot options Argument Description coreos.inst.install_dev Required. The block device on the system to install to. It is recommended to use the full path, such as /dev/sda , although sda is allowed. coreos.inst.ignition_url Optional: The URL of the Ignition config to embed into the installed system. If no URL is specified, no Ignition config is embedded. 
Only HTTP and HTTPS protocols are supported. coreos.inst.save_partlabel Optional: Comma-separated labels of partitions to preserve during the install. Glob-style wildcards are permitted. The specified partitions do not need to exist. coreos.inst.save_partindex Optional: Comma-separated indexes of partitions to preserve during the install. Ranges m-n are permitted, and either m or n can be omitted. The specified partitions do not need to exist. coreos.inst.insecure Optional: Permits the OS image that is specified by coreos.inst.image_url to be unsigned. coreos.inst.image_url Optional: Download and install the specified RHCOS image. This argument should not be used in production environments and is intended for debugging purposes only. While this argument can be used to install a version of RHCOS that does not match the live media, it is recommended that you instead use the media that matches the version you want to install. If you are using coreos.inst.image_url , you must also use coreos.inst.insecure . This is because the bare-metal media are not GPG-signed for OpenShift Container Platform. Only HTTP and HTTPS protocols are supported. coreos.inst.skip_reboot Optional: The system will not reboot after installing. After the install finishes, you will receive a prompt that allows you to inspect what is happening during installation. This argument should not be used in production environments and is intended for debugging purposes only. coreos.inst.platform_id Optional: The Ignition platform ID of the platform the RHCOS image is being installed on. Default is metal . This option determines whether or not to request an Ignition config from the cloud provider, such as VMware. For example: coreos.inst.platform_id=vmware . ignition.config.url Optional: The URL of the Ignition config for the live boot. For example, this can be used to customize how coreos-installer is invoked, or to run code before or after the installation. This is different from coreos.inst.ignition_url , which is the Ignition config for the installed system. 4.11.4. Enabling multipathing with kernel arguments on RHCOS RHCOS supports multipathing on the primary disk, allowing stronger resilience to hardware failure to achieve higher host availability. You can enable multipathing at installation time for nodes that were provisioned in OpenShift Container Platform 4.8 or later. While postinstallation support is available by activating multipathing via the machine config, enabling multipathing during installation is recommended. In setups where any I/O to non-optimized paths results in I/O system errors, you must enable multipathing at installation time. Important On IBM Z(R) and IBM(R) LinuxONE, you can enable multipathing only if you configured your cluster for it during installation. For more information, see "Installing RHCOS and starting the OpenShift Container Platform bootstrap process" in Installing a cluster with z/VM on IBM Z(R) and IBM(R) LinuxONE . The following procedure enables multipath at installation time and appends kernel arguments to the coreos-installer install command so that the installed system itself will use multipath beginning from the first boot. Note OpenShift Container Platform does not support enabling multipathing as a day-2 activity on nodes that have been upgraded from 4.6 or earlier. Prerequisites You have created the Ignition config files for your cluster. You have reviewed Installing RHCOS and starting the OpenShift Container Platform bootstrap process . 
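If you boot the live installer over PXE, the coreos.inst boot options described above can be combined with the rd.multipath=default argument that the following procedure discusses, so that the live environment brings up multipath before the installation program runs. The following APPEND line is a hypothetical sketch only; the HTTP server address, image version, Ignition file name, and target device are placeholders that you must replace with values from your own environment: APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.x86_64.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.x86_64.img coreos.inst.install_dev=/dev/mapper/mpatha coreos.inst.ignition_url=http://<HTTP_server>/worker.ign rd.multipath=default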
Procedure To enable multipath and start the multipathd daemon, run the following command on the installation host: USD mpathconf --enable && systemctl start multipathd.service Optional: If booting the PXE or ISO, you can instead enable multipath by adding rd.multipath=default from the kernel command line. Append the kernel arguments by invoking the coreos-installer program: If there is only one multipath device connected to the machine, it should be available at path /dev/mapper/mpatha . For example: USD coreos-installer install /dev/mapper/mpatha \ 1 --ignition-url=http://host/worker.ign \ --append-karg rd.multipath=default \ --append-karg root=/dev/disk/by-label/dm-mpath-root \ --append-karg rw 1 Indicates the path of the single multipathed device. If there are multiple multipath devices connected to the machine, or to be more explicit, instead of using /dev/mapper/mpatha , it is recommended to use the World Wide Name (WWN) symlink available in /dev/disk/by-id . For example: USD coreos-installer install /dev/disk/by-id/wwn-<wwn_ID> \ 1 --ignition-url=http://host/worker.ign \ --append-karg rd.multipath=default \ --append-karg root=/dev/disk/by-label/dm-mpath-root \ --append-karg rw 1 Indicates the WWN ID of the target multipathed device. For example, 0xx194e957fcedb4841 . This symlink can also be used as the coreos.inst.install_dev kernel argument when using special coreos.inst.* arguments to direct the live installer. For more information, see "Installing RHCOS and starting the OpenShift Container Platform bootstrap process". Reboot into the installed system. Check that the kernel arguments worked by going to one of the worker nodes and listing the kernel command line arguments (in /proc/cmdline on the host): USD oc debug node/ip-10-0-141-105.ec2.internal Example output Starting pod/ip-10-0-141-105ec2internal-debug ... To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline ... rd.multipath=default root=/dev/disk/by-label/dm-mpath-root ... sh-4.2# exit You should see the added kernel arguments. 4.11.4.1. Enabling multipathing on secondary disks RHCOS also supports multipathing on a secondary disk. Instead of kernel arguments, you use Ignition to enable multipathing for the secondary disk at installation time. Prerequisites You have read the section Disk partitioning . You have read Enabling multipathing with kernel arguments on RHCOS . You have installed the Butane utility. 
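Before you begin the procedure, you can optionally confirm that the Butane prerequisite is met. Assuming your Butane build provides the common --version flag, the following check prints the installed release; any release that accepts the variant: openshift version: 4.15.0 header shown in the example below is sufficient: USD butane --version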
Procedure Create a Butane config with information similar to the following: Example multipath-config.bu variant: openshift version: 4.15.0 systemd: units: - name: mpath-configure.service enabled: true contents: | [Unit] Description=Configure Multipath on Secondary Disk ConditionFirstBoot=true ConditionPathExists=!/etc/multipath.conf Before=multipathd.service 1 DefaultDependencies=no [Service] Type=oneshot ExecStart=/usr/sbin/mpathconf --enable 2 [Install] WantedBy=multi-user.target - name: mpath-var-lib-container.service enabled: true contents: | [Unit] Description=Set Up Multipath On /var/lib/containers ConditionFirstBoot=true 3 Requires=dev-mapper-mpatha.device After=dev-mapper-mpatha.device After=ostree-remount.service Before=kubelet.service DefaultDependencies=no [Service] 4 Type=oneshot ExecStart=/usr/sbin/mkfs.xfs -L containers -m reflink=1 /dev/mapper/mpatha ExecStart=/usr/bin/mkdir -p /var/lib/containers [Install] WantedBy=multi-user.target - name: var-lib-containers.mount enabled: true contents: | [Unit] Description=Mount /var/lib/containers After=mpath-var-lib-containers.service Before=kubelet.service 5 [Mount] 6 What=/dev/disk/by-label/dm-mpath-containers Where=/var/lib/containers Type=xfs [Install] WantedBy=multi-user.target 1 The configuration must be set before launching the multipath daemon. 2 Starts the mpathconf utility. 3 This field must be set to the value true . 4 Creates the filesystem and directory /var/lib/containers . 5 The device must be mounted before starting any nodes. 6 Mounts the device to the /var/lib/containers mount point. This location cannot be a symlink. Create the Ignition configuration by running the following command: USD butane --pretty --strict multipath-config.bu > multipath-config.ign Continue with the rest of the first boot RHCOS installation process. Important Do not add the rd.multipath or root kernel arguments on the command-line during installation unless the primary disk is also multipathed. 4.12. Waiting for the bootstrap process to complete The OpenShift Container Platform bootstrap process begins after the cluster nodes first boot into the persistent RHCOS environment that has been installed to disk. The configuration information provided through the Ignition config files is used to initialize the bootstrap process and install OpenShift Container Platform on the machines. You must wait for the bootstrap process to complete. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have obtained the installation program and generated the Ignition config files for your cluster. You installed RHCOS on your cluster machines and provided the Ignition config files that the OpenShift Container Platform installation program generated. Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... INFO API v1.28.5 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. 
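While the procedure below waits from the installation host, you can optionally follow the bootstrap progress on the bootstrap machine itself. The following invocation is a sketch that assumes the example bootstrap hostname used elsewhere in this document and that your SSH public key was provided through the Ignition config: USD ssh core@bootstrap.ocp4.example.com journalctl -b -f -u release-image.service -u bootkube.service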
After the bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the bootstrap machine itself. Additional resources See Monitoring installation progress for more information about monitoring the installation logs and retrieving diagnostic data if installation issues arise. 4.13. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 4.14. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.28.5 master-1 Ready master 63m v1.28.5 master-2 Ready master 64m v1.28.5 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. 
Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.28.5 master-1 Ready master 73m v1.28.5 master-2 Ready master 74m v1.28.5 worker-0 Ready worker 11m v1.28.5 worker-1 Ready worker 11m v1.28.5 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 4.15. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. 
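In addition to watching the full list of Operators as shown in the procedure below, you can query the overall installation status or inspect a single Operator at any point. For example: USD oc get clusterversion or, for one Operator that is not yet available: USD oc describe clusteroperator <operator_name> These queries are informational only and are not required steps of the procedure.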
Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m Configure the Operators that are not available. Additional resources See Gathering logs from a failed installation for details about gathering data in the event of a failed OpenShift Container Platform installation. See Troubleshooting Operator issues for steps to check Operator pod health across the cluster and gather Operator logs for diagnosis. 4.15.1. Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 4.15.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 4.15.2.1. 
Changing the image registry's management state To start the image registry, you must change the Image Registry Operator configuration's managementState from Removed to Managed . Procedure Change managementState Image Registry Operator configuration from Removed to Managed . For example: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"managementState":"Managed"}}' 4.15.2.2. Configuring registry storage for bare metal and other manual installations As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have a cluster that uses manually-provisioned Red Hat Enterprise Linux CoreOS (RHCOS) nodes, such as bare metal. You have provisioned persistent storage for your cluster, such as Red Hat OpenShift Data Foundation. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have 100Gi capacity. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.15 True False False 6h50m Ensure that your registry is set to managed to enable building and pushing of images. Run: Then, change the line to 4.15.2.3. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 4.15.2.4. Configuring block registry storage for bare metal To allow the image registry to use block storage types during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes, or block persistent volumes, are supported but not recommended for use with the image registry on production clusters. 
An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica. If you choose to use a block storage volume with the image registry, you must use a filesystem persistent volume claim (PVC). Procedure Enter the following command to set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy, and runs with only one ( 1 ) replica: USD oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4 1 A unique name that represents the PersistentVolumeClaim object. 2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . 3 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 4 The size of the persistent volume claim. Enter the following command to create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml -n openshift-image-registry Enter the following command to edit the registry configuration so that it references the correct PVC: USD oc edit config.imageregistry.operator.openshift.io -o yaml Example output storage: pvc: claim: 1 1 By creating a custom PVC, you can leave the claim field blank for the default automatic creation of an image-registry-storage PVC. 4.16. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration. 
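If you configured persistent or block storage for the image registry as described in the previous sections, you can optionally confirm that the registry pod is running with its new storage before you start the final checks. For example, reusing the label selector shown earlier: USD oc get pods -n openshift-image-registry -l docker-registry=default This check is informational and is not a required installation step.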
Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m Alternatively, the following command notifies you when all of the clusters are available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 30m0s for the cluster to initialize... The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods. 
To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. For an installation with Fibre Channel Protocol (FCP), additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Postinstallation machine configuration tasks documentation for more information. Register your cluster on the Cluster registration page. 4.17. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.15, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 4.18. steps Validating an installation . Customize your cluster . Configure image streams for the Cluster Samples Operator and the must-gather tool. Learn how to use Operator Lifecycle Manager (OLM) on restricted networks . If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores . If necessary, you can opt out of remote health reporting . If necessary, see Registering your disconnected cluster | [
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF",
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s",
"dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1",
"api.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>",
"api-int.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>",
"random.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>",
"console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>",
"bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.5",
"5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.96",
"96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 additionalTrustBundle: | 17 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 18 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"compute: - name: worker platform: {} replicas: 0",
"./openshift-install create manifests --dir <installation_directory> 1",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"variant: openshift version: 4.15.0 metadata: name: 99-worker-chrony 1 labels: machineconfiguration.openshift.io/role: worker 2 storage: files: - path: /etc/chrony.conf mode: 0644 3 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst 4 driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony",
"butane 99-worker-chrony.bu -o 99-worker-chrony.yaml",
"oc apply -f ./99-worker-chrony.yaml",
"sha512sum <installation_directory>/bootstrap.ign",
"curl -k http://<HTTP_server>/bootstrap.ign 1",
"% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa",
"openshift-install coreos print-stream-json | grep '\\.iso[^.]'",
"\"location\": \"<url>/art/storage/releases/rhcos-4.15-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.15-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.15-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.15/<release>/x86_64/rhcos-<release>-live.x86_64.iso\",",
"sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2",
"sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b",
"Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied",
"curl -k http://<HTTP_server>/bootstrap.ign 1",
"% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa",
"openshift-install coreos print-stream-json | grep -Eo '\"https.*(kernel-|initramfs.|rootfs.)\\w+(\\.img)?\"'",
"\"<url>/art/storage/releases/rhcos-4.15-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64\" \"<url>/art/storage/releases/rhcos-4.15-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.15-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.15-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le\" \"<url>/art/storage/releases/rhcos-4.15-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.15-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.15-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x\" \"<url>/art/storage/releases/rhcos-4.15-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.15-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.15/<release>/x86_64/rhcos-<release>-live-kernel-x86_64\" \"<url>/art/storage/releases/rhcos-4.15/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img\" \"<url>/art/storage/releases/rhcos-4.15/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img\"",
"DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 2 3",
"kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 1 2 initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img 3 boot",
"menuentry 'Install CoreOS' { linux rhcos-<version>-live-kernel-<architecture> coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 1 2 initrd rhcos-<version>-live-initramfs.<architecture>.img 3 }",
"Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied",
"sudo coreos-installer install --copy-network --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number>",
"openshift-install create manifests --dir <installation_directory>",
"variant: openshift version: 4.15.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true",
"butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml",
"openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partlabel 'data*' /dev/disk/by-id/scsi-<serial_number>",
"coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 6 /dev/disk/by-id/scsi-<serial_number>",
"coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 5- /dev/disk/by-id/scsi-<serial_number>",
"coreos.inst.save_partlabel=data*",
"coreos.inst.save_partindex=5-",
"coreos.inst.save_partindex=6",
"coreos-installer install --console=tty0 \\ 1 --console=ttyS0,<options> \\ 2 --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number>",
"coreos-installer iso customize rhcos-<version>-live.x86_64.iso --dest-ignition bootstrap.ign \\ 1 --dest-device /dev/disk/by-id/scsi-<serial_number> 2",
"coreos-installer iso reset rhcos-<version>-live.x86_64.iso",
"coreos-installer iso customize rhcos-<version>-live.x86_64.iso --dest-ignition <path> \\ 1 --dest-console tty0 \\ 2 --dest-console ttyS0,<options> \\ 3 --dest-device /dev/disk/by-id/scsi-<serial_number> 4",
"coreos-installer iso reset rhcos-<version>-live.x86_64.iso",
"coreos-installer iso customize rhcos-<version>-live.x86_64.iso --ignition-ca cert.pem",
"[connection] id=bond0 type=bond interface-name=bond0 multi-connect=1 [bond] miimon=100 mode=active-backup [ipv4] method=auto [ipv6] method=auto",
"[connection] id=em1 type=ethernet interface-name=em1 master=bond0 multi-connect=1 slave-type=bond",
"[connection] id=em2 type=ethernet interface-name=em2 master=bond0 multi-connect=1 slave-type=bond",
"coreos-installer iso customize rhcos-<version>-live.x86_64.iso --network-keyfile bond0.nmconnection --network-keyfile bond0-proxy-em1.nmconnection --network-keyfile bond0-proxy-em2.nmconnection",
"coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --dest-ignition bootstrap.ign \\ 1 --dest-device /dev/disk/by-id/scsi-<serial_number> \\ 2 -o rhcos-<version>-custom-initramfs.x86_64.img 3",
"coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --dest-ignition <path> \\ 1 --dest-console tty0 \\ 2 --dest-console ttyS0,<options> \\ 3 --dest-device /dev/disk/by-id/scsi-<serial_number> \\ 4 -o rhcos-<version>-custom-initramfs.x86_64.img 5",
"coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --ignition-ca cert.pem -o rhcos-<version>-custom-initramfs.x86_64.img",
"[connection] id=bond0 type=bond interface-name=bond0 multi-connect=1 [bond] miimon=100 mode=active-backup [ipv4] method=auto [ipv6] method=auto",
"[connection] id=em1 type=ethernet interface-name=em1 master=bond0 multi-connect=1 slave-type=bond",
"[connection] id=em2 type=ethernet interface-name=em2 master=bond0 multi-connect=1 slave-type=bond",
"coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --network-keyfile bond0.nmconnection --network-keyfile bond0-proxy-em1.nmconnection --network-keyfile bond0-proxy-em2.nmconnection -o rhcos-<version>-custom-initramfs.x86_64.img",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=::10.10.10.254::::",
"rd.route=20.20.20.0/24:20.20.20.254:enp2s0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none",
"ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0",
"ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0",
"nameserver=1.1.1.1 nameserver=8.8.8.8",
"bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp",
"bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"bond=bond0:eno1f0,eno2f0:mode=active-backup ip=bond0:dhcp",
"bond=bond0:eno1f0,eno2f0:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"team=team0:em1,em2 ip=team0:dhcp",
"mpathconf --enable && systemctl start multipathd.service",
"coreos-installer install /dev/mapper/mpatha \\ 1 --ignition-url=http://host/worker.ign --append-karg rd.multipath=default --append-karg root=/dev/disk/by-label/dm-mpath-root --append-karg rw",
"coreos-installer install /dev/disk/by-id/wwn-<wwn_ID> \\ 1 --ignition-url=http://host/worker.ign --append-karg rd.multipath=default --append-karg root=/dev/disk/by-label/dm-mpath-root --append-karg rw",
"oc debug node/ip-10-0-141-105.ec2.internal",
"Starting pod/ip-10-0-141-105ec2internal-debug To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline rd.multipath=default root=/dev/disk/by-label/dm-mpath-root sh-4.2# exit",
"variant: openshift version: 4.15.0 systemd: units: - name: mpath-configure.service enabled: true contents: | [Unit] Description=Configure Multipath on Secondary Disk ConditionFirstBoot=true ConditionPathExists=!/etc/multipath.conf Before=multipathd.service 1 DefaultDependencies=no [Service] Type=oneshot ExecStart=/usr/sbin/mpathconf --enable 2 [Install] WantedBy=multi-user.target - name: mpath-var-lib-container.service enabled: true contents: | [Unit] Description=Set Up Multipath On /var/lib/containers ConditionFirstBoot=true 3 Requires=dev-mapper-mpatha.device After=dev-mapper-mpatha.device After=ostree-remount.service Before=kubelet.service DefaultDependencies=no [Service] 4 Type=oneshot ExecStart=/usr/sbin/mkfs.xfs -L containers -m reflink=1 /dev/mapper/mpatha ExecStart=/usr/bin/mkdir -p /var/lib/containers [Install] WantedBy=multi-user.target - name: var-lib-containers.mount enabled: true contents: | [Unit] Description=Mount /var/lib/containers After=mpath-var-lib-containers.service Before=kubelet.service 5 [Mount] 6 What=/dev/disk/by-label/dm-mpath-containers Where=/var/lib/containers Type=xfs [Install] WantedBy=multi-user.target",
"butane --pretty --strict multipath-config.bu > multipath-config.ign",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.28.5 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.28.5 master-1 Ready master 63m v1.28.5 master-2 Ready master 64m v1.28.5",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.28.5 master-1 Ready master 73m v1.28.5 master-2 Ready master 74m v1.28.5 worker-0 Ready worker 11m v1.28.5 worker-1 Ready worker 11m v1.28.5",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"managementState\":\"Managed\"}}'",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resources found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim:",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.15 True False False 6h50m",
"oc edit configs.imageregistry/cluster",
"managementState: Removed",
"managementState: Managed",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_on_bare_metal/installing-restricted-networks-bare-metal |
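The dig and oc commands in the listing above validate one DNS record or approve one certificate signing request at a time. The following is a minimal shell sketch, not part of the original procedure, that loops over the same forward and reverse DNS checks; the nameserver address, cluster name, and load balancer and bootstrap IPs are assumptions taken from the example output above and must be replaced with your own values.

#!/bin/bash
# Minimal sketch: loop over the same forward and reverse DNS checks shown above.
# The nameserver address, cluster name, and load balancer/bootstrap IPs are taken
# from the example output in this procedure and are assumptions; replace them.
NAMESERVER=192.168.1.5
for host in \
    api.ocp4.example.com \
    api-int.ocp4.example.com \
    random.apps.ocp4.example.com \
    console-openshift-console.apps.ocp4.example.com \
    bootstrap.ocp4.example.com; do
  echo "== ${host}"
  dig +noall +answer @"${NAMESERVER}" "${host}"
done
# Reverse lookups for the API load balancer and bootstrap node addresses.
for addr in 192.168.1.5 192.168.1.96; do
  echo "== ${addr}"
  dig +noall +answer @"${NAMESERVER}" -x "${addr}"
done

Each lookup should return the records shown in the example output earlier in the listing; an empty answer indicates a missing or incorrect DNS record.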
Chapter 5. Ceph on-wire encryption | Chapter 5. Ceph on-wire encryption Starting with Red Hat Ceph Storage 4, you can enable encryption for all Ceph traffic over the network with the introduction of the messenger version 2 protocol. The secure mode setting for messenger v2 encrypts communication between Ceph daemons and Ceph clients, giving you end-to-end encryption. The second version of Ceph's on-wire protocol, msgr2 , includes several new features: A secure mode that encrypts all data moving through the network. Improved encapsulation of authentication payloads. Improvements to feature advertisement and negotiation. The Ceph daemons bind to multiple ports, allowing both the legacy, v1-compatible, and the new, v2-compatible, Ceph clients to connect to the same storage cluster. Ceph clients or other Ceph daemons connecting to the Ceph Monitor daemon try to use the v2 protocol first, if possible; if not, the legacy v1 protocol is used. By default, both messenger protocols, v1 and v2 , are enabled. The new v2 port is 3300, and the legacy v1 port is 6789, by default. The msgr2 protocol supports two connection modes: crc Provides strong initial authentication when a connection is established with cephx . Provides a crc32c integrity check to protect against bit flips. Does not provide protection against a malicious man-in-the-middle attack. Does not prevent an eavesdropper from seeing all post-authentication traffic. secure Provides strong initial authentication when a connection is established with cephx . Provides full encryption of all post-authentication traffic. Provides a cryptographic integrity check. The default mode is crc . When you plan the Red Hat Ceph Storage cluster, include the encryption overhead in the cluster CPU requirements. Important Using secure mode is currently supported by Ceph kernel clients, such as CephFS and krbd, on Red Hat Enterprise Linux 8.2. Using secure mode is supported by Ceph clients using librbd , such as OpenStack Nova, Glance, and Cinder. Address Changes For both versions of the messenger protocol to coexist in the same storage cluster, the address formatting has changed: The old address format was IP_ADDR : PORT / CLIENT_ID , for example, 1.2.3.4:5678/91011 . The new address format is PROTOCOL_VERSION : IP_ADDR : PORT / CLIENT_ID , for example, v2:1.2.3.4:5678/91011 , where PROTOCOL_VERSION can be either v1 or v2 . Because the Ceph daemons now bind to multiple ports, the daemons display multiple addresses instead of a single address. Here is an example from a dump of the monitor map: Both the mon_host configuration option and specifying addresses on the command line, using -m , support the new address format. Connection Phases There are four phases for making an encrypted connection: Banner On connection, both the client and the server send a banner. Currently, the Ceph banner is ceph 0 0n . Authentication Exchange All data, sent or received, is contained in a frame for the duration of the connection. The server decides if authentication has completed, and what the connection mode will be. The frame format is fixed, and can take three different forms depending on the authentication flags being used. Message Flow Handshake Exchange The peers identify each other and establish a session. The client sends the first message, and the server replies with the same message. The server can close connections if the client talks to the wrong daemon. For new sessions, the client and server proceed to exchange messages.
Client cookies are used to identify a session, and a client can use its cookie to reconnect to an existing session. Message Exchange The client and server exchange messages until the connection is closed. Additional Resources See the Red Hat Ceph Storage Data Security and Hardening Guide for details on enabling the msgr2 protocol. A short configuration sketch using the new address format follows the monitor map example below. | [
"epoch 1 fsid 50fcf227-be32-4bcb-8b41-34ca8370bd17 last_changed 2019-12-12 11:10:46.700821 created 2019-12-12 11:10:46.700821 min_mon_release 14 (nautilus) 0: [v2:10.0.0.10:3300/0,v1:10.0.0.10:6789/0] mon.a 1: [v2:10.0.0.11:3300/0,v1:10.0.0.11:6789/0] mon.b 2: [v2:10.0.0.12:3300/0,v1:10.0.0.12:6789/0] mon.c"
] | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/architecture_guide/ceph-on-wire-encryption_arch |
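The chapter above defers the procedure for enabling the msgr2 secure mode to the Data Security and Hardening Guide. The following is a minimal sketch only, showing how the new address format can be used on the command line and how the connection modes might be switched; the monitor IP address is a placeholder, and the ms_cluster_mode, ms_service_mode, and ms_client_mode option names are assumptions based on upstream Ceph Nautilus rather than settings quoted from this chapter.

# Contact a monitor using the v2 address format described under Address Changes (placeholder IP).
ceph -m v2:10.0.0.10:3300 health
# Inspect the monitor map; each monitor lists both a v2 (port 3300) and a v1 (port 6789) address.
ceph mon dump
# Assumed option names (upstream Nautilus): switch post-authentication traffic from the
# default crc mode to the encrypted secure mode for cluster, service, and client connections.
ceph config set global ms_cluster_mode secure
ceph config set global ms_service_mode secure
ceph config set global ms_client_mode secure

If you enable secure mode, remember to account for the additional CPU overhead noted above when sizing the cluster.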
Installing on-premise with Assisted Installer | Installing on-premise with Assisted Installer OpenShift Container Platform 4.18 Installing OpenShift Container Platform on-premise with the Assisted Installer Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_on-premise_with_assisted_installer/index |
Chapter 2. Customizing the dashboard | Chapter 2. Customizing the dashboard The Red Hat OpenStack Platform (RHOSP) dashboard (horizon) uses a default theme (RCUE), which is stored inside the horizon container. You can add your own theme to the container image and customize certain parameters to change the look and feel of the following dashboard elements: Logo Site colors Stylesheets HTML title Site branding link Help URL Note To ensure continued support for modified RHOSP container images, the resulting images must comply with the Red Hat Container Support Policy . 2.1. Obtaining the horizon container image To obtain a copy of the horizon container image, pull the image to either the undercloud or a separate client system that is running podman. Procedure Pull the horizon container image: You can use this image as a basis for a modified image. 2.2. Obtaining the RCUE theme The horizon container image uses the Red Hat branded RCUE theme by default. You can use this theme as a basis for your own theme and extract a copy from the container image. Procedure Create a directory for your theme: Start a container that executes a null loop. For example, run the following command: Copy the RCUE theme from the container to your local directory: Terminate the container: Result: You now have a local copy of the RCUE theme. 2.3. Creating your own theme based on RCUE To use RCUE as a basis, copy the entire RCUE theme directory rcue to a new location. This procedure uses mytheme as an example name. Procedure Copy the theme: To change the colors, graphics, fonts, and other elements of a theme, edit the files in mytheme. When you edit this theme, check for all instances of rcue , including paths, files, and directories, and ensure that you change them to the new mytheme name. 2.4. Creating a file to enable your theme and customize the dashboard To enable your theme in the dashboard container, you must create a file to override the AVAILABLE_THEMES parameter. This procedure uses mytheme as an example name. Procedure Create a new file called _12_mytheme_theme.py in the horizon-themes directory and add the following content: The 12 in the file name ensures this file is loaded after the RCUE file, which uses 11 , and overrides the AVAILABLE_THEMES parameter. Optional: You can also set custom parameters in the _12_mytheme_theme.py file. Use the following examples as a guide: SITE_BRANDING Sets the HTML title that appears at the top of the browser window. SITE_BRANDING_LINK Changes the hyperlink of the theme logo, which redirects to horizon:user_home by default. 2.5. Generating a modified horizon image When your custom theme is ready, you can create a new container image that uses your theme. This procedure uses mytheme as an example name. Procedure Use a Dockerfile to generate a new container image using the original horizon image as a basis, as shown in the following example: FROM registry.redhat.io/rhosp-rhel8/openstack-horizon MAINTAINER Acme LABEL name="rhosp-rhel8/openstack-horizon-mytheme" vendor="Acme" version="0" release="1" COPY mytheme /usr/share/openstack-dashboard/openstack_dashboard/themes/mytheme COPY _12_mytheme_theme.py /etc/openstack-dashboard/local_settings.d/_12_mytheme_theme.py RUN sudo chown apache:apache /etc/openstack-dashboard/local_settings.d/_12_mytheme_theme.py Save this file in your horizon-themes directory as Dockerfile . Use the Dockerfile to generate the new image: USD sudo podman build .
-t "172.24.10.10:8787/rhosp-rhel8/openstack-horizon:0-5" --log-level debug The -t option names and tags the resulting image. It uses the following syntax: LOCATION This is usually the location of the container registry that the overcloud eventually uses to pull images. In this instance, you push this image to the container registry of the undercloud, so set this to the undercloud IP and port. NAME For consistency, this is usually the same name as the original container image followed by the name of your theme. In this instance, it is rhosp-rhel8/openstack-horizon-mytheme . TAG The tag for the image. Red Hat uses the version and release labels as a basis for this tag. If you generate a new version of this image, increment the release , for example, 0-2 . Push the image to the container registry of the undercloud: USD sudo openstack tripleo container image push --local 172.24.10.10:8787/rhosp-rhel8/openstack-horizon:0-5 Verify that the image has uploaded to the local registry: [stack@director horizon-themes]USD curl http://172.24.10.10:8787/v2/_catalog | jq .repositories[] | grep -i hori "rhosp-rhel8/openstack-horizon" [stack@director horizon-themes]USD [stack@director ~]USD sudo openstack tripleo container image list | grep hor | docker://director.ctlplane.localdomain:8787/rhosp-rhel8/openstack-horizon:16.0-84 | docker://director.ctlplane.localdomain:8787/rhosp-rhel8/openstack-horizon:0-5 <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<,Uploaded [stack@director ~]USD Important If you update or upgrade Red Hat OpenStack Platform, you must reapply the theme to the new horizon image and push a new version of the modified image to the undercloud. 2.6. Using the modified container image in the overcloud To use the container image that you modified with your overcloud deployment, edit the environment file that contains the list of container image locations. This environment file is usually named overcloud-images.yaml . This procedure uses mytheme as an example name. Procedure Edit the DockerHorizonConfigImage and DockerHorizonImage parameters to point to your modified container image: Save this new version of the overcloud-images.yaml file. 2.7. Editing puppet parameters Director provides a set of dashboard parameters that you can modify with environment files. Procedure Use the ExtraConfig parameter to set Puppet hieradata. For example, the default help URL points to https://access.redhat.com/documentation/en/red-hat-openstack-platform . To modify this URL, use the following environment file content and replace the URL: Additional resources Dashboard parameters 2.8. Deploying an overcloud with a customized dashboard Procedure To deploy the overcloud with your dashboard customizations, include the following environment files in the openstack overcloud deploy command: The environment file with your modified container image locations. The environment file with additional dashboard modifications. Any other environment files that are relevant to your overcloud configuration. | [
"sudo podman pull registry.redhat.io/rhosp-rhel8/openstack-horizon:16.2",
"mkdir ~/horizon-themes cd ~/horizon-themes",
"sudo podman run --rm -d --name horizon-temp registry.redhat.io/rhosp-rhel8/openstack-horizon:16.2 /usr/bin/sleep infinity",
"sudo podman cp horizon-temp:/usr/share/openstack-dashboard/openstack_dashboard/themes/rcue .",
"sudo podman kill horizon-temp",
"cp -r rcue mytheme",
"AVAILABLE_THEMES = [('mytheme', 'My Custom Theme', 'themes/mytheme')]",
"SITE_BRANDING = \"Example, Inc. Cloud\"",
"SITE_BRANDING_LINK = \"http://example.com\"",
"FROM registry.redhat.io/rhosp-rhel8/openstack-horizon MAINTAINER Acme LABEL name=\"rhosp-rhel8/openstack-horizon-mytheme\" vendor=\"Acme\" version=\"0\" release=\"1\" COPY mytheme /usr/share/openstack-dashboard/openstack_dashboard/themes/mytheme COPY _12_mytheme_theme.py /etc/openstack-dashboard/local_settings.d/_12_mytheme_theme.py RUN sudo chown apache:apache /etc/openstack-dashboard/local_settings.d/_12_mytheme_theme.py",
"sudo podman build . -t \"172.24.10.10:8787/rhosp-rhel8/openstack-horizon:0-5\" --log-level debug",
"[LOCATION]/[NAME]:[TAG]",
"sudo openstack tripleo container image push --local 172.24.10.10:8787/rhosp-rhel8/openstack-horizon:0-5",
"[stack@director horizon-themes]USD curl http://172.24.10.10:8787/v2/_catalog | jq .repositories[] | grep -i hori \"rhosp-rhel8/openstack-horizon\" [stack@director horizon-themes]USD [stack@director ~]USD sudo openstack tripleo container image list | grep hor | docker://director.ctlplane.localdomain:8787/rhosp-rhel8/openstack-horizon:16.0-84 | docker://director.ctlplane.localdomain:8787/rhosp-rhel8/openstack-horizon:0-5 <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<,Uploaded [stack@director ~]USD",
"parameter_defaults: ContainerHorizonConfigImage: 192.168.24.1:8787/rhosp-rhel8/openstack-horizon-mytheme:0-1 ContainerHorizonImage: 192.168.24.1:8787/rhosp-rhel8/openstack-horizon-mytheme:0-1",
"parameter_defaults: ExtraConfig: horizon::help_url: \"http://openstack.example.com\"",
"openstack overcloud deploy --templates -e /home/stack/templates/overcloud-images.yaml -e /home/stack/templates/help_url.yaml [OTHER OPTIONS]"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/introduction_to_the_openstack_dashboard/customizing-the-dashboard_osp |
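The AVAILABLE_THEMES, SITE_BRANDING, and SITE_BRANDING_LINK settings shown in the listing above all belong in the same _12_mytheme_theme.py file. The following is a short sketch that writes a combined file from the shell; the branding string and link are example values only, not required settings.

# Hypothetical helper: create a combined _12_mytheme_theme.py in the horizon-themes directory.
# The theme name, branding string, and link are example values; adjust them for your deployment.
cat > _12_mytheme_theme.py <<'EOF'
AVAILABLE_THEMES = [('mytheme', 'My Custom Theme', 'themes/mytheme')]
SITE_BRANDING = "Example, Inc. Cloud"
SITE_BRANDING_LINK = "http://example.com"
EOF

The Dockerfile in the listing above then copies this single file into /etc/openstack-dashboard/local_settings.d/ in the modified image.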
Chapter 58. SignatureIntegrationService | Chapter 58. SignatureIntegrationService 58.1. ListSignatureIntegrations GET /v1/signatureintegrations 58.1.1. Description 58.1.2. Parameters 58.1.3. Return Type V1ListSignatureIntegrationsResponse 58.1.4. Content Type application/json 58.1.5. Responses Table 58.1. HTTP Response Codes Code Message Datatype 200 A successful response. V1ListSignatureIntegrationsResponse 0 An unexpected error response. GooglerpcStatus 58.1.6. Samples 58.1.7. Common object reference 58.1.7.1. CosignPublicKeyVerificationPublicKey Field Name Required Nullable Type Description Format name String publicKeyPemEnc String 58.1.7.2. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 58.1.7.3. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 58.1.7.3.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 58.1.7.4. StorageCosignCertificateVerification Holds all verification data for verifying certificates attached to cosign signatures. 
If only the certificate is given, the Fulcio trusted root chain will be assumed and verified against. If only the chain is given, this will be used over the Fulcio trusted root chain for verification. If no certificate or chain is given, the Fulcio trusted root chain will be assumed and verified against. Field Name Required Nullable Type Description Format certificatePemEnc String PEM encoded certificate to use for verification. certificateChainPemEnc String PEM encoded certificate chain to use for verification. certificateOidcIssuer String Certificate OIDC issuer to verify against. This supports regular expressions following the RE2 syntax: https://github.com/google/re2/wiki/Syntax . In case the certificate does not specify an OIDC issuer, you may use '.*' as the OIDC issuer. However, it is recommended to use Fulcio compatible certificates according to the specification: https://github.com/sigstore/fulcio/blob/main/docs/certificate-specification.md . certificateIdentity String Certificate identity to verify against. This supports regular expressions following the RE2 syntax: https://github.com/google/re2/wiki/Syntax . In case the certificate does not specify an identity, you may use '.*' as the identity. However, it is recommended to use Fulcio compatible certificates according to the specification: https://github.com/sigstore/fulcio/blob/main/docs/certificate-specification.md . 58.1.7.5. StorageCosignPublicKeyVerification Field Name Required Nullable Type Description Format publicKeys List of CosignPublicKeyVerificationPublicKey 58.1.7.6. StorageSignatureIntegration Field Name Required Nullable Type Description Format id String name String cosign StorageCosignPublicKeyVerification cosignCertificates List of StorageCosignCertificateVerification 58.1.7.7. V1ListSignatureIntegrationsResponse Field Name Required Nullable Type Description Format integrations List of StorageSignatureIntegration 58.2. DeleteSignatureIntegration DELETE /v1/signatureintegrations/{id} 58.2.1. Description 58.2.2. Parameters 58.2.2.1. Path Parameters Name Description Required Default Pattern id X null 58.2.3. Return Type Object 58.2.4. Content Type application/json 58.2.5. Responses Table 58.2. HTTP Response Codes Code Message Datatype 200 A successful response. Object 0 An unexpected error response. GooglerpcStatus 58.2.6. Samples 58.2.7. Common object reference 58.2.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 58.2.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 58.2.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. 
Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 58.3. GetSignatureIntegration GET /v1/signatureintegrations/{id} 58.3.1. Description 58.3.2. Parameters 58.3.2.1. Path Parameters Name Description Required Default Pattern id X null 58.3.3. Return Type StorageSignatureIntegration 58.3.4. Content Type application/json 58.3.5. Responses Table 58.3. HTTP Response Codes Code Message Datatype 200 A successful response. StorageSignatureIntegration 0 An unexpected error response. GooglerpcStatus 58.3.6. Samples 58.3.7. Common object reference 58.3.7.1. CosignPublicKeyVerificationPublicKey Field Name Required Nullable Type Description Format name String publicKeyPemEnc String 58.3.7.2. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 58.3.7.3. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 58.3.7.3.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. 
Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 58.3.7.4. StorageCosignCertificateVerification Holds all verification data for verifying certificates attached to cosign signatures. If only the certificate is given, the Fulcio trusted root chain will be assumed and verified against. If only the chain is given, this will be used over the Fulcio trusted root chain for verification. If no certificate or chain is given, the Fulcio trusted root chain will be assumed and verified against. Field Name Required Nullable Type Description Format certificatePemEnc String PEM encoded certificate to use for verification. certificateChainPemEnc String PEM encoded certificate chain to use for verification. certificateOidcIssuer String Certificate OIDC issuer to verify against. This supports regular expressions following the RE2 syntax: https://github.com/google/re2/wiki/Syntax . In case the certificate does not specify an OIDC issuer, you may use '.*' as the OIDC issuer. However, it is recommended to use Fulcio compatible certificates according to the specification: https://github.com/sigstore/fulcio/blob/main/docs/certificate-specification.md . certificateIdentity String Certificate identity to verify against. This supports regular expressions following the RE2 syntax: https://github.com/google/re2/wiki/Syntax . In case the certificate does not specify an identity, you may use '.*' as the identity. However, it is recommended to use Fulcio compatible certificates according to the specification: https://github.com/sigstore/fulcio/blob/main/docs/certificate-specification.md . 58.3.7.5. StorageCosignPublicKeyVerification Field Name Required Nullable Type Description Format publicKeys List of CosignPublicKeyVerificationPublicKey 58.3.7.6. 
StorageSignatureIntegration Field Name Required Nullable Type Description Format id String name String cosign StorageCosignPublicKeyVerification cosignCertificates List of StorageCosignCertificateVerification 58.4. PutSignatureIntegration PUT /v1/signatureintegrations/{id} 58.4.1. Description 58.4.2. Parameters 58.4.2.1. Path Parameters Name Description Required Default Pattern id X null 58.4.2.2. Body Parameter Name Description Required Default Pattern body SignatureIntegrationServicePutSignatureIntegrationBody X 58.4.3. Return Type Object 58.4.4. Content Type application/json 58.4.5. Responses Table 58.4. HTTP Response Codes Code Message Datatype 200 A successful response. Object 0 An unexpected error response. GooglerpcStatus 58.4.6. Samples 58.4.7. Common object reference 58.4.7.1. CosignPublicKeyVerificationPublicKey Field Name Required Nullable Type Description Format name String publicKeyPemEnc String 58.4.7.2. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 58.4.7.3. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 58.4.7.3.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. 
Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 58.4.7.4. SignatureIntegrationServicePutSignatureIntegrationBody Field Name Required Nullable Type Description Format name String cosign StorageCosignPublicKeyVerification cosignCertificates List of StorageCosignCertificateVerification 58.4.7.5. StorageCosignCertificateVerification Holds all verification data for verifying certificates attached to cosign signatures. If only the certificate is given, the Fulcio trusted root chain will be assumed and verified against. If only the chain is given, this will be used over the Fulcio trusted root chain for verification. If no certificate or chain is given, the Fulcio trusted root chain will be assumed and verified against. Field Name Required Nullable Type Description Format certificatePemEnc String PEM encoded certificate to use for verification. certificateChainPemEnc String PEM encoded certificate chain to use for verification. certificateOidcIssuer String Certificate OIDC issuer to verify against. This supports regular expressions following the RE2 syntax: https://github.com/google/re2/wiki/Syntax . In case the certificate does not specify an OIDC issuer, you may use '.*' as the OIDC issuer. However, it is recommended to use Fulcio compatible certificates according to the specification: https://github.com/sigstore/fulcio/blob/main/docs/certificate-specification.md . certificateIdentity String Certificate identity to verify against. This supports regular expressions following the RE2 syntax: https://github.com/google/re2/wiki/Syntax . In case the certificate does not specify an identity, you may use '.*' as the identity. However, it is recommended to use Fulcio compatible certificates according to the specification: https://github.com/sigstore/fulcio/blob/main/docs/certificate-specification.md . 58.4.7.6. StorageCosignPublicKeyVerification Field Name Required Nullable Type Description Format publicKeys List of CosignPublicKeyVerificationPublicKey 58.5. PostSignatureIntegration POST /v1/signatureintegrations Integration id should not be set. Returns signature integration with id filled. 58.5.1. Description 58.5.2. Parameters 58.5.2.1. Body Parameter Name Description Required Default Pattern body StorageSignatureIntegration X 58.5.3. Return Type StorageSignatureIntegration 58.5.4. Content Type application/json 58.5.5. Responses Table 58.5. HTTP Response Codes Code Message Datatype 200 A successful response. StorageSignatureIntegration 0 An unexpected error response. GooglerpcStatus 58.5.6. Samples 58.5.7. Common object reference 58.5.7.1. CosignPublicKeyVerificationPublicKey Field Name Required Nullable Type Description Format name String publicKeyPemEnc String 58.5.7.2. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 58.5.7.3. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. 
The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 58.5.7.3.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 58.5.7.4. StorageCosignCertificateVerification Holds all verification data for verifying certificates attached to cosign signatures. If only the certificate is given, the Fulcio trusted root chain will be assumed and verified against. If only the chain is given, this will be used over the Fulcio trusted root chain for verification. If no certificate or chain is given, the Fulcio trusted root chain will be assumed and verified against. Field Name Required Nullable Type Description Format certificatePemEnc String PEM encoded certificate to use for verification. certificateChainPemEnc String PEM encoded certificate chain to use for verification. certificateOidcIssuer String Certificate OIDC issuer to verify against. This supports regular expressions following the RE2 syntax: https://github.com/google/re2/wiki/Syntax . In case the certificate does not specify an OIDC issuer, you may use '.*' as the OIDC issuer. However, it is recommended to use Fulcio compatible certificates according to the specification: https://github.com/sigstore/fulcio/blob/main/docs/certificate-specification.md . certificateIdentity String Certificate identity to verify against. This supports regular expressions following the RE2 syntax: https://github.com/google/re2/wiki/Syntax . 
In case the certificate does not specify an identity, you may use '.*' as the identity. However, it is recommended to use Fulcio compatible certificates according to the specification: https://github.com/sigstore/fulcio/blob/main/docs/certificate-specification.md . 58.5.7.5. StorageCosignPublicKeyVerification Field Name Required Nullable Type Description Format publicKeys List of CosignPublicKeyVerificationPublicKey 58.5.7.6. StorageSignatureIntegration Field Name Required Nullable Type Description Format id String name String cosign StorageCosignPublicKeyVerification cosignCertificates List of StorageCosignCertificateVerification | [
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }"
] | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/api_reference/signatureintegrationservice |
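A minimal Python sketch of the type-URL convention described in the entry above: the pack methods default to 'type.googleapis.com/full.type.name', and unpacking only considers the segment after the last '/'. The helper names below are illustrative, not part of any product API.

def default_type_url(full_type_name):
    # Default used by the protobuf pack methods, per the description above.
    return "type.googleapis.com/" + full_type_name

def type_name_from_url(type_url):
    # Unpack methods only use the fully qualified type name after the last '/'.
    return type_url.rsplit("/", 1)[-1]

# "foo.bar.com/x/y.z" yields type name "y.z", as stated in the entry above.
assert type_name_from_url("foo.bar.com/x/y.z") == "y.z"
assert default_type_url("google.protobuf.Duration") == "type.googleapis.com/google.protobuf.Duration"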
Chapter 10. Customizing the Quick access card | Chapter 10. Customizing the Quick access card To access the Home page in Red Hat Developer Hub, the base URL must include the /developer-hub proxy. You can configure the Home page by passing the data into the app-config.yaml file as a proxy. You can provide data to the Home page from the following sources: JSON files hosted on GitHub or GitLab. A dedicated service that provides the Home page data in JSON format using an API. 10.1. Using hosted JSON files to provide data to the Quick access card Prerequisites You have installed Red Hat Developer Hub by using either the Operator or Helm chart. See Installing Red Hat Developer Hub on OpenShift Container Platform . Procedure To access the data from the JSON files, add the following code to the Developer Hub app-config.yaml configuration file: Add the following code to the app-config.yaml file: proxy: endpoints: # Other Proxies # customize developer hub instance '/developer-hub': target: <DOMAIN_URL> # i.e https://raw.githubusercontent.com/ pathRewrite: '^/api/proxy/developer-hub': <path to json file> # i.e /redhat-developer/rhdh/main/packages/app/public/homepage/data.json changeOrigin: true secure: true # Change to "false" in case of using self hosted cluster with a self-signed certificate headers: <HEADER_KEY>: <HEADER_VALUE> # optional and can be passed as needed i.e Authorization can be passed for private GitHub repo and PRIVATE-TOKEN can be passed for private GitLab repo 10.2. Using a dedicated service to provide data to the Quick access card When using a dedicated service, you can do the following: Use the same service to provide the data to all configurable Developer Hub pages or use a different service for each page. Use the red-hat-developer-hub-customization-provider as an example service, which provides data for both the Home and Tech Radar pages. The red-hat-developer-hub-customization-provider service provides the same data as default Developer Hub data. You can fork the red-hat-developer-hub-customization-provider service repository from GitHub and modify it with your own data, if required. Deploy the red-hat-developer-hub-customization-provider service and the Developer Hub Helm chart on the same cluster. Prerequisites You have installed the Red Hat Developer Hub using Helm Chart. For more information, see Installing Red Hat Developer Hub on OpenShift Container Platform with the Helm chart . Procedure To use a separate service to provide the Home page data, complete the following steps: From the Developer perspective in the Red Hat OpenShift Container Platform web console, click +Add > Import from Git . Enter the URL of your Git repository into the Git Repo URL field. To use the red-hat-developer-hub-customization-provider service, add the URL for the red-hat-developer-hub-customization-provider repository or your fork of the repository containing your customizations. On the General tab, enter red-hat-developer-hub-customization-provider in the Name field and click Create . On the Advanced Options tab, copy the value from the Target Port . Note The Target Port automatically generates a Kubernetes or OpenShift Container Platform service to communicate with. 
Add the following code to the app-config-rhdh.yaml file: proxy: endpoints: # Other Proxies # customize developer hub instance '/developer-hub': target: ${HOMEPAGE_DATA_URL} changeOrigin: true # Change to "false" in case of using self-hosted cluster with a self-signed certificate secure: true where HOMEPAGE_DATA_URL is defined as http://<SERVICE_NAME>:8080 , for example, http://rhdh-customization-provider:8080 . Note The red-hat-developer-hub-customization-provider service uses port 8080 by default. If you are using a custom port, you can specify it with the 'PORT' environment variable in the app-config-rhdh.yaml file. Replace the HOMEPAGE_DATA_URL by adding the URL to rhdh-secrets or by directly replacing it in your custom ConfigMap. Delete the Developer Hub pod to ensure that the new configurations are loaded correctly. Verification To view the service, navigate to the Administrator perspective in the OpenShift Container Platform web console and click Networking > Service . Note You can also view the Service Resources in the Topology view. Ensure that the provided API URL for the Home page returns the data in JSON format as shown in the following example: [ { "title": "Dropdown 1", "isExpanded": false, "links": [ { "iconUrl": "https://imagehost.com/image.png", "label": "Dropdown 1 Item 1", "url": "https://example.com/" }, { "iconUrl": "https://imagehost2.org/icon.png", "label": "Dropdown 1 Item 2", "url": "" } ] }, { "title": "Dropdown 2", "isExpanded": true, "links": [ { "iconUrl": "http://imagehost3.edu/img.jpg", "label": "Dropdown 2 Item 1", "url": "http://example.com" } ] } ] Note If the request call fails or is not configured, the Developer Hub instance falls back to the default local data. If the images or icons do not load, then allowlist them by adding your image or icon host URLs to the content security policy's (csp) img-src in your custom ConfigMap as follows: kind: ConfigMap apiVersion: v1 metadata: name: app-config-rhdh data: app-config-rhdh.yaml: | app: title: Red Hat Developer Hub backend: csp: connect-src: - "'self'" - 'http:' - 'https:' img-src: - "'self'" - 'data:' - <image host url 1> - <image host url 2> - <image host url 3> # Other Configurations
"proxy: endpoints: # Other Proxies # customize developer hub instance '/developer-hub': target: <DOMAIN_URL> # i.e https://raw.githubusercontent.com/ pathRewrite: '^/api/proxy/developer-hub': <path to json file> # i.e /redhat-developer/rhdh/main/packages/app/public/homepage/data.json changeOrigin: true secure: true # Change to \"false\" in case of using self hosted cluster with a self-signed certificate headers: <HEADER_KEY>: <HEADER_VALUE> # optional and can be passed as needed i.e Authorization can be passed for private GitHub repo and PRIVATE-TOKEN can be passed for private GitLab repo",
"proxy: endpoints: # Other Proxies # customize developer hub instance '/developer-hub': target: USD{HOMEPAGE_DATA_URL} changeOrigin: true # Change to \"false\" in case of using self-hosted cluster with a self-signed certificate secure: true",
"[ { \"title\": \"Dropdown 1\", \"isExpanded\": false, \"links\": [ { \"iconUrl\": \"https://imagehost.com/image.png\", \"label\": \"Dropdown 1 Item 1\", \"url\": \"https://example.com/\" }, { \"iconUrl\": \"https://imagehost2.org/icon.png\", \"label\": \"Dropdown 1 Item 2\", \"url\": \"\" } ] }, { \"title\": \"Dropdown 2\", \"isExpanded\": true, \"links\": [ { \"iconUrl\": \"http://imagehost3.edu/img.jpg\", \"label\": \"Dropdown 2 Item 1\", \"url\": \"http://example.com\" } ] } ]",
"kind: ConfigMap apiVersion: v1 metadata: name: app-config-rhdh data: app-config-rhdh.yaml: | app: title: Red Hat Developer Hub backend: csp: connect-src: - \"'self'\" - 'http:' - 'https:' img-src: - \"'self'\" - 'data:' - <image host url 1> - <image host url 2> - <image host url 3> # Other Configurations"
] | https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.4/html/customizing/customizing-the-quick-access-card |
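The verification step above only states that the endpoint must return JSON in the documented shape; below is a hedged Python sketch of such a check. The endpoint URL follows the http://<SERVICE_NAME>:8080 convention from this entry, and the function name plus the exact fields checked (title, links with label and url) are assumptions for illustration, not part of the product.

import json
import urllib.request

HOMEPAGE_DATA_URL = "http://rhdh-customization-provider:8080"  # default port per the entry above

def check_homepage_data(url):
    # Fetch the Quick access data and assert it matches the documented JSON shape.
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    if not isinstance(data, list):
        raise ValueError("top level must be a JSON array of sections")
    for section in data:
        if "title" not in section or "links" not in section:
            raise ValueError("each section needs 'title' and 'links'")
        for link in section["links"]:
            if "label" not in link or "url" not in link:
                raise ValueError("each link needs 'label' and 'url'")
    return data

if __name__ == "__main__":
    check_homepage_data(HOMEPAGE_DATA_URL)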
Chapter 4. Virtualization Restrictions | Chapter 4. Virtualization Restrictions This chapter covers additional support and product restrictions of the virtualization packages in Red Hat Enterprise Linux 6. 4.1. KVM Restrictions The following restrictions apply to the KVM hypervisor: Maximum vCPUs per guest The maximum number of virtual CPUs supported per guest varies depending on which minor version of Red Hat Enterprise Linux 6 you are using as a host machine. The release of 6.0 introduced a maximum of 64, while 6.3 introduced a maximum of 160. As of version 6.7, a maximum of 240 virtual CPUs per guest is supported. Constant TSC bit Systems without a Constant Time Stamp Counter require additional configuration. Refer to Chapter 14, KVM Guest Timing Management for details on determining whether you have a Constant Time Stamp Counter and configuration steps for fixing any related issues. Virtualized SCSI devices SCSI emulation is not supported with KVM in Red Hat Enterprise Linux. Virtualized IDE devices KVM is limited to a maximum of four virtualized (emulated) IDE devices per guest virtual machine. Migration restrictions Device assignment refers to physical devices that have been exposed to a virtual machine, for the exclusive use of that virtual machine. Because device assignment uses hardware on the specific host where the virtual machine runs, migration and save/restore are not supported when device assignment is in use. If the guest operating system supports hot plugging, assigned devices can be removed prior to the migration or save/restore operation to enable this feature. Live migration is only possible between hosts with the same CPU type (that is, Intel to Intel or AMD to AMD only). For live migration, both hosts must have the same value set for the No eXecution (NX) bit, either on or off . For migration to work, cache=none must be specified for all block devices opened in write mode. Warning Failing to include the cache=none option can result in disk corruption. Storage restrictions There are risks associated with giving guest virtual machines write access to entire disks or block devices (such as /dev/sdb ). If a guest virtual machine has access to an entire block device, it can share any volume label or partition table with the host machine. If bugs exist in the host system's partition recognition code, this can create a security risk. Avoid this risk by configuring the host machine to ignore devices assigned to a guest virtual machine. Warning Failing to adhere to storage restrictions can result in risks to security. Core dumping restrictions Core dumping uses the same infrastructure as migration and requires more device knowledge and control than device assignment can provide. Therefore, core dumping is not supported when device assignment is in use. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_host_configuration_and_guest_installation_guide/chap-Virtualization_Host_Configuration_and_Guest_Installation_Guide-Virtualization_Restrictions
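A hedged pre-migration sketch in Python for the cache=none restriction above: it scans a libvirt domain XML dump (for example, the output of virsh dumpxml) and flags disks whose driver element does not set cache='none'. Treating any disk without a readonly child element as opened in write mode is an assumption of this sketch, not a statement from the guide.

import sys
import xml.etree.ElementTree as ET

def disks_missing_cache_none(domain_xml):
    # Return (target device, cache value) pairs for writable disks that
    # do not declare cache='none' on their <driver> element.
    root = ET.fromstring(domain_xml)
    offenders = []
    for disk in root.findall("./devices/disk"):
        if disk.find("readonly") is not None:
            continue  # read-only devices are outside the write-mode restriction
        driver = disk.find("driver")
        cache = driver.get("cache") if driver is not None else None
        if cache != "none":
            target = disk.find("target")
            dev = target.get("dev") if target is not None else "<unknown>"
            offenders.append((dev, cache))
    return offenders

if __name__ == "__main__":
    for dev, cache in disks_missing_cache_none(sys.stdin.read()):
        print("disk %s: cache=%r (expected 'none' before live migration)" % (dev, cache))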
Chapter 5. Installing the Virtualization Packages | Chapter 5. Installing the Virtualization Packages Before you can use virtualization, the virtualization packages must be installed on your computer. Virtualization packages can be installed either during the host installation sequence or after host installation using Subscription Manager. The KVM hypervisor uses the default Red Hat Enterprise Linux kernel with the kvm kernel module. 5.1. Configuring a Virtualization Host Installation This section covers installing virtualization tools and virtualization packages as part of a fresh Red Hat Enterprise Linux installation. Note The Red Hat Enterprise Linux Installation Guide covers installing Red Hat Enterprise Linux in detail. Procedure 5.1. Installing the virtualization package group Launch the Red Hat Enterprise Linux 6 installation program Start an interactive Red Hat Enterprise Linux 6 installation from the Red Hat Enterprise Linux Installation CD-ROM, DVD or PXE. Continue installation up to package selection Complete the other steps up to the package selection step. Figure 5.1. The Red Hat Enterprise Linux package selection screen Select the Virtualization Host server role to install a platform for guest virtual machines. Alternatively, ensure that the Customize Now radio button is selected before proceeding, to specify individual packages. Select the Virtualization package group This selects the qemu-kvm emulator, virt-manager , libvirt and virt-viewer for installation. Figure 5.2. The Red Hat Enterprise Linux package selection screen Note If you wish to create virtual machines in a graphical user interface ( virt-manager ) later, you should also select the General Purpose Desktop package group. Customize the packages (if required) Customize the Virtualization group if you require other virtualization packages. Figure 5.3. The Red Hat Enterprise Linux package selection screen Click on the Close button, then the button to continue the installation. When the installation is complete, reboot the system. Important You require a valid virtualization entitlement to receive updates for the virtualization packages. Installing KVM Packages with Kickstart Files Kickstart files allow for large, automated installations without a user manually installing each individual host system. This section describes how to create and use a Kickstart file to install Red Hat Enterprise Linux with the Virtualization packages. In the %packages section of your Kickstart file, append the following package groups: For more information about Kickstart files, refer to the Red Hat Enterprise Linux Installation Guide . | [
"@virtualization @virtualization-client @virtualization-platform @virtualization-tools"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_host_configuration_and_guest_installation_guide/chap-virtualization_host_configuration_and_guest_installation_guide-host_installation |
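As a hedged follow-up to the installation steps above, the Python sketch below checks that the packages named in this chapter (qemu-kvm, libvirt, virt-manager, virt-viewer) are installed and that a KVM kernel module is loaded. The module names kvm, kvm_intel, and kvm_amd reflect common x86 hosts and are an assumption of this sketch.

import subprocess

PACKAGES = ["qemu-kvm", "libvirt", "virt-manager", "virt-viewer"]  # named in this chapter

def missing_packages():
    # rpm -q exits non-zero when a package is not installed.
    missing = []
    for pkg in PACKAGES:
        rc = subprocess.call(["rpm", "-q", pkg],
                             stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        if rc != 0:
            missing.append(pkg)
    return missing

def kvm_module_loaded():
    # /proc/modules lists loaded kernel modules, one per line, name first.
    with open("/proc/modules") as fh:
        loaded = {line.split()[0] for line in fh}
    return bool(loaded & {"kvm", "kvm_intel", "kvm_amd"})

if __name__ == "__main__":
    print("missing packages:", missing_packages() or "none")
    print("KVM module loaded:", kvm_module_loaded())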
Chapter 61. System and Subscription Management | Chapter 61. System and Subscription Management Red Hat Satellite 5.8 availability of RHEL 7.6 EUS, AUS, TUS, and E4S streams delayed Red Hat Satellite 5 content ISOs are made available on a monthly cadence. Based on this cadence, content ISOs are not available through Red Hat Satellite 5.8 for the following RHEL 7.6 streams at the time of the RHEL 7.6 general availability: Extended Update Support (EUS) Advanced Update Support (AUS) Telco Extended Update Support (TUS) Update Services for SAP Solutions (E4S) The expected delay is two to four weeks. Note that Red Hat Satellite 6 instances are unaffected. See https://access.redhat.com/solutions/3621151 for more details. (BZ#1635135) | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.6_release_notes/known_issues_system_and_subscription_management |
Chapter 5. Red Hat Virtualization 4.4 Batch Update 3 (ovirt-4.4.4) | Chapter 5. Red Hat Virtualization 4.4 Batch Update 3 (ovirt-4.4.4) 5.1. Red Hat Virtualization Manager 4.4 for RHEL 8 x86_64 (RPMs) The following table outlines the packages included in the Red Hat Virtualization Manager 4.4.4 image. Table 5.1. Red Hat Virtualization Manager 4.4 for RHEL 8 x86_64 (RPMs) Name Version GConf2 3.2.6-22.el8.x86_64 NetworkManager 1.26.0-13.el8_3.x86_64 NetworkManager-libnm 1.26.0-13.el8_3.x86_64 NetworkManager-team 1.26.0-13.el8_3.x86_64 NetworkManager-tui 1.26.0-13.el8_3.x86_64 PackageKit 1.1.12-6.el8.x86_64 PackageKit-glib 1.1.12-6.el8.x86_64 abattis-cantarell-fonts 0.0.25-4.el8.noarch acl 2.2.53-1.el8.x86_64 adobe-mappings-cmap 20171205-3.el8.noarch adobe-mappings-cmap-deprecated 20171205-3.el8.noarch adobe-mappings-pdf 20180407-1.el8.noarch aide 0.16-14.el8.x86_64 alsa-lib 1.2.3.2-1.el8.x86_64 ansible 2.9.15-1.el8ae.noarch ansible-runner-service 1.0.6-3.el8ev.noarch aopalliance 1.0-17.module+el8+2598+06babf2e.noarch apache-commons-codec 1.11-3.module+el8+2598+06babf2e.noarch apache-commons-collections 3.2.2-10.module+el8.1.0+3366+6dfb954c.noarch apache-commons-compress 1.18-1.el8ev.noarch apache-commons-configuration 1.10-1.el8ev.noarch apache-commons-io 2.6-3.module+el8+2598+06babf2e.noarch apache-commons-jxpath 1.3-29.el8ev.noarch apache-commons-lang 2.6-21.module+el8.1.0+3366+6dfb954c.noarch apache-commons-logging 1.2-13.module+el8+2598+06babf2e.noarch apache-sshd 2.5.1-1.el8ev.noarch apr 1.6.3-11.el8.x86_64 apr-util 1.6.1-6.el8.x86_64 asciidoc 8.6.10-0.5.20180627gitf7c2274.el8.noarch atk 2.28.1-1.el8.x86_64 audit 3.0-0.17.20191104git1c2f876.el8.x86_64 audit-libs 3.0-0.17.20191104git1c2f876.el8.x86_64 authselect 1.2.1-2.el8.x86_64 authselect-compat 1.2.1-2.el8.x86_64 authselect-libs 1.2.1-2.el8.x86_64 autogen-libopts 5.18.12-8.el8.x86_64 avahi-libs 0.7-19.el8.x86_64 basesystem 11-5.el8.noarch bash 4.4.19-12.el8.x86_64 bea-stax-api 1.2.0-16.module+el8.1.0+3366+6dfb954c.noarch bind-export-libs 9.11.20-5.el8_3.1.x86_64 bind-libs 9.11.20-5.el8_3.1.x86_64 bind-libs-lite 9.11.20-5.el8_3.1.x86_64 bind-license 9.11.20-5.el8_3.1.noarch bind-utils 9.11.20-5.el8_3.1.x86_64 binutils 2.30-79.el8.x86_64 biosdevname 0.7.3-2.el8.x86_64 boost-regex 1.66.0-10.el8.x86_64 brotli 1.0.6-2.el8.x86_64 bzip2 1.0.6-26.el8.x86_64 bzip2-libs 1.0.6-26.el8.x86_64 c-ares 1.13.0-5.el8.x86_64 ca-certificates 2020.2.41-80.0.el8_2.noarch cairo 1.15.12-3.el8.x86_64 cairo-gobject 1.15.12-3.el8.x86_64 checkpolicy 2.9-1.el8.x86_64 chkconfig 1.13-2.el8.x86_64 chrony 3.5-1.el8.x86_64 cjose 0.6.1-2.module+el8+2454+f890a43a.x86_64 cloud-init 19.4-11.el8_3.2.noarch cloud-utils-growpart 0.31-1.el8.noarch cockpit 224.2-1.el8.x86_64 cockpit-bridge 224.2-1.el8.x86_64 cockpit-dashboard 224.2-1.el8.noarch cockpit-packagekit 224.2-1.el8.noarch cockpit-system 224.2-1.el8.noarch cockpit-ws 224.2-1.el8.x86_64 collectd 5.11.0-2.el8ost.x86_64 collectd-disk 5.11.0-2.el8ost.x86_64 collectd-postgresql 5.11.0-2.el8ost.x86_64 collectd-write_http 5.11.0-2.el8ost.x86_64 collectd-write_syslog 5.11.0-2.el8ost.x86_64 copy-jdk-configs 3.7-4.el8.noarch coreutils 8.30-8.el8.x86_64 coreutils-common 8.30-8.el8.x86_64 cpio 2.12-8.el8.x86_64 cracklib 2.9.6-15.el8.x86_64 cracklib-dicts 2.9.6-15.el8.x86_64 cronie 1.5.2-4.el8.x86_64 cronie-anacron 1.5.2-4.el8.x86_64 crontabs 1.11-16.20150630git.el8.noarch crypto-policies 20200713-1.git51d1222.el8.noarch crypto-policies-scripts 20200713-1.git51d1222.el8.noarch cryptsetup-libs 2.3.3-2.el8.x86_64 ctags 
5.8-22.el8.x86_64 cups-libs 2.2.6-38.el8.x86_64 curl 7.61.1-14.el8_3.1.x86_64 cyrus-sasl-lib 2.1.27-5.el8.x86_64 dbus 1.12.8-12.el8_3.x86_64 dbus-common 1.12.8-12.el8_3.noarch dbus-daemon 1.12.8-12.el8_3.x86_64 dbus-glib 0.110-2.el8.x86_64 dbus-libs 1.12.8-12.el8_3.x86_64 dbus-tools 1.12.8-12.el8_3.x86_64 dejavu-fonts-common 2.35-6.el8.noarch dejavu-sans-mono-fonts 2.35-6.el8.noarch device-mapper 1.02.171-5.el8_3.2.x86_64 device-mapper-event 1.02.171-5.el8_3.2.x86_64 device-mapper-event-libs 1.02.171-5.el8_3.2.x86_64 device-mapper-libs 1.02.171-5.el8_3.2.x86_64 device-mapper-persistent-data 0.8.5-4.el8.x86_64 dhcp-client 4.3.6-41.el8.x86_64 dhcp-common 4.3.6-41.el8.noarch dhcp-libs 4.3.6-41.el8.x86_64 diffutils 3.6-6.el8.x86_64 dmidecode 3.2-6.el8.x86_64 dnf 4.2.23-4.el8.noarch dnf-data 4.2.23-4.el8.noarch dnf-plugin-subscription-manager 1.27.18-1.el8_3.x86_64 dnf-plugins-core 4.0.17-5.el8.noarch docbook-dtds 1.0-69.el8.noarch docbook-style-xsl 1.79.2-7.el8.noarch dracut 049-95.git20200804.el8_3.4.x86_64 dracut-config-generic 049-95.git20200804.el8_3.4.x86_64 dracut-config-rescue 049-95.git20200804.el8_3.4.x86_64 dracut-network 049-95.git20200804.el8_3.4.x86_64 dracut-squash 049-95.git20200804.el8_3.4.x86_64 dwz 0.12-9.el8.x86_64 e2fsprogs 1.45.6-1.el8.x86_64 e2fsprogs-libs 1.45.6-1.el8.x86_64 eap7-FastInfoset 1.2.13-10.redhat_1.1.el8eap.noarch eap7-activemq-artemis-cli 2.9.0-7.redhat_00017.1.el8eap.noarch eap7-activemq-artemis-commons 2.9.0-7.redhat_00017.1.el8eap.noarch eap7-activemq-artemis-core-client 2.9.0-7.redhat_00017.1.el8eap.noarch eap7-activemq-artemis-dto 2.9.0-7.redhat_00017.1.el8eap.noarch eap7-activemq-artemis-hornetq-protocol 2.9.0-7.redhat_00017.1.el8eap.noarch eap7-activemq-artemis-hqclient-protocol 2.9.0-7.redhat_00017.1.el8eap.noarch eap7-activemq-artemis-jdbc-store 2.9.0-7.redhat_00017.1.el8eap.noarch eap7-activemq-artemis-jms-client 2.9.0-7.redhat_00017.1.el8eap.noarch eap7-activemq-artemis-jms-server 2.9.0-7.redhat_00017.1.el8eap.noarch eap7-activemq-artemis-journal 2.9.0-7.redhat_00017.1.el8eap.noarch eap7-activemq-artemis-native 1.0.2-1.redhat_00001.1.el8eap.noarch eap7-activemq-artemis-ra 2.9.0-7.redhat_00017.1.el8eap.noarch eap7-activemq-artemis-selector 2.9.0-7.redhat_00017.1.el8eap.noarch eap7-activemq-artemis-server 2.9.0-7.redhat_00017.1.el8eap.noarch eap7-activemq-artemis-service-extensions 2.9.0-7.redhat_00017.1.el8eap.noarch eap7-activemq-artemis-tools 2.9.0-7.redhat_00017.1.el8eap.noarch eap7-aesh-extensions 1.8.0-1.redhat_00001.1.el8eap.noarch eap7-aesh-readline 2.0.0-1.redhat_00001.1.el8eap.noarch eap7-agroal-api 1.3.0-1.redhat_00001.1.el8eap.noarch eap7-agroal-narayana 1.3.0-1.redhat_00001.1.el8eap.noarch eap7-agroal-pool 1.3.0-1.redhat_00001.1.el8eap.noarch eap7-antlr 2.7.7-54.redhat_7.1.el8eap.noarch eap7-apache-commons-beanutils 1.9.4-1.redhat_00002.1.el8eap.noarch eap7-apache-commons-cli 1.3.1-3.redhat_2.1.el8eap.noarch eap7-apache-commons-codec 1.14.0-1.redhat_00001.1.el8eap.noarch eap7-apache-commons-collections 3.2.2-9.redhat_2.1.el8eap.noarch eap7-apache-commons-io 2.5.0-4.redhat_3.1.el8eap.noarch eap7-apache-commons-lang 3.10.0-1.redhat_00001.1.el8eap.noarch eap7-apache-commons-lang2 2.6.0-1.redhat_7.1.el8eap.noarch eap7-apache-cxf 3.3.7-1.redhat_00001.1.el8eap.noarch eap7-apache-cxf-rt 3.3.7-1.redhat_00001.1.el8eap.noarch eap7-apache-cxf-services 3.3.7-1.redhat_00001.1.el8eap.noarch eap7-apache-cxf-tools 3.3.7-1.redhat_00001.1.el8eap.noarch eap7-apache-mime4j 0.6.0-4.redhat_7.1.el8eap.noarch eap7-artemis-wildfly-integration 
1.0.2-4.redhat_1.1.el8eap.noarch eap7-atinject 1.0.0-4.redhat_00002.1.el8eap.noarch eap7-avro 1.7.6-7.redhat_2.1.el8eap.noarch eap7-azure-storage 6.1.0-1.redhat_1.1.el8eap.noarch eap7-bouncycastle-mail 1.65.0-1.redhat_00001.1.el8eap.noarch eap7-bouncycastle-pkix 1.65.0-1.redhat_00001.1.el8eap.noarch eap7-bouncycastle-prov 1.65.0-1.redhat_00001.1.el8eap.noarch eap7-byte-buddy 1.9.11-1.redhat_00002.1.el8eap.noarch eap7-caffeine 2.6.2-3.redhat_1.1.el8eap.noarch eap7-cal10n 0.8.1-6.redhat_1.1.el8eap.noarch eap7-codehaus-jackson-core-asl 1.9.13-10.redhat_00007.1.el8eap.noarch eap7-codehaus-jackson-jaxrs 1.9.13-10.redhat_00007.1.el8eap.noarch eap7-codehaus-jackson-mapper-asl 1.9.13-10.redhat_00007.1.el8eap.noarch eap7-codehaus-jackson-xc 1.9.13-10.redhat_00007.1.el8eap.noarch eap7-codemodel 2.3.3-4.b02_redhat_00001.1.el8eap.noarch eap7-commons-logging-jboss-logging 1.0.0-1.Final_redhat_1.1.el8eap.noarch eap7-cryptacular 1.2.4-1.redhat_00001.1.el8eap.noarch eap7-cxf-xjc-boolean 3.3.0-1.redhat_00001.1.el8eap.noarch eap7-cxf-xjc-bug986 3.3.0-1.redhat_00001.1.el8eap.noarch eap7-cxf-xjc-dv 3.3.0-1.redhat_00001.1.el8eap.noarch eap7-cxf-xjc-runtime 3.3.0-1.redhat_00001.1.el8eap.noarch eap7-cxf-xjc-ts 3.3.0-1.redhat_00001.1.el8eap.noarch eap7-dom4j 2.1.3-1.redhat_00001.1.el8eap.noarch eap7-ecj 4.6.1-3.redhat_1.1.el8eap.noarch eap7-eclipse-jgit 5.0.2.201807311906-2.r_redhat_00001.1.el8eap.noarch eap7-fge-btf 1.2.0-1.redhat_00007.1.el8eap.noarch eap7-fge-msg-simple 1.1.0-1.redhat_00007.1.el8eap.noarch eap7-glassfish-concurrent 1.0.0-4.redhat_1.1.el8eap.noarch eap7-glassfish-jaf 1.2.1-1.redhat_00002.1.el8eap.noarch eap7-glassfish-javamail 1.6.4-2.redhat_00001.1.el8eap.noarch eap7-glassfish-jsf 2.3.9-12.SP13_redhat_00001.1.el8eap.noarch eap7-glassfish-json 1.1.6-2.redhat_00001.1.el8eap.noarch eap7-gnu-getopt 1.0.13-6.redhat_5.1.el8eap.noarch eap7-gson 2.8.2-1.redhat_5.1.el8eap.noarch eap7-guava 25.0.0-2.redhat_1.1.el8eap.noarch eap7-h2database 1.4.193-6.redhat_2.1.el8eap.noarch eap7-hal-console 3.2.12-1.Final_redhat_00001.1.el8eap.noarch eap7-hibernate-beanvalidation-api 2.0.2-1.redhat_00001.1.el8eap.noarch eap7-hibernate-commons-annotations 5.0.5-1.Final_redhat_00002.1.el8eap.noarch eap7-hibernate-core 5.3.20-1.Final_redhat_00001.1.el8eap.noarch eap7-hibernate-entitymanager 5.3.20-1.Final_redhat_00001.1.el8eap.noarch eap7-hibernate-envers 5.3.20-1.Final_redhat_00001.1.el8eap.noarch eap7-hibernate-search-backend-jms 5.10.7-1.Final_redhat_00001.1.el8eap.noarch eap7-hibernate-search-engine 5.10.7-1.Final_redhat_00001.1.el8eap.noarch eap7-hibernate-search-orm 5.10.7-1.Final_redhat_00001.1.el8eap.noarch eap7-hibernate-search-serialization-avro 5.10.7-1.Final_redhat_00001.1.el8eap.noarch eap7-hibernate-validator 6.0.21-1.Final_redhat_00001.1.el8eap.noarch eap7-hibernate-validator-cdi 6.0.21-1.Final_redhat_00001.1.el8eap.noarch eap7-hornetq-commons 2.4.7-7.Final_redhat_2.1.el8eap.noarch eap7-hornetq-core-client 2.4.7-7.Final_redhat_2.1.el8eap.noarch eap7-hornetq-jms-client 2.4.7-7.Final_redhat_2.1.el8eap.noarch eap7-httpcomponents-asyncclient 4.1.4-1.redhat_00001.1.el8eap.noarch eap7-httpcomponents-client 4.5.13-1.redhat_00001.1.el8eap.noarch eap7-httpcomponents-core 4.4.13-1.redhat_00001.1.el8eap.noarch eap7-infinispan-cachestore-jdbc 9.4.19-1.Final_redhat_00001.1.el8eap.noarch eap7-infinispan-cachestore-remote 9.4.19-1.Final_redhat_00001.1.el8eap.noarch eap7-infinispan-client-hotrod 9.4.19-1.Final_redhat_00001.1.el8eap.noarch eap7-infinispan-commons 9.4.19-1.Final_redhat_00001.1.el8eap.noarch 
eap7-infinispan-core 9.4.19-1.Final_redhat_00001.1.el8eap.noarch eap7-infinispan-hibernate-cache-commons 9.4.19-1.Final_redhat_00001.1.el8eap.noarch eap7-infinispan-hibernate-cache-spi 9.4.19-1.Final_redhat_00001.1.el8eap.noarch eap7-infinispan-hibernate-cache-v53 9.4.19-1.Final_redhat_00001.1.el8eap.noarch eap7-ironjacamar-common-api 1.4.22-1.Final_redhat_00001.1.el8eap.noarch eap7-ironjacamar-common-impl 1.4.22-1.Final_redhat_00001.1.el8eap.noarch eap7-ironjacamar-common-spi 1.4.22-1.Final_redhat_00001.1.el8eap.noarch eap7-ironjacamar-core-api 1.4.22-1.Final_redhat_00001.1.el8eap.noarch eap7-ironjacamar-core-impl 1.4.22-1.Final_redhat_00001.1.el8eap.noarch eap7-ironjacamar-deployers-common 1.4.22-1.Final_redhat_00001.1.el8eap.noarch eap7-ironjacamar-jdbc 1.4.22-1.Final_redhat_00001.1.el8eap.noarch eap7-ironjacamar-validator 1.4.22-1.Final_redhat_00001.1.el8eap.noarch eap7-istack-commons-runtime 3.0.10-1.redhat_00001.1.el8eap.noarch eap7-istack-commons-tools 3.0.10-1.redhat_00001.1.el8eap.noarch eap7-jackson-annotations 2.10.4-1.redhat_00002.1.el8eap.noarch eap7-jackson-core 2.10.4-1.redhat_00002.1.el8eap.noarch eap7-jackson-coreutils 1.6.0-1.redhat_00006.1.el8eap.noarch eap7-jackson-databind 2.10.4-1.redhat_00002.1.el8eap.noarch eap7-jackson-datatype-jdk8 2.10.4-1.redhat_00002.1.el8eap.noarch eap7-jackson-datatype-jsr310 2.10.4-1.redhat_00002.1.el8eap.noarch eap7-jackson-jaxrs-base 2.10.4-1.redhat_00002.1.el8eap.noarch eap7-jackson-jaxrs-json-provider 2.10.4-1.redhat_00002.1.el8eap.noarch eap7-jackson-module-jaxb-annotations 2.10.4-3.redhat_00002.1.el8eap.noarch eap7-jaegertracing-jaeger-client-java-core 0.34.3-1.redhat_00001.1.el8eap.noarch eap7-jaegertracing-jaeger-client-java-thrift 0.34.3-1.redhat_00001.1.el8eap.noarch eap7-jakarta-el 3.0.3-1.redhat_00002.1.el8eap.noarch eap7-jakarta-security-enterprise-api 1.0.2-3.redhat_00001.1.el8eap.noarch eap7-jandex 2.1.2-1.Final_redhat_00001.1.el8eap.noarch eap7-jansi 1.18.0-1.redhat_00001.1.el8eap.noarch eap7-jasypt 1.9.3-1.redhat_00002.1.el8eap.noarch eap7-java-classmate 1.3.4-1.redhat_1.1.el8eap.noarch eap7-javaee-jpa-spec 2.2.3-1.redhat_00001.1.el8eap.noarch eap7-javaee-security-api 1.0.0-2.redhat_1.1.el8eap.noarch eap7-javaee-security-soteria-enterprise 1.0.1-3.redhat_00002.1.el8eap.noarch eap7-javaewah 1.1.6-1.redhat_00001.1.el8eap.noarch eap7-javapackages-tools 3.4.1-5.15.6.el8eap.noarch eap7-javassist 3.23.2-2.GA_redhat_00001.1.el8eap.noarch eap7-jaxb-jxc 2.3.3-4.b02_redhat_00001.1.el8eap.noarch eap7-jaxb-runtime 2.3.3-4.b02_redhat_00001.1.el8eap.noarch eap7-jaxb-xjc 2.3.3-4.b02_redhat_00001.1.el8eap.noarch eap7-jaxbintros 1.0.3-1.GA_redhat_00001.1.el8eap.noarch eap7-jaxen 1.1.6-14.redhat_2.1.el8eap.noarch eap7-jberet-core 1.3.7-1.Final_redhat_00001.1.el8eap.noarch eap7-jboss-aesh 2.4.0-1.redhat_00001.1.el8eap.noarch eap7-jboss-annotations-api_1.3_spec 2.0.1-2.Final_redhat_00001.1.el8eap.noarch eap7-jboss-batch-api_1.0_spec 2.0.0-1.Final_redhat_00001.1.el8eap.noarch eap7-jboss-classfilewriter 1.2.4-1.Final_redhat_00001.1.el8eap.noarch eap7-jboss-common-beans 2.0.1-1.Final_redhat_00001.1.el8eap.noarch eap7-jboss-concurrency-api_1.0_spec 2.0.0-1.Final_redhat_00001.1.el8eap.noarch eap7-jboss-connector-api_1.7_spec 2.0.0-2.Final_redhat_00001.1.el8eap.noarch eap7-jboss-dmr 1.5.0-2.Final_redhat_1.1.el8eap.noarch eap7-jboss-ejb-api_3.2_spec 2.0.0-1.Final_redhat_00001.1.el8eap.noarch eap7-jboss-ejb-client 4.0.37-1.Final_redhat_00001.1.el8eap.noarch eap7-jboss-ejb3-ext-api 2.3.0-1.Final_redhat_00001.1.el8eap.noarch eap7-jboss-el-api_3.0_spec 
2.0.0-2.Final_redhat_00001.1.el8eap.noarch eap7-jboss-genericjms 2.0.8-1.Final_redhat_00001.1.el8eap.noarch eap7-jboss-iiop-client 1.0.1-3.Final_redhat_1.1.el8eap.noarch eap7-jboss-interceptors-api_1.2_spec 2.0.0-3.Final_redhat_00002.1.el8eap.noarch eap7-jboss-invocation 1.5.3-1.Final_redhat_00001.1.el8eap.noarch eap7-jboss-j2eemgmt-api_1.1_spec 2.0.0-2.Final_redhat_00001.1.el8eap.noarch eap7-jboss-jacc-api_1.5_spec 2.0.0-2.Final_redhat_00001.1.el8eap.noarch eap7-jboss-jaspi-api_1.1_spec 2.0.1-2.Final_redhat_00001.1.el8eap.noarch eap7-jboss-jaxb-api_2.3_spec 1.0.1-1.Final_redhat_1.1.el8eap.noarch eap7-jboss-jaxrpc-api_1.1_spec 2.0.0-1.Final_redhat_00001.1.el8eap.noarch eap7-jboss-jaxrs-api_2.1_spec 2.0.1-1.Final_redhat_00001.1.el8eap.noarch eap7-jboss-jaxws-api_2.3_spec 1.0.0-1.Final_redhat_1.1.el8eap.noarch eap7-jboss-jms-api_2.0_spec 2.0.0-1.Final_redhat_00001.1.el8eap.noarch eap7-jboss-jsf-api_2.3_spec 3.0.0-4.SP04_redhat_00001.1.el8eap.noarch eap7-jboss-jsp-api_2.3_spec 2.0.0-1.Final_redhat_00001.1.el8eap.noarch eap7-jboss-logging 3.4.1-2.Final_redhat_00001.1.el8eap.noarch eap7-jboss-logmanager 2.1.17-1.Final_redhat_00001.1.el8eap.noarch eap7-jboss-marshalling 2.0.10-1.Final_redhat_00001.1.el8eap.noarch eap7-jboss-marshalling-river 2.0.10-1.Final_redhat_00001.1.el8eap.noarch eap7-jboss-metadata-appclient 13.0.0-1.Final_redhat_00001.1.el8eap.noarch eap7-jboss-metadata-common 13.0.0-1.Final_redhat_00001.1.el8eap.noarch eap7-jboss-metadata-ear 13.0.0-1.Final_redhat_00001.1.el8eap.noarch eap7-jboss-metadata-ejb 13.0.0-1.Final_redhat_00001.1.el8eap.noarch eap7-jboss-metadata-web 13.0.0-1.Final_redhat_00001.1.el8eap.noarch eap7-jboss-modules 1.11.0-1.Final_redhat_00001.1.el8eap.noarch eap7-jboss-msc 1.4.11-1.Final_redhat_00001.1.el8eap.noarch eap7-jboss-openjdk-orb 8.1.4-3.Final_redhat_00002.1.el8eap.noarch eap7-jboss-remoting 5.0.20-1.Final_redhat_00001.1.el8eap.noarch eap7-jboss-remoting-jmx 3.0.4-1.Final_redhat_00001.1.el8eap.noarch eap7-jboss-saaj-api_1.3_spec 1.0.6-1.Final_redhat_1.1.el8eap.noarch eap7-jboss-saaj-api_1.4_spec 1.0.1-1.Final_redhat_00001.1.el8eap.noarch eap7-jboss-seam-int 7.0.0-6.GA_redhat_2.1.el8eap.noarch eap7-jboss-security-negotiation 3.0.6-1.Final_redhat_00001.1.el8eap.noarch eap7-jboss-security-xacml 2.0.8-17.Final_redhat_8.1.el8eap.noarch eap7-jboss-server-migration 1.7.2-4.Final_redhat_00005.1.el8eap.noarch eap7-jboss-server-migration-cli 1.7.2-4.Final_redhat_00005.1.el8eap.noarch eap7-jboss-server-migration-core 1.7.2-4.Final_redhat_00005.1.el8eap.noarch eap7-jboss-server-migration-eap6.4 1.7.2-4.Final_redhat_00005.1.el8eap.noarch eap7-jboss-server-migration-eap6.4-to-eap7.3 1.7.2-4.Final_redhat_00005.1.el8eap.noarch eap7-jboss-server-migration-eap7.0 1.7.2-4.Final_redhat_00005.1.el8eap.noarch eap7-jboss-server-migration-eap7.1 1.7.2-4.Final_redhat_00005.1.el8eap.noarch eap7-jboss-server-migration-eap7.2 1.7.2-4.Final_redhat_00005.1.el8eap.noarch eap7-jboss-server-migration-eap7.2-to-eap7.3 1.7.2-4.Final_redhat_00005.1.el8eap.noarch eap7-jboss-server-migration-eap7.3-server 1.7.2-4.Final_redhat_00005.1.el8eap.noarch eap7-jboss-server-migration-wildfly10.0 1.7.2-4.Final_redhat_00005.1.el8eap.noarch eap7-jboss-server-migration-wildfly10.1 1.7.2-4.Final_redhat_00005.1.el8eap.noarch eap7-jboss-server-migration-wildfly11.0 1.7.2-4.Final_redhat_00005.1.el8eap.noarch eap7-jboss-server-migration-wildfly12.0 1.7.2-4.Final_redhat_00005.1.el8eap.noarch eap7-jboss-server-migration-wildfly13.0-server 1.7.2-4.Final_redhat_00005.1.el8eap.noarch 
eap7-jboss-server-migration-wildfly14.0-server 1.7.2-4.Final_redhat_00005.1.el8eap.noarch eap7-jboss-server-migration-wildfly15.0-server 1.7.2-4.Final_redhat_00005.1.el8eap.noarch eap7-jboss-server-migration-wildfly16.0-server 1.7.2-4.Final_redhat_00005.1.el8eap.noarch eap7-jboss-server-migration-wildfly17.0-server 1.7.2-4.Final_redhat_00005.1.el8eap.noarch eap7-jboss-server-migration-wildfly18.0-server 1.7.2-4.Final_redhat_00005.1.el8eap.noarch eap7-jboss-server-migration-wildfly8.2 1.7.2-4.Final_redhat_00005.1.el8eap.noarch eap7-jboss-server-migration-wildfly9.0 1.7.2-4.Final_redhat_00005.1.el8eap.noarch eap7-jboss-servlet-api_4.0_spec 2.0.0-2.Final_redhat_00001.1.el8eap.noarch eap7-jboss-stdio 1.1.0-1.Final_redhat_00001.1.el8eap.noarch eap7-jboss-threads 2.3.3-1.Final_redhat_00001.1.el8eap.noarch eap7-jboss-transaction-api_1.3_spec 2.0.0-3.Final_redhat_00002.1.el8eap.noarch eap7-jboss-transaction-spi 7.6.0-2.Final_redhat_1.1.el8eap.noarch eap7-jboss-vfs 3.2.15-1.Final_redhat_00001.1.el8eap.noarch eap7-jboss-websocket-api_1.1_spec 2.0.0-1.Final_redhat_00001.1.el8eap.noarch eap7-jboss-weld-3.1-api-weld-api 3.1.0-6.SP2_redhat_00001.1.el8eap.noarch eap7-jboss-weld-3.1-api-weld-spi 3.1.0-6.SP2_redhat_00001.1.el8eap.noarch eap7-jboss-xnio-base 3.7.12-1.Final_redhat_00001.1.el8eap.noarch eap7-jbossws-api 1.1.2-1.Final_redhat_00001.1.el8eap.noarch eap7-jbossws-common 3.2.3-1.Final_redhat_00001.1.el8eap.noarch eap7-jbossws-common-tools 1.3.2-1.Final_redhat_00001.1.el8eap.noarch eap7-jbossws-cxf 5.3.0-1.Final_redhat_00001.1.el8eap.noarch eap7-jbossws-jaxws-undertow-httpspi 1.0.1-3.Final_redhat_1.1.el8eap.noarch eap7-jbossws-spi 3.2.3-1.Final_redhat_00001.1.el8eap.noarch eap7-jcip-annotations 1.0.0-5.redhat_8.1.el8eap.noarch eap7-jettison 1.4.0-1.redhat_00001.1.el8eap.noarch eap7-jgroups 4.1.10-1.Final_redhat_00001.1.el8eap.noarch eap7-jgroups-azure 1.2.1-1.Final_redhat_00001.1.el8eap.noarch eap7-jgroups-kubernetes 1.0.13-1.Final_redhat_00001.1.el8eap.noarch eap7-joda-time 2.9.7-2.redhat_1.1.el8eap.noarch eap7-jsch 0.1.54-7.redhat_00001.1.el8eap.noarch eap7-json-patch 1.9.0-1.redhat_00002.1.el8eap.noarch eap7-jsonb-spec 1.0.2-1.redhat_00001.1.el8eap.noarch eap7-jsoup 1.8.3-4.redhat_2.1.el8eap.noarch eap7-jul-to-slf4j-stub 1.0.1-7.Final_redhat_3.1.el8eap.noarch eap7-jzlib 1.1.1-7.redhat_00001.1.el8eap.noarch eap7-log4j-jboss-logmanager 1.2.0-1.Final_redhat_00001.1.el8eap.noarch eap7-lucene-analyzers-common 5.5.5-3.redhat_2.1.el8eap.noarch eap7-lucene-backward-codecs 5.5.5-3.redhat_2.1.el8eap.noarch eap7-lucene-core 5.5.5-3.redhat_2.1.el8eap.noarch eap7-lucene-facet 5.5.5-3.redhat_2.1.el8eap.noarch eap7-lucene-misc 5.5.5-3.redhat_2.1.el8eap.noarch eap7-lucene-queries 5.5.5-3.redhat_2.1.el8eap.noarch eap7-lucene-queryparser 5.5.5-3.redhat_2.1.el8eap.noarch eap7-microprofile-config-api 1.4.0-1.redhat_00003.1.el8eap.noarch eap7-microprofile-health 2.2.0-1.redhat_00001.1.el8eap.noarch eap7-microprofile-metrics-api 2.3.0-1.redhat_00001.1.el8eap.noarch eap7-microprofile-opentracing-api 1.3.3-1.redhat_00001.1.el8eap.noarch eap7-microprofile-rest-client-api 1.4.0-1.redhat_00004.1.el8eap.noarch eap7-mod_cluster 1.4.1-1.Final_redhat_00001.1.el8eap.noarch eap7-mustache-java-compiler 0.9.4-2.redhat_1.1.el8eap.noarch eap7-narayana-compensations 5.9.10-1.Final_redhat_00001.1.el8eap.noarch eap7-narayana-jbosstxbridge 5.9.10-1.Final_redhat_00001.1.el8eap.noarch eap7-narayana-jbossxts 5.9.10-1.Final_redhat_00001.1.el8eap.noarch eap7-narayana-jts-idlj 5.9.10-1.Final_redhat_00001.1.el8eap.noarch 
eap7-narayana-jts-integration 5.9.10-1.Final_redhat_00001.1.el8eap.noarch eap7-narayana-restat-api 5.9.10-1.Final_redhat_00001.1.el8eap.noarch eap7-narayana-restat-bridge 5.9.10-1.Final_redhat_00001.1.el8eap.noarch eap7-narayana-restat-integration 5.9.10-1.Final_redhat_00001.1.el8eap.noarch eap7-narayana-restat-util 5.9.10-1.Final_redhat_00001.1.el8eap.noarch eap7-narayana-txframework 5.9.10-1.Final_redhat_00001.1.el8eap.noarch eap7-neethi 3.1.1-1.redhat_1.1.el8eap.noarch eap7-netty-all 4.1.48-1.Final_redhat_00001.1.el8eap.noarch eap7-netty-xnio-transport 0.1.6-1.Final_redhat_00001.1.el8eap.noarch eap7-objectweb-asm 7.1.0-1.redhat_00001.1.el8eap.noarch eap7-okhttp 3.9.0-3.redhat_3.1.el8eap.noarch eap7-okio 1.13.0-2.redhat_3.1.el8eap.noarch eap7-opensaml-core 3.3.1-1.redhat_00002.1.el8eap.noarch eap7-opensaml-profile-api 3.3.1-1.redhat_00002.1.el8eap.noarch eap7-opensaml-saml-api 3.3.1-1.redhat_00002.1.el8eap.noarch eap7-opensaml-saml-impl 3.3.1-1.redhat_00002.1.el8eap.noarch eap7-opensaml-security-api 3.3.1-1.redhat_00002.1.el8eap.noarch eap7-opensaml-security-impl 3.3.1-1.redhat_00002.1.el8eap.noarch eap7-opensaml-soap-api 3.3.1-1.redhat_00002.1.el8eap.noarch eap7-opensaml-xacml-api 3.3.1-1.redhat_00002.1.el8eap.noarch eap7-opensaml-xacml-impl 3.3.1-1.redhat_00002.1.el8eap.noarch eap7-opensaml-xacml-saml-api 3.3.1-1.redhat_00002.1.el8eap.noarch eap7-opensaml-xacml-saml-impl 3.3.1-1.redhat_00002.1.el8eap.noarch eap7-opensaml-xmlsec-api 3.3.1-1.redhat_00002.1.el8eap.noarch eap7-opensaml-xmlsec-impl 3.3.1-1.redhat_00002.1.el8eap.noarch eap7-opentracing-contrib-java-concurrent 0.2.1-1.redhat_00001.1.el8eap.noarch eap7-opentracing-contrib-java-jaxrs 0.4.1-1.redhat_00006.1.el8eap.noarch eap7-opentracing-contrib-java-tracerresolver 0.1.5-1.redhat_00001.1.el8eap.noarch eap7-opentracing-contrib-java-web-servlet-filter 0.2.3-1.redhat_00001.1.el8eap.noarch eap7-opentracing-interceptors 0.0.4.1-2.redhat_00002.1.el8eap.noarch eap7-opentracing-java-api 0.31.0-1.redhat_00008.1.el8eap.noarch eap7-opentracing-java-noop 0.31.0-1.redhat_00008.1.el8eap.noarch eap7-opentracing-java-util 0.31.0-1.redhat_00008.1.el8eap.noarch eap7-picketbox 5.0.3-8.Final_redhat_00007.1.el8eap.noarch eap7-picketbox-commons 1.0.0-4.final_redhat_5.1.el8eap.noarch eap7-picketbox-infinispan 5.0.3-8.Final_redhat_00007.1.el8eap.noarch eap7-picketlink-api 2.5.5-20.SP12_redhat_00009.1.el8eap.noarch eap7-picketlink-common 2.5.5-20.SP12_redhat_00009.1.el8eap.noarch eap7-picketlink-config 2.5.5-20.SP12_redhat_00009.1.el8eap.noarch eap7-picketlink-federation 2.5.5-20.SP12_redhat_00009.1.el8eap.noarch eap7-picketlink-idm-api 2.5.5-20.SP12_redhat_00009.1.el8eap.noarch eap7-picketlink-idm-impl 2.5.5-20.SP12_redhat_00009.1.el8eap.noarch eap7-picketlink-idm-simple-schema 2.5.5-20.SP12_redhat_00009.1.el8eap.noarch eap7-picketlink-impl 2.5.5-20.SP12_redhat_00009.1.el8eap.noarch eap7-picketlink-wildfly8 2.5.5-25.SP12_redhat_00013.1.el8eap.noarch eap7-python3-javapackages 3.4.1-5.15.6.el8eap.noarch eap7-reactive-streams 1.0.2-2.redhat_1.1.el8eap.noarch eap7-reactivex-rxjava 2.2.5-1.redhat_00001.1.el8eap.noarch eap7-relaxng-datatype 2.3.3-4.b02_redhat_00001.1.el8eap.noarch eap7-resteasy-atom-provider 3.11.3-1.Final_redhat_00001.1.el8eap.noarch eap7-resteasy-cdi 3.11.3-1.Final_redhat_00001.1.el8eap.noarch eap7-resteasy-client 3.11.3-1.Final_redhat_00001.1.el8eap.noarch eap7-resteasy-client-microprofile 3.11.3-1.Final_redhat_00001.1.el8eap.noarch eap7-resteasy-crypto 3.11.3-1.Final_redhat_00001.1.el8eap.noarch eap7-resteasy-jackson-provider 
3.11.3-1.Final_redhat_00001.1.el8eap.noarch eap7-resteasy-jackson2-provider 3.11.3-1.Final_redhat_00001.1.el8eap.noarch eap7-resteasy-jaxb-provider 3.11.3-1.Final_redhat_00001.1.el8eap.noarch eap7-resteasy-jaxrs 3.11.3-1.Final_redhat_00001.1.el8eap.noarch eap7-resteasy-jettison-provider 3.11.3-1.Final_redhat_00001.1.el8eap.noarch eap7-resteasy-jose-jwt 3.11.3-1.Final_redhat_00001.1.el8eap.noarch eap7-resteasy-jsapi 3.11.3-1.Final_redhat_00001.1.el8eap.noarch eap7-resteasy-json-binding-provider 3.11.3-1.Final_redhat_00001.1.el8eap.noarch eap7-resteasy-json-p-provider 3.11.3-1.Final_redhat_00001.1.el8eap.noarch eap7-resteasy-multipart-provider 3.11.3-1.Final_redhat_00001.1.el8eap.noarch eap7-resteasy-rxjava2 3.11.3-1.Final_redhat_00001.1.el8eap.noarch eap7-resteasy-spring 3.11.3-1.Final_redhat_00001.1.el8eap.noarch eap7-resteasy-validator-provider-11 3.11.3-1.Final_redhat_00001.1.el8eap.noarch eap7-resteasy-yaml-provider 3.11.3-1.Final_redhat_00001.1.el8eap.noarch eap7-rngom 2.3.3-4.b02_redhat_00001.1.el8eap.noarch eap7-runtime 1-16.el8eap.x86_64 eap7-shibboleth-java-support 7.3.0-1.redhat_00001.1.el8eap.noarch eap7-slf4j-api 1.7.22-4.redhat_2.1.el8eap.noarch eap7-slf4j-ext 1.7.22-4.redhat_2.1.el8eap.noarch eap7-slf4j-jboss-logmanager 1.0.4-1.GA_redhat_00001.1.el8eap.noarch eap7-smallrye-config 1.6.2-3.redhat_00004.1.el8eap.noarch eap7-smallrye-health 2.2.0-1.redhat_00004.1.el8eap.noarch eap7-smallrye-metrics 2.4.0-1.redhat_00004.1.el8eap.noarch eap7-smallrye-opentracing 1.3.4-1.redhat_00004.1.el8eap.noarch eap7-snakeyaml 1.26.0-1.redhat_00001.1.el8eap.noarch eap7-stax-ex 1.7.8-1.redhat_00001.1.el8eap.noarch eap7-stax2-api 4.2.0-1.redhat_00001.1.el8eap.noarch eap7-staxmapper 1.3.0-2.Final_redhat_1.1.el8eap.noarch eap7-sun-saaj-1.3-impl 1.3.16-18.SP1_redhat_6.1.el8eap.noarch eap7-sun-saaj-1.4-impl 1.4.1-1.SP1_redhat_00001.1.el8eap.noarch eap7-sun-ws-metadata-2.0-api 1.0.0-7.MR1_redhat_8.1.el8eap.noarch eap7-taglibs-standard-compat 1.2.6-2.RC1_redhat_1.1.el8eap.noarch eap7-taglibs-standard-impl 1.2.6-2.RC1_redhat_1.1.el8eap.noarch eap7-taglibs-standard-spec 1.2.6-2.RC1_redhat_1.1.el8eap.noarch eap7-thrift 0.13.0-1.redhat_00002.1.el8eap.noarch eap7-txw2 2.3.3-4.b02_redhat_00001.1.el8eap.noarch eap7-undertow 2.0.33-1.SP2_redhat_00001.1.el8eap.noarch eap7-undertow-jastow 2.0.8-1.Final_redhat_00001.1.el8eap.noarch eap7-undertow-js 1.0.2-2.Final_redhat_1.1.el8eap.noarch eap7-undertow-server 1.6.2-1.Final_redhat_00001.1.el8eap.noarch eap7-vdx-core 1.1.6-2.redhat_1.1.el8eap.noarch eap7-vdx-wildfly 1.1.6-2.redhat_1.1.el8eap.noarch eap7-velocity 2.2.0-1.redhat_00001.1.el8eap.noarch eap7-velocity-engine-core 2.2.0-1.redhat_00001.1.el8eap.noarch eap7-weld-cdi-2.0-api 2.0.2-2.redhat_00002.1.el8eap.noarch eap7-weld-core-impl 3.1.4-1.Final_redhat_00001.1.el8eap.noarch eap7-weld-core-jsf 3.1.4-1.Final_redhat_00001.1.el8eap.noarch eap7-weld-ejb 3.1.4-1.Final_redhat_00001.1.el8eap.noarch eap7-weld-jta 3.1.4-1.Final_redhat_00001.1.el8eap.noarch eap7-weld-probe-core 3.1.4-1.Final_redhat_00001.1.el8eap.noarch eap7-weld-web 3.1.4-1.Final_redhat_00001.1.el8eap.noarch eap7-wildfly 7.3.5-2.GA_redhat_00001.1.el8eap.noarch eap7-wildfly-client-config 1.0.1-2.Final_redhat_00001.1.el8eap.noarch eap7-wildfly-common 1.5.2-1.Final_redhat_00002.1.el8eap.noarch eap7-wildfly-discovery-client 1.2.1-1.Final_redhat_00001.1.el8eap.noarch eap7-wildfly-elytron 1.10.10-1.Final_redhat_00001.1.el8eap.noarch eap7-wildfly-elytron-tool 1.10.10-1.Final_redhat_00001.1.el8eap.noarch eap7-wildfly-http-client-common 
1.0.24-1.Final_redhat_00001.1.el8eap.noarch eap7-wildfly-http-ejb-client 1.0.24-1.Final_redhat_00001.1.el8eap.noarch eap7-wildfly-http-naming-client 1.0.24-1.Final_redhat_00001.1.el8eap.noarch eap7-wildfly-http-transaction-client 1.0.24-1.Final_redhat_00001.1.el8eap.noarch eap7-wildfly-modules 7.3.5-2.GA_redhat_00001.1.el8eap.noarch eap7-wildfly-naming-client 1.0.13-1.Final_redhat_00001.1.el8eap.noarch eap7-wildfly-openssl-java 1.0.12-1.Final_redhat_00001.1.el8eap.noarch eap7-wildfly-openssl-linux-x86_64 1.0.12-1.Final_redhat_00001.1.el8eap.x86_64 eap7-wildfly-transaction-client 1.1.13-1.Final_redhat_00001.1.el8eap.noarch eap7-woodstox-core 6.0.3-1.redhat_00001.1.el8eap.noarch eap7-ws-commons-XmlSchema 2.2.5-1.redhat_00001.1.el8eap.noarch eap7-wsdl4j 1.6.3-13.redhat_2.1.el8eap.noarch eap7-wss4j-bindings 2.2.5-1.redhat_00001.1.el8eap.noarch eap7-wss4j-policy 2.2.5-1.redhat_00001.1.el8eap.noarch eap7-wss4j-ws-security-common 2.2.5-1.redhat_00001.1.el8eap.noarch eap7-wss4j-ws-security-dom 2.2.5-1.redhat_00001.1.el8eap.noarch eap7-wss4j-ws-security-policy-stax 2.2.5-1.redhat_00001.1.el8eap.noarch eap7-wss4j-ws-security-stax 2.2.5-1.redhat_00001.1.el8eap.noarch eap7-xalan-j2 2.7.1-35.redhat_12.1.el8eap.noarch eap7-xerces-j2 2.12.0-2.SP03_redhat_00001.1.el8eap.noarch eap7-xml-resolver 1.2.0-7.redhat_12.1.el8eap.noarch eap7-xml-security 2.1.4-1.redhat_00001.1.el8eap.noarch eap7-xom 1.2.10-4.redhat_1.1.el8eap.noarch eap7-xsom 2.3.3-4.b02_redhat_00001.1.el8eap.noarch eap7-yasson 1.0.5-1.redhat_00001.1.el8eap.noarch ebay-cors-filter 1.0.1-4.el8ev.noarch efi-srpm-macros 3-2.el8.noarch elfutils 0.180-1.el8.x86_64 elfutils-debuginfod-client 0.180-1.el8.x86_64 elfutils-default-yama-scope 0.180-1.el8.noarch elfutils-libelf 0.180-1.el8.x86_64 elfutils-libs 0.180-1.el8.x86_64 emacs-filesystem 26.1-5.el8.noarch engine-db-query 1.6.2-1.el8ev.noarch environment-modules 4.5.2-1.el8.x86_64 ethtool 5.0-2.el8.x86_64 expat 2.2.5-4.el8.x86_64 file 5.33-16.el8.x86_64 file-libs 5.33-16.el8.x86_64 filesystem 3.8-3.el8.x86_64 findutils 4.6.0-20.el8.x86_64 firewalld 0.8.2-2.el8.noarch firewalld-filesystem 0.8.2-2.el8.noarch fontconfig 2.13.1-3.el8.x86_64 fontpackages-filesystem 1.44-22.el8.noarch freetype 2.9.1-4.el8_3.1.x86_64 fribidi 1.0.4-8.el8.x86_64 fuse-libs 2.9.7-12.el8.x86_64 gawk 4.2.1-1.el8.x86_64 gc 7.6.4-3.el8.x86_64 gd 2.2.5-7.el8.x86_64 gdb-headless 8.2-12.el8.x86_64 gdbm 1.18-1.el8.x86_64 gdbm-libs 1.18-1.el8.x86_64 gdk-pixbuf2 2.36.12-5.el8.x86_64 gdk-pixbuf2-modules 2.36.12-5.el8.x86_64 geolite2-city 20180605-1.el8.noarch geolite2-country 20180605-1.el8.noarch gettext 0.19.8.1-17.el8.x86_64 gettext-libs 0.19.8.1-17.el8.x86_64 ghc-srpm-macros 1.4.2-7.el8.noarch giflib 5.1.4-3.el8.x86_64 git-core 2.27.0-1.el8.x86_64 glassfish-fastinfoset 1.2.13-9.module+el8.1.0+3366+6dfb954c.noarch glassfish-jaxb-api 2.2.12-8.module+el8.1.0+3366+6dfb954c.noarch glassfish-jaxb-core 2.2.11-11.module+el8.1.0+3366+6dfb954c.noarch glassfish-jaxb-runtime 2.2.11-11.module+el8.1.0+3366+6dfb954c.noarch glassfish-jaxb-txw2 2.2.11-11.module+el8.1.0+3366+6dfb954c.noarch glib-networking 2.56.1-1.1.el8.x86_64 glib2 2.56.4-8.el8.x86_64 glibc 2.28-127.el8_3.2.x86_64 glibc-common 2.28-127.el8_3.2.x86_64 glibc-langpack-en 2.28-127.el8_3.2.x86_64 gmp 6.1.2-10.el8.x86_64 gnupg2 2.2.20-2.el8.x86_64 gnupg2-smime 2.2.20-2.el8.x86_64 gnutls 3.6.14-7.el8_3.x86_64 gnutls-dane 3.6.14-7.el8_3.x86_64 gnutls-utils 3.6.14-7.el8_3.x86_64 go-srpm-macros 2-16.el8.noarch gobject-introspection 1.56.1-1.el8.x86_64 google-droid-sans-fonts 
20120715-13.el8.noarch gpgme 1.13.1-3.el8.x86_64 grafana 6.7.4-3.el8.x86_64 grafana-postgres 6.7.4-3.el8.x86_64 graphite2 1.3.10-10.el8.x86_64 graphviz 2.40.1-40.el8.x86_64 grep 3.1-6.el8.x86_64 groff-base 1.22.3-18.el8.x86_64 grub2-common 2.02-90.el8_3.1.noarch grub2-pc 2.02-90.el8_3.1.x86_64 grub2-pc-modules 2.02-90.el8_3.1.noarch grub2-tools 2.02-90.el8_3.1.x86_64 grub2-tools-extra 2.02-90.el8_3.1.x86_64 grub2-tools-minimal 2.02-90.el8_3.1.x86_64 grubby 8.40-41.el8.x86_64 gsettings-desktop-schemas 3.32.0-5.el8.x86_64 gssproxy 0.8.0-16.el8.x86_64 gtk-update-icon-cache 3.22.30-6.el8.x86_64 gtk2 2.24.32-4.el8.x86_64 guile 2.0.14-7.el8.x86_64 gzip 1.9-9.el8.x86_64 hardlink 1.3-6.el8.x86_64 harfbuzz 1.7.5-3.el8.x86_64 hdparm 9.54-2.el8.x86_64 hicolor-icon-theme 0.17-2.el8.noarch hostname 3.20-6.el8.x86_64 httpcomponents-client 4.5.5-4.module+el8+2598+06babf2e.noarch httpcomponents-core 4.4.10-3.module+el8+2598+06babf2e.noarch httpd 2.4.37-30.module+el8.3.0+7001+0766b9e7.x86_64 httpd-filesystem 2.4.37-30.module+el8.3.0+7001+0766b9e7.noarch httpd-tools 2.4.37-30.module+el8.3.0+7001+0766b9e7.x86_64 hwdata 0.314-8.6.el8.noarch ima-evm-utils 1.1-5.el8.x86_64 info 6.5-6.el8.x86_64 initscripts 10.00.9-1.el8.x86_64 insights-client 3.1.1-1.el8_3.noarch ipcalc 0.2.4-4.el8.x86_64 iproute 5.3.0-5.el8.x86_64 iprutils 2.4.19-1.el8.x86_64 ipset 7.1-1.el8.x86_64 ipset-libs 7.1-1.el8.x86_64 iptables 1.8.4-15.el8_3.3.x86_64 iptables-ebtables 1.8.4-15.el8_3.3.x86_64 iptables-libs 1.8.4-15.el8_3.3.x86_64 iputils 20180629-2.el8.x86_64 irqbalance 1.4.0-4.el8.x86_64 istack-commons-runtime 2.21-9.el8+7.noarch iwl100-firmware 39.31.5.1-101.el8_3.1.noarch iwl1000-firmware 39.31.5.1-101.el8_3.1.noarch iwl105-firmware 18.168.6.1-101.el8_3.1.noarch iwl135-firmware 18.168.6.1-101.el8_3.1.noarch iwl2000-firmware 18.168.6.1-101.el8_3.1.noarch iwl2030-firmware 18.168.6.1-101.el8_3.1.noarch iwl3160-firmware 25.30.13.0-101.el8_3.1.noarch iwl3945-firmware 15.32.2.9-101.el8_3.1.noarch iwl4965-firmware 228.61.2.24-101.el8_3.1.noarch iwl5000-firmware 8.83.5.1_1-101.el8_3.1.noarch iwl5150-firmware 8.24.2.2-101.el8_3.1.noarch iwl6000-firmware 9.221.4.1-101.el8_3.1.noarch iwl6000g2a-firmware 18.168.6.1-101.el8_3.1.noarch iwl6050-firmware 41.28.5.1-101.el8_3.1.noarch iwl7260-firmware 25.30.13.0-101.el8_3.1.noarch jackson-annotations 2.10.0-1.module+el8.2.0+5059+3eb3af25.noarch jackson-core 2.10.0-1.module+el8.2.0+5059+3eb3af25.noarch jackson-databind 2.10.0-1.module+el8.2.0+5059+3eb3af25.noarch jackson-jaxrs-json-provider 2.9.9-1.module+el8.1.0+3832+9784644d.noarch jackson-jaxrs-providers 2.9.9-1.module+el8.1.0+3832+9784644d.noarch jackson-module-jaxb-annotations 2.7.6-4.module+el8.1.0+3366+6dfb954c.noarch jansson 2.11-3.el8.x86_64 jasper-libs 2.0.14-4.el8.x86_64 java-1.8.0-openjdk 1.8.0.282.b08-2.el8_3.x86_64 java-1.8.0-openjdk-headless 1.8.0.282.b08-2.el8_3.x86_64 java-11-openjdk-headless 11.0.10.0.9-4.el8_3.x86_64 java-client-kubevirt 0.5.0-1.el8ev.noarch javapackages-filesystem 5.3.0-2.module+el8+2598+06babf2e.noarch javapackages-tools 5.3.0-2.module+el8+2598+06babf2e.noarch jbig2dec-libs 0.14-4.el8_2.x86_64 jbigkit-libs 2.1-14.el8.x86_64 jboss-annotations-1.2-api 1.0.0-4.el8.noarch jboss-jaxrs-2.0-api 1.0.0-6.el8.noarch jboss-logging 3.3.0-5.el8.noarch jboss-logging-tools 2.0.1-6.el8.noarch jcl-over-slf4j 1.7.25-4.module+el8.1.0+3366+6dfb954c.noarch jdeparser 2.0.0-5.el8.noarch jq 1.5-12.el8.x86_64 json-c 0.13.1-0.2.el8.x86_64 json-glib 1.4.4-1.el8.x86_64 kbd 2.0.4-10.el8.x86_64 kbd-legacy 2.0.4-10.el8.noarch kbd-misc 
2.0.4-10.el8.noarch kernel 4.18.0-240.15.1.el8_3.x86_64 kernel-core 4.18.0-240.15.1.el8_3.x86_64 kernel-modules 4.18.0-240.15.1.el8_3.x86_64 kernel-tools 4.18.0-240.15.1.el8_3.x86_64 kernel-tools-libs 4.18.0-240.15.1.el8_3.x86_64 kexec-tools 2.0.20-34.el8_3.2.x86_64 keyutils 1.5.10-6.el8.x86_64 keyutils-libs 1.5.10-6.el8.x86_64 kmod 25-16.el8_3.1.x86_64 kmod-libs 25-16.el8_3.1.x86_64 kpartx 0.8.4-5.el8.x86_64 krb5-libs 1.18.2-5.el8.x86_64 langpacks-en 1.0-12.el8.noarch lcms2 2.9-2.el8.x86_64 less 530-1.el8.x86_64 libICE 1.0.9-15.el8.x86_64 libSM 1.2.3-1.el8.x86_64 libX11 1.6.8-3.el8.x86_64 libX11-common 1.6.8-3.el8.noarch libXau 1.0.9-3.el8.x86_64 libXaw 1.0.13-10.el8.x86_64 libXcomposite 0.4.4-14.el8.x86_64 libXcursor 1.1.15-3.el8.x86_64 libXdamage 1.1.4-14.el8.x86_64 libXext 1.3.4-1.el8.x86_64 libXfixes 5.0.3-7.el8.x86_64 libXft 2.3.3-1.el8.x86_64 libXi 1.7.10-1.el8.x86_64 libXinerama 1.1.4-1.el8.x86_64 libXmu 1.1.3-1.el8.x86_64 libXpm 3.5.12-8.el8.x86_64 libXrandr 1.5.2-1.el8.x86_64 libXrender 0.9.10-7.el8.x86_64 libXt 1.1.5-12.el8.x86_64 libXtst 1.2.3-7.el8.x86_64 libXxf86misc 1.0.4-1.el8.x86_64 libXxf86vm 1.1.4-9.el8.x86_64 libacl 2.2.53-1.el8.x86_64 libaio 0.3.112-1.el8.x86_64 libappstream-glib 0.7.14-3.el8.x86_64 libarchive 3.3.2-9.el8.x86_64 libassuan 2.5.1-3.el8.x86_64 libatomic_ops 7.6.2-3.el8.x86_64 libattr 2.4.48-3.el8.x86_64 libbabeltrace 1.5.4-3.el8.x86_64 libbasicobjects 0.1.1-39.el8.x86_64 libblkid 2.32.1-24.el8.x86_64 libcap 2.26-4.el8.x86_64 libcap-ng 0.7.9-5.el8.x86_64 libcollection 0.7.0-39.el8.x86_64 libcom_err 1.45.6-1.el8.x86_64 libcomps 0.1.11-4.el8.x86_64 libcroco 0.6.12-4.el8_2.1.x86_64 libcurl 7.61.1-14.el8_3.1.x86_64 libdaemon 0.14-15.el8.x86_64 libdatrie 0.2.9-7.el8.x86_64 libdb 5.3.28-39.el8.x86_64 libdb-utils 5.3.28-39.el8.x86_64 libdhash 0.5.0-39.el8.x86_64 libdnf 0.48.0-5.el8.x86_64 libedit 3.1-23.20170329cvs.el8.x86_64 libestr 0.1.10-1.el8.x86_64 libevent 2.1.8-5.el8.x86_64 libfastjson 0.99.8-2.el8.x86_64 libfdisk 2.32.1-24.el8.x86_64 libffi 3.1-22.el8.x86_64 libfontenc 1.1.3-8.el8.x86_64 libgcc 8.3.1-5.1.el8.x86_64 libgcrypt 1.8.5-4.el8.x86_64 libgfortran 8.3.1-5.1.el8.x86_64 libgomp 8.3.1-5.1.el8.x86_64 libgpg-error 1.31-1.el8.x86_64 libgs 9.25-7.el8.x86_64 libicu 60.3-2.el8_1.x86_64 libidn 1.34-5.el8.x86_64 libidn2 2.2.0-1.el8.x86_64 libijs 0.35-5.el8.x86_64 libini_config 1.3.1-39.el8.x86_64 libipt 1.6.1-8.el8.x86_64 libjpeg-turbo 1.5.3-10.el8.x86_64 libkcapi 1.2.0-2.el8.x86_64 libkcapi-hmaccalc 1.2.0-2.el8.x86_64 libksba 1.3.5-7.el8.x86_64 libldb 2.1.3-2.el8.x86_64 liblognorm 2.0.5-1.el8.x86_64 libmaxminddb 1.2.0-10.el8.x86_64 libmcpp 2.7.2-20.el8.x86_64 libmetalink 0.1.3-7.el8.x86_64 libmnl 1.0.4-6.el8.x86_64 libmodman 2.0.1-17.el8.x86_64 libmodulemd 2.9.4-2.el8.x86_64 libmount 2.32.1-24.el8.x86_64 libndp 1.7-3.el8.x86_64 libnetfilter_conntrack 1.0.6-5.el8.x86_64 libnfnetlink 1.0.1-13.el8.x86_64 libnfsidmap 2.3.3-35.el8.x86_64 libnftnl 1.1.5-4.el8.x86_64 libnghttp2 1.33.0-3.el8_2.1.x86_64 libnl3 3.5.0-1.el8.x86_64 libnl3-cli 3.5.0-1.el8.x86_64 libnsl2 1.2.0-2.20180605git4a062cf.el8.x86_64 libpaper 1.1.24-22.el8.x86_64 libpath_utils 0.2.1-39.el8.x86_64 libpcap 1.9.1-4.el8.x86_64 libpipeline 1.5.0-2.el8.x86_64 libpkgconf 1.4.2-1.el8.x86_64 libpng 1.6.34-5.el8.x86_64 libpq 12.5-1.el8_3.x86_64 libproxy 0.4.15-5.2.el8.x86_64 libpsl 0.20.2-6.el8.x86_64 libpwquality 1.4.0-9.el8.x86_64 libquadmath 8.3.1-5.1.el8.x86_64 libref_array 0.1.5-39.el8.x86_64 librepo 1.12.0-2.el8.x86_64 libreport-filesystem 2.9.5-15.el8.x86_64 librhsm 0.0.3-3.el8.x86_64 librsvg2 
2.42.7-4.el8.x86_64 libseccomp 2.4.3-1.el8.x86_64 libsecret 0.18.6-1.el8.x86_64 libselinux 2.9-4.el8_3.x86_64 libselinux-utils 2.9-4.el8_3.x86_64 libsemanage 2.9-3.el8.x86_64 libsepol 2.9-1.el8.x86_64 libsigsegv 2.11-5.el8.x86_64 libsmartcols 2.32.1-24.el8.x86_64 libsodium 1.0.18-2.el8ev.x86_64 libsolv 0.7.11-1.el8.x86_64 libsoup 2.62.3-2.el8.x86_64 libss 1.45.6-1.el8.x86_64 libssh 0.9.4-2.el8.x86_64 libssh-config 0.9.4-2.el8.noarch libsss_autofs 2.2.3-20.el8.x86_64 libsss_certmap 2.2.3-20.el8.x86_64 libsss_idmap 2.2.3-20.el8.x86_64 libsss_nss_idmap 2.2.3-20.el8.x86_64 libsss_sudo 2.2.3-20.el8.x86_64 libstdc++ 8.3.1-5.1.el8.x86_64 libstemmer 0-10.585svn.el8.x86_64 libsysfs 2.1.0-24.el8.x86_64 libtalloc 2.3.1-2.el8.x86_64 libtasn1 4.13-3.el8.x86_64 libtdb 1.4.3-1.el8.x86_64 libteam 1.31-2.el8.x86_64 libtevent 0.10.2-2.el8.x86_64 libthai 0.1.27-2.el8.x86_64 libtiff 4.0.9-18.el8.x86_64 libtirpc 1.1.4-4.el8.x86_64 libtool-ltdl 2.4.6-25.el8.x86_64 libunistring 0.9.9-3.el8.x86_64 libusbx 1.0.23-4.el8.x86_64 libuser 0.62-23.el8.x86_64 libutempter 1.1.6-14.el8.x86_64 libuuid 2.32.1-24.el8.x86_64 libverto 0.3.0-5.el8.x86_64 libverto-libevent 0.3.0-5.el8.x86_64 libwebp 1.0.0-1.el8.x86_64 libxcb 1.13.1-1.el8.x86_64 libxcrypt 4.1.1-4.el8.x86_64 libxkbcommon 0.9.1-1.el8.x86_64 libxml2 2.9.7-8.el8.x86_64 libxslt 1.1.32-5.el8.x86_64 libyaml 0.1.7-5.el8.x86_64 libzstd 1.4.4-1.el8.x86_64 lksctp-tools 1.0.18-3.el8.x86_64 log4j12 1.2.17-22.el8ev.noarch logrotate 3.14.0-4.el8.x86_64 lshw B.02.19.2-2.el8.x86_64 lsscsi 0.30-1.el8.x86_64 lua 5.3.4-11.el8.x86_64 lua-libs 5.3.4-11.el8.x86_64 lvm2 2.03.09-5.el8_3.2.x86_64 lvm2-libs 2.03.09-5.el8_3.2.x86_64 lz4-libs 1.8.3-2.el8.x86_64 lzo 2.08-14.el8.x86_64 mailcap 2.1.48-3.el8.noarch man-db 2.7.6.1-17.el8.x86_64 mcpp 2.7.2-20.el8.x86_64 memstrack 0.1.11-1.el8.x86_64 microcode_ctl 20200609-2.20210216.1.el8_3.x86_64 mod_auth_gssapi 1.6.1-6.el8.x86_64 mod_auth_openidc 2.3.7-4.module+el8.2.0+6919+ac02cfd2.3.x86_64 mod_http2 1.15.7-2.module+el8.3.0+7670+8bf57d29.x86_64 mod_session 2.4.37-30.module+el8.3.0+7001+0766b9e7.x86_64 mod_ssl 2.4.37-30.module+el8.3.0+7001+0766b9e7.x86_64 mozjs60 60.9.0-4.el8.x86_64 mpfr 3.1.6-1.el8.x86_64 ncurses 6.1-7.20180224.el8.x86_64 ncurses-base 6.1-7.20180224.el8.noarch ncurses-libs 6.1-7.20180224.el8.x86_64 net-tools 2.0-0.51.20160912git.el8.x86_64 nettle 3.4.1-2.el8.x86_64 newt 0.52.20-11.el8.x86_64 nfs-utils 2.3.3-35.el8.x86_64 nftables 0.9.3-16.el8.x86_64 nodejs 14.16.0-2.module+el8.3.0+10180+b92e1eb6.x86_64 novnc 1.1.0-1.el8ost.noarch npth 1.5-4.el8.x86_64 nspr 4.25.0-2.el8_2.x86_64 nss 3.53.1-17.el8_3.x86_64 nss-softokn 3.53.1-17.el8_3.x86_64 nss-softokn-freebl 3.53.1-17.el8_3.x86_64 nss-sysinit 3.53.1-17.el8_3.x86_64 nss-util 3.53.1-17.el8_3.x86_64 numactl-libs 2.0.12-11.el8.x86_64 ocaml-srpm-macros 5-4.el8.noarch oddjob 0.34.5-3.el8.x86_64 oddjob-mkhomedir 0.34.5-3.el8.x86_64 ongres-scram 1.0.0~beta.2-5.el8.noarch ongres-scram-client 1.0.0~beta.2-5.el8.noarch oniguruma 6.8.2-2.el8.x86_64 openblas 0.3.3-5.el8.x86_64 openblas-srpm-macros 2-2.el8.noarch openblas-threads 0.3.3-5.el8.x86_64 openjpeg2 2.3.1-6.el8.x86_64 openldap 2.4.46-15.el8.x86_64 opensc 0.20.0-2.el8.x86_64 openscap 1.3.3-6.el8_3.x86_64 openscap-scanner 1.3.3-6.el8_3.x86_64 openscap-utils 1.3.3-6.el8_3.x86_64 openssh 8.0p1-5.el8.x86_64 openssh-clients 8.0p1-5.el8.x86_64 openssh-server 8.0p1-5.el8.x86_64 openssl 1.1.1g-12.el8_3.x86_64 openssl-libs 1.1.1g-12.el8_3.x86_64 openssl-pkcs11 0.4.10-2.el8.x86_64 openstack-java-cinder-client 3.2.9-1.el8ev.noarch 
openstack-java-cinder-model 3.2.9-1.el8ev.noarch openstack-java-client 3.2.9-1.el8ev.noarch openstack-java-glance-client 3.2.9-1.el8ev.noarch openstack-java-glance-model 3.2.9-1.el8ev.noarch openstack-java-keystone-client 3.2.9-1.el8ev.noarch openstack-java-keystone-model 3.2.9-1.el8ev.noarch openstack-java-quantum-client 3.2.9-1.el8ev.noarch openstack-java-quantum-model 3.2.9-1.el8ev.noarch openstack-java-resteasy-connector 3.2.9-1.el8ev.noarch openvswitch-selinux-extra-policy 1.0-28.el8fdp.noarch openvswitch2.11 2.11.3-83.el8fdp.x86_64 os-prober 1.74-6.el8.x86_64 otopi-common 1.9.2-1.el8ev.noarch ovirt-ansible-collection 1.2.4-1.el8ev.noarch ovirt-cockpit-sso 0.1.4-1.el8ev.noarch ovirt-engine 4.4.4.7-0.2.el8ev.noarch ovirt-engine-backend 4.4.4.7-0.2.el8ev.noarch ovirt-engine-dbscripts 4.4.4.7-0.2.el8ev.noarch ovirt-engine-dwh 4.4.4.2-1.el8ev.noarch ovirt-engine-dwh-grafana-integration-setup 4.4.4.2-1.el8ev.noarch ovirt-engine-dwh-setup 4.4.4.2-1.el8ev.noarch ovirt-engine-extension-aaa-jdbc 1.2.0-1.el8ev.noarch ovirt-engine-extension-aaa-ldap 1.4.2-1.el8ev.noarch ovirt-engine-extension-aaa-ldap-setup 1.4.2-1.el8ev.noarch ovirt-engine-extension-aaa-misc 1.1.0-1.el8ev.noarch ovirt-engine-extension-logger-log4j 1.1.1-1.el8ev.noarch ovirt-engine-extensions-api 1.0.1-1.el8ev.noarch ovirt-engine-metrics 1.4.2.2-1.el8ev.noarch ovirt-engine-restapi 4.4.4.7-0.2.el8ev.noarch ovirt-engine-setup 4.4.4.7-0.2.el8ev.noarch ovirt-engine-setup-base 4.4.4.7-0.2.el8ev.noarch ovirt-engine-setup-plugin-cinderlib 4.4.4.7-0.2.el8ev.noarch ovirt-engine-setup-plugin-imageio 4.4.4.7-0.2.el8ev.noarch ovirt-engine-setup-plugin-ovirt-engine 4.4.4.7-0.2.el8ev.noarch ovirt-engine-setup-plugin-ovirt-engine-common 4.4.4.7-0.2.el8ev.noarch ovirt-engine-setup-plugin-vmconsole-proxy-helper 4.4.4.7-0.2.el8ev.noarch ovirt-engine-setup-plugin-websocket-proxy 4.4.4.7-0.2.el8ev.noarch ovirt-engine-tools 4.4.4.7-0.2.el8ev.noarch ovirt-engine-tools-backup 4.4.4.7-0.2.el8ev.noarch ovirt-engine-ui-extensions 1.2.4-1.el8ev.noarch ovirt-engine-vmconsole-proxy-helper 4.4.4.7-0.2.el8ev.noarch ovirt-engine-webadmin-portal 4.4.4.7-0.2.el8ev.noarch ovirt-engine-websocket-proxy 4.4.4.7-0.2.el8ev.noarch ovirt-imageio-common 2.1.1-1.el8ev.x86_64 ovirt-imageio-daemon 2.1.1-1.el8ev.x86_64 ovirt-log-collector 4.4.4-1.el8ev.noarch ovirt-provider-ovn 1.2.33-1.el8ev.noarch ovirt-vmconsole 1.0.9-1.el8ev.noarch ovirt-vmconsole-proxy 1.0.9-1.el8ev.noarch ovirt-web-ui 1.6.6-1.el8ev.noarch ovn2.11 2.11.1-57.el8fdp.x86_64 ovn2.11-central 2.11.1-57.el8fdp.x86_64 p11-kit 0.23.14-5.el8_0.x86_64 p11-kit-trust 0.23.14-5.el8_0.x86_64 pam 1.3.1-11.el8.x86_64 pango 1.42.4-6.el8.x86_64 parted 3.2-38.el8.x86_64 passwd 0.80-3.el8.x86_64 patch 2.7.6-11.el8.x86_64 pciutils 3.6.4-2.el8.x86_64 pciutils-libs 3.6.4-2.el8.x86_64 pcre 8.42-4.el8.x86_64 pcre2 10.32-2.el8.x86_64 pcsc-lite 1.8.23-3.el8.x86_64 pcsc-lite-ccid 1.4.29-4.el8.x86_64 pcsc-lite-libs 1.8.23-3.el8.x86_64 perl-Carp 1.42-396.el8.noarch perl-Encode 2.97-3.el8.x86_64 perl-Errno 1.28-417.el8_3.x86_64 perl-Exporter 5.72-396.el8.noarch perl-File-Path 2.15-2.el8.noarch perl-File-Temp 0.230.600-1.el8.noarch perl-Getopt-Long 2.50-4.el8.noarch perl-HTTP-Tiny 0.074-1.el8.noarch perl-IO 1.38-417.el8_3.x86_64 perl-IO-Socket-IP 0.39-5.el8.noarch perl-MIME-Base64 3.15-396.el8.x86_64 perl-PathTools 3.74-1.el8.x86_64 perl-Pod-Escapes 1.07-395.el8.noarch perl-Pod-Perldoc 3.28-396.el8.noarch perl-Pod-Simple 3.35-395.el8.noarch perl-Pod-Usage 1.69-395.el8.noarch perl-Scalar-List-Utils 1.49-2.el8.x86_64 perl-Socket 
2.027-3.el8.x86_64 perl-Storable 3.11-3.el8.x86_64 perl-Term-ANSIColor 4.06-396.el8.noarch perl-Term-Cap 1.17-395.el8.noarch perl-Text-ParseWords 3.30-395.el8.noarch perl-Text-Tabs+Wrap 2013.0523-395.el8.noarch perl-Time-Local 1.280-1.el8.noarch perl-Unicode-Normalize 1.25-396.el8.x86_64 perl-constant 1.33-396.el8.noarch perl-interpreter 5.26.3-417.el8_3.x86_64 perl-libs 5.26.3-417.el8_3.x86_64 perl-macros 5.26.3-417.el8_3.x86_64 perl-parent 0.237-1.el8.noarch perl-podlators 4.11-1.el8.noarch perl-srpm-macros 1-25.el8.noarch perl-threads 2.21-2.el8.x86_64 perl-threads-shared 1.58-2.el8.x86_64 pigz 2.4-4.el8.x86_64 pinentry 1.1.0-2.el8.x86_64 pixman 0.38.4-1.el8.x86_64 pkgconf 1.4.2-1.el8.x86_64 pkgconf-m4 1.4.2-1.el8.noarch pkgconf-pkg-config 1.4.2-1.el8.x86_64 pki-servlet-4.0-api 9.0.30-1.module+el8.3.0+6730+8f9c6254.noarch platform-python 3.6.8-31.el8.x86_64 platform-python-pip 9.0.3-18.el8.noarch platform-python-setuptools 39.2.0-6.el8.noarch policycoreutils 2.9-9.el8.x86_64 policycoreutils-python-utils 2.9-9.el8.noarch polkit 0.115-11.el8.x86_64 polkit-libs 0.115-11.el8.x86_64 polkit-pkla-compat 0.1-12.el8.x86_64 popt 1.16-14.el8.x86_64 postgresql 12.5-1.module+el8.3.0+9042+664538f4.x86_64 postgresql-contrib 12.5-1.module+el8.3.0+9042+664538f4.x86_64 postgresql-jdbc 42.2.3-3.el8_2.noarch postgresql-server 12.5-1.module+el8.3.0+9042+664538f4.x86_64 prefixdevname 0.1.0-6.el8.x86_64 procps-ng 3.3.15-3.el8.x86_64 psmisc 23.1-5.el8.x86_64 publicsuffix-list 20180723-1.el8.noarch publicsuffix-list-dafsa 20180723-1.el8.noarch python-srpm-macros 3-39.el8.noarch python3-aniso8601 0.82-4.el8ost.noarch python3-ansible-runner 1.4.6-2.el8ar.noarch python3-asn1crypto 0.24.0-3.el8.noarch python3-audit 3.0-0.17.20191104git1c2f876.el8.x86_64 python3-babel 2.5.1-5.el8.noarch python3-bcrypt 3.1.6-2.el8ev.x86_64 python3-bind 9.11.20-5.el8_3.1.noarch python3-cairo 1.16.3-6.el8.x86_64 python3-cffi 1.11.5-5.el8.x86_64 python3-chardet 3.0.4-7.el8.noarch python3-click 6.7-8.el8.noarch python3-configobj 5.0.6-11.el8.noarch python3-cryptography 2.3-3.el8.x86_64 python3-daemon 2.1.2-9.el8ar.noarch python3-dateutil 2.6.1-6.el8.noarch python3-dbus 1.2.4-15.el8.x86_64 python3-decorator 4.2.1-2.el8.noarch python3-dmidecode 3.12.2-15.el8.x86_64 python3-dnf 4.2.23-4.el8.noarch python3-dnf-plugin-versionlock 4.0.17-5.el8.noarch python3-dnf-plugins-core 4.0.17-5.el8.noarch python3-docutils 0.14-12.module+el8.1.0+3334+5cb623d7.noarch python3-ethtool 0.14-3.el8.x86_64 python3-firewall 0.8.2-2.el8.noarch python3-flask 1.0.2-2.el8ost.noarch python3-flask-restful 0.3.6-8.el8ost.noarch python3-gobject 3.28.3-2.el8.x86_64 python3-gobject-base 3.28.3-2.el8.x86_64 python3-gpg 1.13.1-3.el8.x86_64 python3-hawkey 0.48.0-5.el8.x86_64 python3-idna 2.5-5.el8.noarch python3-iniparse 0.4-31.el8.noarch python3-inotify 0.9.6-13.el8.noarch python3-itsdangerous 0.24-14.el8.noarch python3-jinja2 2.10.1-2.el8_0.noarch python3-jmespath 0.9.0-11.el8.noarch python3-jsonpatch 1.21-2.el8.noarch python3-jsonpointer 1.10-11.el8.noarch python3-jsonschema 2.6.0-4.el8.noarch python3-jwt 1.6.1-2.el8.noarch python3-ldap 3.1.0-5.el8.x86_64 python3-libcomps 0.1.11-4.el8.x86_64 python3-libdnf 0.48.0-5.el8.x86_64 python3-librepo 1.12.0-2.el8.x86_64 python3-libs 3.6.8-31.el8.x86_64 python3-libselinux 2.9-4.el8_3.x86_64 python3-libsemanage 2.9-3.el8.x86_64 python3-libxml2 2.9.7-8.el8.x86_64 python3-linux-procfs 0.6.2-2.el8.noarch python3-lockfile 0.11.0-8.el8ar.noarch python3-lxml 4.2.3-1.el8.x86_64 python3-m2crypto 0.35.2-5.el8ev.x86_64 python3-magic 
5.33-16.el8.noarch python3-markupsafe 0.23-19.el8.x86_64 python3-mod_wsgi 4.6.4-4.el8.x86_64 python3-netaddr 0.7.19-8.1.el8ost.noarch python3-nftables 0.9.3-16.el8.x86_64 python3-notario 0.0.16-2.el8cp.noarch python3-numpy 1.14.3-9.el8.x86_64 python3-oauthlib 2.1.0-1.el8.noarch python3-openvswitch2.11 2.11.3-83.el8fdp.x86_64 python3-otopi 1.9.2-1.el8ev.noarch python3-ovirt-engine-lib 4.4.4.7-0.2.el8ev.noarch python3-ovirt-engine-sdk4 4.4.7-1.el8ev.x86_64 python3-ovirt-setup-lib 1.3.2-1.el8ev.noarch python3-ovsdbapp 0.17.1-0.20191216120142.206cf14.el8ost.noarch python3-paramiko 2.4.3-2.el8ev.noarch python3-passlib 1.7.0-5.el8ost.noarch python3-pbr 5.1.2-2.el8ost.noarch python3-perf 4.18.0-240.15.1.el8_3.x86_64 python3-pexpect 4.6-2.el8ost.noarch python3-pip 9.0.3-18.el8.noarch python3-pip-wheel 9.0.3-18.el8.noarch python3-ply 3.9-8.el8.noarch python3-policycoreutils 2.9-9.el8.noarch python3-prettytable 0.7.2-14.el8.noarch python3-psutil 5.4.3-10.el8.x86_64 python3-psycopg2 2.7.5-7.el8.x86_64 python3-ptyprocess 0.5.2-4.el8.noarch python3-pwquality 1.4.0-9.el8.x86_64 python3-pyOpenSSL 18.0.0-1.el8.noarch python3-pyasn1 0.3.7-6.el8.noarch python3-pyasn1-modules 0.3.7-6.el8.noarch python3-pycparser 2.14-14.el8.noarch python3-pycurl 7.43.0.2-4.el8.x86_64 python3-pydbus 0.6.0-5.el8.noarch python3-pynacl 1.3.0-5.el8ev.x86_64 python3-pyserial 3.1.1-8.el8.noarch python3-pysocks 1.6.8-3.el8.noarch python3-pytz 2017.2-9.el8.noarch python3-pyudev 0.21.0-7.el8.noarch python3-pyyaml 3.12-12.el8.x86_64 python3-requests 2.20.0-2.1.el8_1.noarch python3-rpm 4.14.3-4.el8.x86_64 python3-rpm-macros 3-39.el8.noarch python3-schedutils 0.6-6.el8.x86_64 python3-setools 4.3.0-2.el8.x86_64 python3-setuptools 39.2.0-6.el8.noarch python3-setuptools-wheel 39.2.0-6.el8.noarch python3-six 1.12.0-1.el8ost.noarch python3-slip 0.6.4-11.el8.noarch python3-slip-dbus 0.6.4-11.el8.noarch python3-subscription-manager-rhsm 1.27.18-1.el8_3.x86_64 python3-syspurpose 1.27.18-1.el8_3.x86_64 python3-systemd 234-8.el8.x86_64 python3-unbound 1.7.3-14.el8.x86_64 python3-urllib3 1.24.2-4.el8.noarch python3-websocket-client 0.54.0-1.el8ost.noarch python3-websockify 0.8.0-12.el8ev.noarch python3-werkzeug 0.16.0-1.el8ost.noarch python36 3.6.8-2.module+el8.1.0+3334+5cb623d7.x86_64 qemu-guest-agent 4.2.0-34.module+el8.3.0+9903+ca3e42fb.4.x86_64 qt5-srpm-macros 5.12.5-3.el8.noarch quota 4.04-10.el8.x86_64 quota-nls 4.04-10.el8.noarch readline 7.0-10.el8.x86_64 redhat-logos 81.1-1.el8.x86_64 redhat-logos-httpd 81.1-1.el8.noarch redhat-release 8.3-1.0.el8.x86_64 redhat-release-eula 8.3-1.0.el8.x86_64 redhat-rpm-config 123-1.el8.noarch relaxngDatatype 2011.1-7.module+el8.1.0+3366+6dfb954c.noarch resteasy 3.0.26-3.module+el8.2.0+5723+4574fbff.noarch rhel-system-roles 1.0-21.el8.noarch rhsm-icons 1.27.18-1.el8_3.noarch rhv-log-collector-analyzer 1.0.6-1.el8ev.noarch rhv-openvswitch 2.11-7.el8ev.noarch rhv-openvswitch-ovn-central 2.11-7.el8ev.noarch rhv-openvswitch-ovn-common 2.11-7.el8ev.noarch rhv-python-openvswitch 2.11-7.el8ev.noarch rhvm 4.4.4.7-0.2.el8ev.noarch rhvm-branding-rhv 4.4.7-1.el8ev.noarch rhvm-dependencies 4.4.1-1.el8ev.noarch rhvm-setup-plugins 4.4.2-1.el8ev.noarch rng-tools 6.8-3.el8.x86_64 rootfiles 8.1-22.el8.noarch rpcbind 1.2.5-7.el8.x86_64 rpm 4.14.3-4.el8.x86_64 rpm-build 4.14.3-4.el8.x86_64 rpm-build-libs 4.14.3-4.el8.x86_64 rpm-libs 4.14.3-4.el8.x86_64 rpm-plugin-selinux 4.14.3-4.el8.x86_64 rpm-plugin-systemd-inhibit 4.14.3-4.el8.x86_64 rpmdevtools 8.10-8.el8.noarch rsync 3.1.3-9.el8.x86_64 rsyslog 8.1911.0-6.el8.x86_64 
rsyslog-elasticsearch 8.1911.0-6.el8.x86_64 rsyslog-mmjsonparse 8.1911.0-6.el8.x86_64 rsyslog-mmnormalize 8.1911.0-6.el8.x86_64 rust-srpm-macros 5-2.el8.noarch scap-security-guide 0.1.48-1.el8ev.noarch scl-utils 2.0.2-12.el8.x86_64 sed 4.5-2.el8.x86_64 selinux-policy 3.14.3-54.el8_3.2.noarch selinux-policy-targeted 3.14.3-54.el8_3.2.noarch setroubleshoot-plugins 3.3.13-1.el8.noarch setroubleshoot-server 3.3.24-1.el8.x86_64 setup 2.12.2-6.el8.noarch sg3_utils 1.44-5.el8.x86_64 sg3_utils-libs 1.44-5.el8.x86_64 sgml-common 0.6.3-50.el8.noarch shadow-utils 4.6-11.el8.x86_64 shared-mime-info 1.9-3.el8.x86_64 slang 2.3.2-3.el8.x86_64 slf4j 1.7.25-4.module+el8.1.0+3366+6dfb954c.noarch slf4j-jdk14 1.7.25-4.module+el8.1.0+3366+6dfb954c.noarch snappy 1.1.8-3.el8.x86_64 snmp4j 2.4.1-1.el8ev.noarch sos 3.9.1-6.el8.noarch source-highlight 3.1.8-16.el8.x86_64 spice-client-win-x64 8.3-2.el8.noarch spice-client-win-x86 8.3-2.el8.noarch sqlite 3.26.0-11.el8.x86_64 sqlite-libs 3.26.0-11.el8.x86_64 squashfs-tools 4.3-19.el8.x86_64 sscg 2.3.3-14.el8.x86_64 sshpass 1.06-3.el8ae.x86_64 sssd-client 2.2.3-20.el8.x86_64 sssd-common 2.2.3-20.el8.x86_64 sssd-kcm 2.2.3-20.el8.x86_64 sssd-nfs-idmap 2.2.3-20.el8.x86_64 stax-ex 1.7.7-8.module+el8.2.0+5723+4574fbff.noarch subscription-manager 1.27.18-1.el8_3.x86_64 subscription-manager-cockpit 1.27.18-1.el8_3.noarch subscription-manager-rhsm-certificates 1.27.18-1.el8_3.x86_64 sudo 1.8.29-6.el8_3.1.x86_64 systemd 239-41.el8_3.1.x86_64 systemd-libs 239-41.el8_3.1.x86_64 systemd-pam 239-41.el8_3.1.x86_64 systemd-udev 239-41.el8_3.1.x86_64 tar 1.30-5.el8.x86_64 tcl 8.6.8-2.el8.x86_64 tcpdump 4.9.3-1.el8.x86_64 teamd 1.31-2.el8.x86_64 timedatex 0.5-3.el8.x86_64 tmux 2.7-1.el8.x86_64 trousers 0.3.14-4.el8.x86_64 trousers-lib 0.3.14-4.el8.x86_64 ttmkfdir 3.0.9-54.el8.x86_64 tuned 2.14.0-3.el8_3.2.noarch tzdata 2021a-1.el8.noarch tzdata-java 2021a-1.el8.noarch unbound-libs 1.7.3-14.el8.x86_64 unboundid-ldapsdk 4.0.14-1.el8ev.noarch unzip 6.0-43.el8.x86_64 urw-base35-bookman-fonts 20170801-10.el8.noarch urw-base35-c059-fonts 20170801-10.el8.noarch urw-base35-d050000l-fonts 20170801-10.el8.noarch urw-base35-fonts 20170801-10.el8.noarch urw-base35-fonts-common 20170801-10.el8.noarch urw-base35-gothic-fonts 20170801-10.el8.noarch urw-base35-nimbus-mono-ps-fonts 20170801-10.el8.noarch urw-base35-nimbus-roman-fonts 20170801-10.el8.noarch urw-base35-nimbus-sans-fonts 20170801-10.el8.noarch urw-base35-p052-fonts 20170801-10.el8.noarch urw-base35-standard-symbols-ps-fonts 20170801-10.el8.noarch urw-base35-z003-fonts 20170801-10.el8.noarch usermode 1.113-1.el8.x86_64 util-linux 2.32.1-24.el8.x86_64 uuid 1.6.2-42.el8.x86_64 vdsm-jsonrpc-java 1.6.0-1.el8ev.noarch vim-filesystem 8.0.1763-13.el8.noarch vim-minimal 8.0.1763-13.el8.x86_64 virt-what 1.18-6.el8.x86_64 which 2.21-12.el8.x86_64 ws-commons-util 1.0.2-1.el8ev.noarch xfsprogs 5.0.0-4.el8.x86_64 xkeyboard-config 2.28-1.el8.noarch xml-common 0.6.3-50.el8.noarch xmlrpc-client 3.1.3-1.el8ev.noarch xmlrpc-common 3.1.3-1.el8ev.noarch xmlstreambuffer 1.5.4-8.module+el8.2.0+5723+4574fbff.noarch xorg-x11-font-utils 7.5-40.el8.x86_64 xorg-x11-fonts-ISO8859-1-100dpi 7.5-19.el8.noarch xorg-x11-fonts-Type1 7.5-19.el8.noarch xorg-x11-server-utils 7.7-27.el8.x86_64 xsom 0-19.20110809svn.module+el8.1.0+3366+6dfb954c.noarch xz 5.2.4-3.el8.x86_64 xz-libs 5.2.4-3.el8.x86_64 yajl 2.1.0-10.el8.x86_64 yum 4.2.23-4.el8.noarch yum-utils 4.0.17-5.el8.noarch zip 3.0-23.el8.x86_64 zlib 1.2.11-16.2.el8_2.x86_64 zstd 1.4.4-1.el8.x86_64 5.2. 
Red Hat Virtualization Host 4.4 for RHEL 8 x86_64 (RPMs) The following table outlines the packages included in the Red Hat Virtualization Host 4.4.4 image. Table 5.2. Red Hat Virtualization Host for RHEL 8 x86_64 (RPMs) Name Version GConf2 3.2.6-22.el8.x86_64 NetworkManager 1.26.0-12.el8_3.x86_64 NetworkManager-config-server 1.26.0-12.el8_3.noarch NetworkManager-libnm 1.26.0-12.el8_3.x86_64 NetworkManager-ovs 1.26.0-12.el8_3.x86_64 NetworkManager-team 1.26.0-12.el8_3.x86_64 NetworkManager-tui 1.26.0-12.el8_3.x86_64 abattis-cantarell-fonts 0.0.25-4.el8.noarch abrt 2.10.9-20.el8.x86_64 abrt-addon-ccpp 2.10.9-20.el8.x86_64 abrt-addon-coredump-helper 2.10.9-20.el8.x86_64 abrt-addon-kerneloops 2.10.9-20.el8.x86_64 abrt-addon-pstoreoops 2.10.9-20.el8.x86_64 abrt-addon-vmcore 2.10.9-20.el8.x86_64 abrt-addon-xorg 2.10.9-20.el8.x86_64 abrt-cli 2.10.9-20.el8.x86_64 abrt-dbus 2.10.9-20.el8.x86_64 abrt-libs 2.10.9-20.el8.x86_64 abrt-tui 2.10.9-20.el8.x86_64 acl 2.2.53-1.el8.x86_64 aide 0.16-14.el8.x86_64 alsa-lib 1.2.3.2-1.el8.x86_64 ansible 2.9.15-1.el8ae.noarch attr 2.4.48-3.el8.x86_64 audispd-plugins 3.0-0.17.20191104git1c2f876.el8.x86_64 audit 3.0-0.17.20191104git1c2f876.el8.x86_64 audit-libs 3.0-0.17.20191104git1c2f876.el8.x86_64 augeas 1.12.0-5.el8.x86_64 augeas-libs 1.12.0-5.el8.x86_64 authselect 1.2.1-2.el8.x86_64 authselect-compat 1.2.1-2.el8.x86_64 authselect-libs 1.2.1-2.el8.x86_64 autofs 5.1.4-43.el8.x86_64 autogen-libopts 5.18.12-8.el8.x86_64 avahi-libs 0.7-19.el8.x86_64 basesystem 11-5.el8.noarch bash 4.4.19-12.el8.x86_64 bc 1.07.1-5.el8.x86_64 bind-export-libs 9.11.20-5.el8.x86_64 bind-libs 9.11.20-5.el8.x86_64 bind-libs-lite 9.11.20-5.el8.x86_64 bind-license 9.11.20-5.el8.noarch bind-utils 9.11.20-5.el8.x86_64 binutils 2.30-79.el8.x86_64 biosdevname 0.7.3-2.el8.x86_64 blivet-data 3.2.2-6.el8.noarch boost-atomic 1.66.0-10.el8.x86_64 boost-chrono 1.66.0-10.el8.x86_64 boost-date-time 1.66.0-10.el8.x86_64 boost-iostreams 1.66.0-10.el8.x86_64 boost-program-options 1.66.0-10.el8.x86_64 boost-random 1.66.0-10.el8.x86_64 boost-regex 1.66.0-10.el8.x86_64 boost-system 1.66.0-10.el8.x86_64 boost-thread 1.66.0-10.el8.x86_64 brotli 1.0.6-2.el8.x86_64 bzip2 1.0.6-26.el8.x86_64 bzip2-libs 1.0.6-26.el8.x86_64 c-ares 1.13.0-5.el8.x86_64 ca-certificates 2020.2.41-80.0.el8_2.noarch cairo 1.15.12-3.el8.x86_64 celt051 0.5.1.3-15.el8.x86_64 certmonger 0.79.7-15.el8.x86_64 checkpolicy 2.9-1.el8.x86_64 chkconfig 1.13-2.el8.x86_64 chrony 3.5-1.el8.x86_64 clevis 13-3.el8.x86_64 clevis-dracut 13-3.el8.x86_64 clevis-luks 13-3.el8.x86_64 clevis-systemd 13-3.el8.x86_64 cockpit 224.2-1.el8.x86_64 cockpit-bridge 224.2-1.el8.x86_64 cockpit-dashboard 224.2-1.el8.noarch cockpit-ovirt-dashboard 0.14.17-1.el8ev.noarch cockpit-storaged 224.2-1.el8.noarch cockpit-system 224.2-1.el8.noarch cockpit-ws 224.2-1.el8.x86_64 collectd 5.11.0-2.el8ost.x86_64 collectd-disk 5.11.0-2.el8ost.x86_64 collectd-netlink 5.11.0-2.el8ost.x86_64 collectd-virt 5.11.0-2.el8ost.x86_64 collectd-write_http 5.11.0-2.el8ost.x86_64 collectd-write_syslog 5.11.0-2.el8ost.x86_64 coreutils 8.30-8.el8.x86_64 coreutils-common 8.30-8.el8.x86_64 corosynclib 3.0.3-4.el8.x86_64 cpio 2.12-8.el8.x86_64 cracklib 2.9.6-15.el8.x86_64 cracklib-dicts 2.9.6-15.el8.x86_64 cronie 1.5.2-4.el8.x86_64 cronie-anacron 1.5.2-4.el8.x86_64 crontabs 1.11-16.20150630git.el8.noarch crypto-policies 20200713-1.git51d1222.el8.noarch crypto-policies-scripts 20200713-1.git51d1222.el8.noarch cryptsetup 2.3.3-2.el8.x86_64 cryptsetup-libs 2.3.3-2.el8.x86_64 cups-libs 2.2.6-38.el8.x86_64 
curl 7.61.1-14.el8_3.1.x86_64 cyrus-sasl 2.1.27-5.el8.x86_64 cyrus-sasl-gssapi 2.1.27-5.el8.x86_64 cyrus-sasl-lib 2.1.27-5.el8.x86_64 cyrus-sasl-scram 2.1.27-5.el8.x86_64 daxctl-libs 67-2.el8.x86_64 dbus 1.12.8-11.el8.x86_64 dbus-common 1.12.8-11.el8.noarch dbus-daemon 1.12.8-11.el8.x86_64 dbus-glib 0.110-2.el8.x86_64 dbus-libs 1.12.8-11.el8.x86_64 dbus-tools 1.12.8-11.el8.x86_64 dbxtool 8-5.el8.x86_64 device-mapper 1.02.171-5.el8.x86_64 device-mapper-event 1.02.171-5.el8.x86_64 device-mapper-event-libs 1.02.171-5.el8.x86_64 device-mapper-libs 1.02.171-5.el8.x86_64 device-mapper-multipath 0.8.4-5.el8.x86_64 device-mapper-multipath-libs 0.8.4-5.el8.x86_64 device-mapper-persistent-data 0.8.5-4.el8.x86_64 dhcp-client 4.3.6-41.el8.x86_64 dhcp-common 4.3.6-41.el8.noarch dhcp-libs 4.3.6-41.el8.x86_64 diffutils 3.6-6.el8.x86_64 dmidecode 3.2-6.el8.x86_64 dnf 4.2.23-4.el8.noarch dnf-data 4.2.23-4.el8.noarch dnf-plugin-subscription-manager 1.27.16-1.el8.x86_64 dnf-plugins-core 4.0.17-5.el8.noarch dnsmasq 2.79-13.el8_3.1.x86_64 dosfstools 4.1-6.el8.x86_64 dracut 049-95.git20200804.el8.x86_64 dracut-config-generic 049-95.git20200804.el8.x86_64 dracut-network 049-95.git20200804.el8.x86_64 dracut-squash 049-95.git20200804.el8.x86_64 e2fsprogs 1.45.6-1.el8.x86_64 e2fsprogs-libs 1.45.6-1.el8.x86_64 edk2-ovmf 20200602gitca407c7246bf-3.el8.noarch efi-filesystem 3-2.el8.noarch efibootmgr 16-1.el8.x86_64 efivar 37-4.el8.x86_64 efivar-libs 37-4.el8.x86_64 elfutils 0.180-1.el8.x86_64 elfutils-default-yama-scope 0.180-1.el8.noarch elfutils-libelf 0.180-1.el8.x86_64 elfutils-libs 0.180-1.el8.x86_64 ethtool 5.0-2.el8.x86_64 expat 2.2.5-4.el8.x86_64 fcoe-utils 1.0.32-7.el8.x86_64 fence-agents-all 4.2.1-53.el8_3.1.x86_64 fence-agents-amt-ws 4.2.1-53.el8_3.1.noarch fence-agents-apc 4.2.1-53.el8_3.1.noarch fence-agents-apc-snmp 4.2.1-53.el8_3.1.noarch fence-agents-bladecenter 4.2.1-53.el8_3.1.noarch fence-agents-brocade 4.2.1-53.el8_3.1.noarch fence-agents-cisco-mds 4.2.1-53.el8_3.1.noarch fence-agents-cisco-ucs 4.2.1-53.el8_3.1.noarch fence-agents-common 4.2.1-53.el8_3.1.noarch fence-agents-compute 4.2.1-53.el8_3.1.noarch fence-agents-drac5 4.2.1-53.el8_3.1.noarch fence-agents-eaton-snmp 4.2.1-53.el8_3.1.noarch fence-agents-emerson 4.2.1-53.el8_3.1.noarch fence-agents-eps 4.2.1-53.el8_3.1.noarch fence-agents-heuristics-ping 4.2.1-53.el8_3.1.noarch fence-agents-hpblade 4.2.1-53.el8_3.1.noarch fence-agents-ibmblade 4.2.1-53.el8_3.1.noarch fence-agents-ifmib 4.2.1-53.el8_3.1.noarch fence-agents-ilo-moonshot 4.2.1-53.el8_3.1.noarch fence-agents-ilo-mp 4.2.1-53.el8_3.1.noarch fence-agents-ilo-ssh 4.2.1-53.el8_3.1.noarch fence-agents-ilo2 4.2.1-53.el8_3.1.noarch fence-agents-intelmodular 4.2.1-53.el8_3.1.noarch fence-agents-ipdu 4.2.1-53.el8_3.1.noarch fence-agents-ipmilan 4.2.1-53.el8_3.1.noarch fence-agents-kdump 4.2.1-53.el8_3.1.x86_64 fence-agents-mpath 4.2.1-53.el8_3.1.noarch fence-agents-redfish 4.2.1-53.el8_3.1.x86_64 fence-agents-rhevm 4.2.1-53.el8_3.1.noarch fence-agents-rsa 4.2.1-53.el8_3.1.noarch fence-agents-rsb 4.2.1-53.el8_3.1.noarch fence-agents-sbd 4.2.1-53.el8_3.1.noarch fence-agents-scsi 4.2.1-53.el8_3.1.noarch fence-agents-vmware-rest 4.2.1-53.el8_3.1.noarch fence-agents-vmware-soap 4.2.1-53.el8_3.1.noarch fence-agents-wti 4.2.1-53.el8_3.1.noarch fence-virt 1.0.0-1.el8.x86_64 file 5.33-16.el8.x86_64 file-libs 5.33-16.el8.x86_64 filesystem 3.8-3.el8.x86_64 findutils 4.6.0-20.el8.x86_64 firewalld 0.8.2-2.el8.noarch firewalld-filesystem 0.8.2-2.el8.noarch fontconfig 2.13.1-3.el8.x86_64 
fontpackages-filesystem 1.44-22.el8.noarch freetype 2.9.1-4.el8_3.1.x86_64 fribidi 1.0.4-8.el8.x86_64 fuse 2.9.7-12.el8.x86_64 fuse-common 3.2.1-12.el8.x86_64 fuse-libs 2.9.7-12.el8.x86_64 gawk 4.2.1-1.el8.x86_64 gc 7.6.4-3.el8.x86_64 gdb-headless 8.2-12.el8.x86_64 gdbm 1.18-1.el8.x86_64 gdbm-libs 1.18-1.el8.x86_64 gdisk 1.0.3-6.el8.x86_64 genisoimage 1.1.11-39.el8.x86_64 gettext 0.19.8.1-17.el8.x86_64 gettext-libs 0.19.8.1-17.el8.x86_64 glib-networking 2.56.1-1.1.el8.x86_64 glib2 2.56.4-8.el8.x86_64 glibc 2.28-127.el8.x86_64 glibc-common 2.28-127.el8.x86_64 glibc-langpack-en 2.28-127.el8.x86_64 gluster-ansible-cluster 1.0-3.el8rhgs.noarch gluster-ansible-features 1.0.5-10.el8rhgs.noarch gluster-ansible-infra 1.0.4-18.el8rhgs.noarch gluster-ansible-maintenance 1.0.1-11.el8rhgs.noarch gluster-ansible-repositories 1.0.1-4.el8rhgs.noarch gluster-ansible-roles 1.0.5-23.el8rhgs.noarch glusterfs 6.0-49.el8rhgs.x86_64 glusterfs-api 6.0-49.el8rhgs.x86_64 glusterfs-cli 6.0-49.el8rhgs.x86_64 glusterfs-client-xlators 6.0-49.el8rhgs.x86_64 glusterfs-events 6.0-49.el8rhgs.x86_64 glusterfs-fuse 6.0-49.el8rhgs.x86_64 glusterfs-geo-replication 6.0-49.el8rhgs.x86_64 glusterfs-libs 6.0-49.el8rhgs.x86_64 glusterfs-rdma 6.0-49.el8rhgs.x86_64 glusterfs-selinux 1.0-2.el8rhgs.noarch glusterfs-server 6.0-49.el8rhgs.x86_64 gmp 6.1.2-10.el8.x86_64 gnupg2 2.2.20-2.el8.x86_64 gnutls 3.6.14-7.el8_3.x86_64 gnutls-dane 3.6.14-7.el8_3.x86_64 gnutls-utils 3.6.14-7.el8_3.x86_64 gobject-introspection 1.56.1-1.el8.x86_64 gpgme 1.13.1-3.el8.x86_64 graphite2 1.3.10-10.el8.x86_64 grep 3.1-6.el8.x86_64 groff-base 1.22.3-18.el8.x86_64 grub2-common 2.02-90.el8.noarch grub2-efi-x64 2.02-90.el8.x86_64 grub2-pc 2.02-90.el8.x86_64 grub2-pc-modules 2.02-90.el8.noarch grub2-tools 2.02-90.el8.x86_64 grub2-tools-extra 2.02-90.el8.x86_64 grub2-tools-minimal 2.02-90.el8.x86_64 grubby 8.40-41.el8.x86_64 gsettings-desktop-schemas 3.32.0-5.el8.x86_64 gssproxy 0.8.0-16.el8.x86_64 gstreamer1 1.16.1-2.el8.x86_64 gstreamer1-plugins-base 1.16.1-1.el8.x86_64 guile 2.0.14-7.el8.x86_64 gzip 1.9-9.el8.x86_64 harfbuzz 1.7.5-3.el8.x86_64 hdparm 9.54-2.el8.x86_64 hexedit 1.2.13-12.el8.x86_64 hivex 1.3.18-20.module+el8.3.0+6124+819ee737.x86_64 hostname 3.20-6.el8.x86_64 hwdata 0.314-8.6.el8.noarch ima-evm-utils 1.1-5.el8.x86_64 imgbased 1.2.16-0.1.el8ev.noarch info 6.5-6.el8.x86_64 initscripts 10.00.9-1.el8.x86_64 insights-client 3.1.1-1.el8_3.noarch ioprocess 1.4.2-1.el8ev.x86_64 iotop 0.6-16.el8.noarch ipa-client 4.8.7-13.module+el8.3.0+8376+0bba7131.x86_64 ipa-client-common 4.8.7-13.module+el8.3.0+8376+0bba7131.noarch ipa-common 4.8.7-13.module+el8.3.0+8376+0bba7131.noarch ipa-selinux 4.8.7-13.module+el8.3.0+8376+0bba7131.noarch ipcalc 0.2.4-4.el8.x86_64 iperf3 3.5-6.el8.x86_64 ipmitool 1.8.18-17.el8.x86_64 iproute 5.3.0-5.el8.x86_64 iproute-tc 5.3.0-5.el8.x86_64 iprutils 2.4.19-1.el8.x86_64 ipset 7.1-1.el8.x86_64 ipset-libs 7.1-1.el8.x86_64 iptables 1.8.4-15.el8_3.3.x86_64 iptables-ebtables 1.8.4-15.el8_3.3.x86_64 iptables-libs 1.8.4-15.el8_3.3.x86_64 iputils 20180629-2.el8.x86_64 ipxe-roms-qemu 20181214-6.git133f4c47.el8.noarch irqbalance 1.4.0-4.el8.x86_64 iscsi-initiator-utils 6.2.0.878-5.gitd791ce0.el8.x86_64 iscsi-initiator-utils-iscsiuio 6.2.0.878-5.gitd791ce0.el8.x86_64 isns-utils-libs 0.99-1.el8.x86_64 iso-codes 3.79-2.el8.noarch iwl100-firmware 39.31.5.1-101.el8_3.1.noarch iwl1000-firmware 39.31.5.1-101.el8_3.1.noarch iwl105-firmware 18.168.6.1-101.el8_3.1.noarch iwl135-firmware 18.168.6.1-101.el8_3.1.noarch iwl2000-firmware 
18.168.6.1-101.el8_3.1.noarch iwl2030-firmware 18.168.6.1-101.el8_3.1.noarch iwl3160-firmware 25.30.13.0-101.el8_3.1.noarch iwl5000-firmware 8.83.5.1_1-101.el8_3.1.noarch iwl5150-firmware 8.24.2.2-101.el8_3.1.noarch iwl6000-firmware 9.221.4.1-101.el8_3.1.noarch iwl6000g2a-firmware 18.168.6.1-101.el8_3.1.noarch iwl6050-firmware 41.28.5.1-101.el8_3.1.noarch iwl7260-firmware 25.30.13.0-101.el8_3.1.noarch jansson 2.11-3.el8.x86_64 jose 10-2.el8.x86_64 jq 1.5-12.el8.x86_64 json-c 0.13.1-0.2.el8.x86_64 json-glib 1.4.4-1.el8.x86_64 kbd 2.0.4-10.el8.x86_64 kbd-legacy 2.0.4-10.el8.noarch kbd-misc 2.0.4-10.el8.noarch kernel 4.18.0-240.10.1.el8_3.x86_64 kernel-core 4.18.0-240.10.1.el8_3.x86_64 kernel-modules 4.18.0-240.10.1.el8_3.x86_64 kernel-tools 4.18.0-240.10.1.el8_3.x86_64 kernel-tools-libs 4.18.0-240.10.1.el8_3.x86_64 kexec-tools 2.0.20-34.el8_3.1.x86_64 keyutils 1.5.10-6.el8.x86_64 keyutils-libs 1.5.10-6.el8.x86_64 kmod 25-16.el8.x86_64 kmod-kvdo 6.2.3.114-74.el8.x86_64 kmod-libs 25-16.el8.x86_64 kpartx 0.8.4-5.el8.x86_64 krb5-libs 1.18.2-5.el8.x86_64 krb5-workstation 1.18.2-5.el8.x86_64 langpacks-en 1.0-12.el8.noarch less 530-1.el8.x86_64 libX11 1.6.8-3.el8.x86_64 libX11-common 1.6.8-3.el8.noarch libX11-xcb 1.6.8-3.el8.x86_64 libXau 1.0.9-3.el8.x86_64 libXdamage 1.1.4-14.el8.x86_64 libXext 1.3.4-1.el8.x86_64 libXfixes 5.0.3-7.el8.x86_64 libXft 2.3.3-1.el8.x86_64 libXrender 0.9.10-7.el8.x86_64 libXv 1.0.11-7.el8.x86_64 libXxf86vm 1.1.4-9.el8.x86_64 libacl 2.2.53-1.el8.x86_64 libaio 0.3.112-1.el8.x86_64 libarchive 3.3.2-9.el8.x86_64 libassuan 2.5.1-3.el8.x86_64 libatasmart 0.19-14.el8.x86_64 libatomic_ops 7.6.2-3.el8.x86_64 libattr 2.4.48-3.el8.x86_64 libbabeltrace 1.5.4-3.el8.x86_64 libbasicobjects 0.1.1-39.el8.x86_64 libblkid 2.32.1-24.el8.x86_64 libblockdev 2.24-1.el8.x86_64 libblockdev-crypto 2.24-1.el8.x86_64 libblockdev-dm 2.24-1.el8.x86_64 libblockdev-fs 2.24-1.el8.x86_64 libblockdev-kbd 2.24-1.el8.x86_64 libblockdev-loop 2.24-1.el8.x86_64 libblockdev-lvm 2.24-1.el8.x86_64 libblockdev-mdraid 2.24-1.el8.x86_64 libblockdev-mpath 2.24-1.el8.x86_64 libblockdev-nvdimm 2.24-1.el8.x86_64 libblockdev-part 2.24-1.el8.x86_64 libblockdev-plugins-all 2.24-1.el8.x86_64 libblockdev-swap 2.24-1.el8.x86_64 libblockdev-utils 2.24-1.el8.x86_64 libblockdev-vdo 2.24-1.el8.x86_64 libbytesize 1.4-3.el8.x86_64 libcacard 2.7.0-2.el8_1.x86_64 libcap 2.26-4.el8.x86_64 libcap-ng 0.7.9-5.el8.x86_64 libcollection 0.7.0-39.el8.x86_64 libcom_err 1.45.6-1.el8.x86_64 libcomps 0.1.11-4.el8.x86_64 libconfig 1.5-9.el8.x86_64 libcroco 0.6.12-4.el8_2.1.x86_64 libcurl 7.61.1-14.el8_3.1.x86_64 libdaemon 0.14-15.el8.x86_64 libdatrie 0.2.9-7.el8.x86_64 libdb 5.3.28-39.el8.x86_64 libdb-utils 5.3.28-39.el8.x86_64 libdhash 0.5.0-39.el8.x86_64 libdnf 0.48.0-5.el8.x86_64 libdrm 2.4.101-1.el8.x86_64 libedit 3.1-23.20170329cvs.el8.x86_64 libepoxy 1.5.3-1.el8.x86_64 libestr 0.1.10-1.el8.x86_64 libevent 2.1.8-5.el8.x86_64 libfastjson 0.99.8-2.el8.x86_64 libfdisk 2.32.1-24.el8.x86_64 libffi 3.1-22.el8.x86_64 libgcc 8.3.1-5.1.el8.x86_64 libgcrypt 1.8.5-4.el8.x86_64 libglvnd 1.2.0-6.el8.x86_64 libglvnd-egl 1.2.0-6.el8.x86_64 libglvnd-gles 1.2.0-6.el8.x86_64 libglvnd-glx 1.2.0-6.el8.x86_64 libgomp 8.3.1-5.1.el8.x86_64 libgpg-error 1.31-1.el8.x86_64 libgudev 232-4.el8.x86_64 libguestfs 1.42.0-2.module+el8.3.0+6798+ad6e66be.x86_64 libguestfs-tools-c 1.42.0-2.module+el8.3.0+6798+ad6e66be.x86_64 libguestfs-winsupport 8.2-1.module+el8.3.0+6124+819ee737.x86_64 libibumad 29.0-3.el8.x86_64 libibverbs 29.0-3.el8.x86_64 libicu 60.3-2.el8_1.x86_64 
libidn2 2.2.0-1.el8.x86_64 libini_config 1.3.1-39.el8.x86_64 libipa_hbac 2.3.0-9.el8.x86_64 libipt 1.6.1-8.el8.x86_64 libiscsi 1.18.0-8.module+el8.3.0+6124+819ee737.x86_64 libjose 10-2.el8.x86_64 libjpeg-turbo 1.5.3-10.el8.x86_64 libkadm5 1.18.2-5.el8.x86_64 libkcapi 1.2.0-2.el8.x86_64 libkcapi-hmaccalc 1.2.0-2.el8.x86_64 libksba 1.3.5-7.el8.x86_64 libldb 2.1.3-2.el8.x86_64 liblognorm 2.0.5-1.el8.x86_64 libluksmeta 9-4.el8.x86_64 libmaxminddb 1.2.0-10.el8.x86_64 libmetalink 0.1.3-7.el8.x86_64 libmnl 1.0.4-6.el8.x86_64 libmodman 2.0.1-17.el8.x86_64 libmodulemd 2.9.4-2.el8.x86_64 libmount 2.32.1-24.el8.x86_64 libndp 1.7-3.el8.x86_64 libnetfilter_conntrack 1.0.6-5.el8.x86_64 libnfnetlink 1.0.1-13.el8.x86_64 libnfsidmap 2.3.3-35.el8.x86_64 libnftnl 1.1.5-4.el8.x86_64 libnghttp2 1.33.0-3.el8_2.1.x86_64 libnl3 3.5.0-1.el8.x86_64 libnl3-cli 3.5.0-1.el8.x86_64 libnsl2 1.2.0-2.20180605git4a062cf.el8.x86_64 libogg 1.3.2-10.el8.x86_64 libosinfo 1.8.0-1.el8.x86_64 libpath_utils 0.2.1-39.el8.x86_64 libpcap 1.9.1-4.el8.x86_64 libpciaccess 0.14-1.el8.x86_64 libpipeline 1.5.0-2.el8.x86_64 libpkgconf 1.4.2-1.el8.x86_64 libpmem 1.6.1-1.el8.x86_64 libpng 1.6.34-5.el8.x86_64 libproxy 0.4.15-5.2.el8.x86_64 libpsl 0.20.2-6.el8.x86_64 libpwquality 1.4.0-9.el8.x86_64 libqb 1.0.3-12.el8.x86_64 librados2 12.2.7-9.el8.x86_64 librbd1 12.2.7-9.el8.x86_64 librdmacm 29.0-3.el8.x86_64 libref_array 0.1.5-39.el8.x86_64 librepo 1.12.0-2.el8.x86_64 libreport 2.9.5-15.el8.x86_64 libreport-cli 2.9.5-15.el8.x86_64 libreport-filesystem 2.9.5-15.el8.x86_64 libreport-plugin-rhtsupport 2.9.5-15.el8.x86_64 libreport-plugin-ureport 2.9.5-15.el8.x86_64 libreport-rhel 2.9.5-15.el8.x86_64 libreport-web 2.9.5-15.el8.x86_64 librhsm 0.0.3-3.el8.x86_64 libseccomp 2.4.3-1.el8.x86_64 libselinux 2.9-4.el8_3.x86_64 libselinux-utils 2.9-4.el8_3.x86_64 libsemanage 2.9-3.el8.x86_64 libsepol 2.9-1.el8.x86_64 libsigsegv 2.11-5.el8.x86_64 libsmartcols 2.32.1-24.el8.x86_64 libsolv 0.7.11-1.el8.x86_64 libsoup 2.62.3-2.el8.x86_64 libss 1.45.6-1.el8.x86_64 libssh 0.9.4-2.el8.x86_64 libssh-config 0.9.4-2.el8.noarch libsss_autofs 2.3.0-9.el8.x86_64 libsss_certmap 2.3.0-9.el8.x86_64 libsss_idmap 2.3.0-9.el8.x86_64 libsss_nss_idmap 2.3.0-9.el8.x86_64 libsss_simpleifp 2.3.0-9.el8.x86_64 libstdc++ 8.3.1-5.1.el8.x86_64 libsysfs 2.1.0-24.el8.x86_64 libtalloc 2.3.1-3.el8rhgs.x86_64 libtar 1.2.20-15.el8.x86_64 libtasn1 4.13-3.el8.x86_64 libtdb 1.4.3-2.el8rhgs.x86_64 libteam 1.31-2.el8.x86_64 libtevent 0.10.2-3.el8rhgs.x86_64 libthai 0.1.27-2.el8.x86_64 libtheora 1.1.1-21.el8.x86_64 libtirpc 1.1.4-4.el8.x86_64 libtool-ltdl 2.4.6-25.el8.x86_64 libtpms 0.7.3-1.20200818git1d392d466a.module+el8.3.0+8092+f9e72d7e.x86_64 libudisks2 2.9.0-3.el8.x86_64 libunistring 0.9.9-3.el8.x86_64 libusal 1.1.11-39.el8.x86_64 libusbx 1.0.23-4.el8.x86_64 libuser 0.62-23.el8.x86_64 libutempter 1.1.6-14.el8.x86_64 libuuid 2.32.1-24.el8.x86_64 libverto 0.3.0-5.el8.x86_64 libverto-libevent 0.3.0-5.el8.x86_64 libvirt 6.6.0-7.3.module+el8.3.0+9547+7d548490.x86_64 libvirt-admin 6.6.0-7.3.module+el8.3.0+9547+7d548490.x86_64 libvirt-bash-completion 6.6.0-7.3.module+el8.3.0+9547+7d548490.x86_64 libvirt-client 6.6.0-7.3.module+el8.3.0+9547+7d548490.x86_64 libvirt-daemon 6.6.0-7.3.module+el8.3.0+9547+7d548490.x86_64 libvirt-daemon-config-network 6.6.0-7.3.module+el8.3.0+9547+7d548490.x86_64 libvirt-daemon-config-nwfilter 6.6.0-7.3.module+el8.3.0+9547+7d548490.x86_64 libvirt-daemon-driver-interface 6.6.0-7.3.module+el8.3.0+9547+7d548490.x86_64 libvirt-daemon-driver-network 
6.6.0-7.3.module+el8.3.0+9547+7d548490.x86_64 libvirt-daemon-driver-nodedev 6.6.0-7.3.module+el8.3.0+9547+7d548490.x86_64 libvirt-daemon-driver-nwfilter 6.6.0-7.3.module+el8.3.0+9547+7d548490.x86_64 libvirt-daemon-driver-qemu 6.6.0-7.3.module+el8.3.0+9547+7d548490.x86_64 libvirt-daemon-driver-secret 6.6.0-7.3.module+el8.3.0+9547+7d548490.x86_64 libvirt-daemon-driver-storage 6.6.0-7.3.module+el8.3.0+9547+7d548490.x86_64 libvirt-daemon-driver-storage-core 6.6.0-7.3.module+el8.3.0+9547+7d548490.x86_64 libvirt-daemon-driver-storage-disk 6.6.0-7.3.module+el8.3.0+9547+7d548490.x86_64 libvirt-daemon-driver-storage-gluster 6.6.0-7.3.module+el8.3.0+9547+7d548490.x86_64 libvirt-daemon-driver-storage-iscsi 6.6.0-7.3.module+el8.3.0+9547+7d548490.x86_64 libvirt-daemon-driver-storage-iscsi-direct 6.6.0-7.3.module+el8.3.0+9547+7d548490.x86_64 libvirt-daemon-driver-storage-logical 6.6.0-7.3.module+el8.3.0+9547+7d548490.x86_64 libvirt-daemon-driver-storage-mpath 6.6.0-7.3.module+el8.3.0+9547+7d548490.x86_64 libvirt-daemon-driver-storage-rbd 6.6.0-7.3.module+el8.3.0+9547+7d548490.x86_64 libvirt-daemon-driver-storage-scsi 6.6.0-7.3.module+el8.3.0+9547+7d548490.x86_64 libvirt-daemon-kvm 6.6.0-7.3.module+el8.3.0+9547+7d548490.x86_64 libvirt-libs 6.6.0-7.3.module+el8.3.0+9547+7d548490.x86_64 libvirt-lock-sanlock 6.6.0-7.3.module+el8.3.0+9547+7d548490.x86_64 libvisual 0.4.0-24.el8.x86_64 libvorbis 1.3.6-2.el8.x86_64 libwayland-client 1.17.0-1.el8.x86_64 libwayland-cursor 1.17.0-1.el8.x86_64 libwayland-egl 1.17.0-1.el8.x86_64 libwayland-server 1.17.0-1.el8.x86_64 libwbclient 4.12.3-12.el8.3.x86_64 libwsman1 2.6.5-7.el8.x86_64 libxcb 1.13.1-1.el8.x86_64 libxcrypt 4.1.1-4.el8.x86_64 libxkbcommon 0.9.1-1.el8.x86_64 libxml2 2.9.7-8.el8.x86_64 libxshmfence 1.3-2.el8.x86_64 libxslt 1.1.32-5.el8.x86_64 libyaml 0.1.7-5.el8.x86_64 libzstd 1.4.4-1.el8.x86_64 linux-firmware 20200619-101.git3890db36.el8_3.noarch lksctp-tools 1.0.18-3.el8.x86_64 lldpad 1.0.1-13.git036e314.el8.x86_64 llvm-libs 10.0.1-3.module+el8.3.0+7719+53d428de.x86_64 lm_sensors-libs 3.4.0-21.20180522git70f7e08.el8.x86_64 logrotate 3.14.0-4.el8.x86_64 lshw B.02.19.2-2.el8.x86_64 lsof 4.93.2-1.el8.x86_64 lsscsi 0.30-1.el8.x86_64 lua-libs 5.3.4-11.el8.x86_64 luksmeta 9-4.el8.x86_64 lvm2 2.03.09-5.el8.x86_64 lvm2-libs 2.03.09-5.el8.x86_64 lz4 1.8.3-2.el8.x86_64 lz4-libs 1.8.3-2.el8.x86_64 lzo 2.08-14.el8.x86_64 lzop 1.03-20.el8.x86_64 mailx 12.5-29.el8.x86_64 man-db 2.7.6.1-17.el8.x86_64 mariadb-connector-c 3.1.11-2.el8_3.x86_64 mariadb-connector-c-config 3.1.11-2.el8_3.noarch mdadm 4.1-14.el8.x86_64 mdevctl 0.61-3.el8.noarch memtest86+ 5.01-19.el8.x86_64 mesa-dri-drivers 20.1.4-1.el8.x86_64 mesa-filesystem 20.1.4-1.el8.x86_64 mesa-libEGL 20.1.4-1.el8.x86_64 mesa-libGL 20.1.4-1.el8.x86_64 mesa-libgbm 20.1.4-1.el8.x86_64 mesa-libglapi 20.1.4-1.el8.x86_64 microcode_ctl 20200609-2.20201112.1.el8_3.x86_64 mokutil 0.3.0-10.el8.x86_64 mom 0.6.0-1.el8ev.noarch mozjs60 60.9.0-4.el8.x86_64 mpfr 3.1.6-1.el8.x86_64 mtools 4.0.18-14.el8.x86_64 nbdkit 1.22.0-2.module+el8.3.0+8203+18ecf00e.x86_64 nbdkit-basic-filters 1.22.0-2.module+el8.3.0+8203+18ecf00e.x86_64 nbdkit-basic-plugins 1.22.0-2.module+el8.3.0+8203+18ecf00e.x86_64 nbdkit-curl-plugin 1.22.0-2.module+el8.3.0+8203+18ecf00e.x86_64 nbdkit-python-plugin 1.22.0-2.module+el8.3.0+8203+18ecf00e.x86_64 nbdkit-server 1.22.0-2.module+el8.3.0+8203+18ecf00e.x86_64 nbdkit-ssh-plugin 1.22.0-2.module+el8.3.0+8203+18ecf00e.x86_64 nbdkit-vddk-plugin 1.22.0-2.module+el8.3.0+8203+18ecf00e.x86_64 ncurses 6.1-7.20180224.el8.x86_64 
ncurses-base 6.1-7.20180224.el8.noarch ncurses-libs 6.1-7.20180224.el8.x86_64 ndctl 67-2.el8.x86_64 ndctl-libs 67-2.el8.x86_64 net-snmp 5.8-18.el8_3.1.x86_64 net-snmp-agent-libs 5.8-18.el8_3.1.x86_64 net-snmp-libs 5.8-18.el8_3.1.x86_64 net-snmp-utils 5.8-18.el8_3.1.x86_64 netcf-libs 0.2.8-12.module+el8.3.0+6124+819ee737.x86_64 nettle 3.4.1-2.el8.x86_64 network-scripts 10.00.9-1.el8.x86_64 newt 0.52.20-11.el8.x86_64 nfs-utils 2.3.3-35.el8.x86_64 nftables 0.9.3-16.el8.x86_64 nmap-ncat 7.70-5.el8.x86_64 nmstate 0.3.4-17.el8_3.noarch npth 1.5-4.el8.x86_64 nspr 4.25.0-2.el8_2.x86_64 nss 3.53.1-11.el8_2.x86_64 nss-softokn 3.53.1-11.el8_2.x86_64 nss-softokn-freebl 3.53.1-11.el8_2.x86_64 nss-sysinit 3.53.1-11.el8_2.x86_64 nss-tools 3.53.1-11.el8_2.x86_64 nss-util 3.53.1-11.el8_2.x86_64 numactl 2.0.12-11.el8.x86_64 numactl-libs 2.0.12-11.el8.x86_64 numad 0.5-26.20150602git.el8.x86_64 oddjob 0.34.5-3.el8.x86_64 oddjob-mkhomedir 0.34.5-3.el8.x86_64 oniguruma 6.8.2-2.el8.x86_64 openldap 2.4.46-15.el8.x86_64 opensc 0.20.0-2.el8.x86_64 openscap 1.3.3-6.el8_3.x86_64 openscap-scanner 1.3.3-6.el8_3.x86_64 openssh 8.0p1-5.el8.x86_64 openssh-clients 8.0p1-5.el8.x86_64 openssh-server 8.0p1-5.el8.x86_64 openssl 1.1.1g-12.el8_3.x86_64 openssl-libs 1.1.1g-12.el8_3.x86_64 openvswitch-selinux-extra-policy 1.0-22.el8fdp.noarch openvswitch2.11 2.11.3-74.el8fdp.x86_64 openwsman-python3 2.6.5-7.el8.x86_64 opus 1.3-0.4.beta.el8.x86_64 orc 0.4.28-3.el8.x86_64 os-prober 1.74-6.el8.x86_64 osinfo-db 20200813-1.el8.noarch osinfo-db-tools 1.8.0-1.el8.x86_64 otopi-common 1.9.2-1.el8ev.noarch ovirt-ansible-collection 1.2.4-1.el8ev.noarch ovirt-host 4.4.1-4.el8ev.x86_64 ovirt-host-dependencies 4.4.1-4.el8ev.x86_64 ovirt-hosted-engine-ha 2.4.5-1.el8ev.noarch ovirt-hosted-engine-setup 2.4.9-2.el8ev.noarch ovirt-imageio-client 2.1.1-1.el8ev.x86_64 ovirt-imageio-common 2.1.1-1.el8ev.x86_64 ovirt-imageio-daemon 2.1.1-1.el8ev.x86_64 ovirt-node-ng-nodectl 4.4.0-1.el8ev.noarch ovirt-provider-ovn-driver 1.2.33-1.el8ev.noarch ovirt-vmconsole 1.0.9-1.el8ev.noarch ovirt-vmconsole-host 1.0.9-1.el8ev.noarch ovn2.11 2.11.1-56.el8fdp.x86_64 ovn2.11-host 2.11.1-56.el8fdp.x86_64 p11-kit 0.23.14-5.el8_0.x86_64 p11-kit-trust 0.23.14-5.el8_0.x86_64 pacemaker-cluster-libs 2.0.4-6.el8_3.1.x86_64 pacemaker-libs 2.0.4-6.el8_3.1.x86_64 pacemaker-schemas 2.0.4-6.el8_3.1.noarch pam 1.3.1-11.el8.x86_64 pango 1.42.4-6.el8.x86_64 parted 3.2-38.el8.x86_64 passwd 0.80-3.el8.x86_64 pciutils 3.6.4-2.el8.x86_64 pciutils-libs 3.6.4-2.el8.x86_64 pcre 8.42-4.el8.x86_64 pcre2 10.32-2.el8.x86_64 pcsc-lite 1.8.23-3.el8.x86_64 pcsc-lite-ccid 1.4.29-4.el8.x86_64 pcsc-lite-libs 1.8.23-3.el8.x86_64 perl-Carp 1.42-396.el8.noarch perl-Data-Dumper 2.167-399.el8.x86_64 perl-Errno 1.28-416.el8.x86_64 perl-Exporter 5.72-396.el8.noarch perl-File-Path 2.15-2.el8.noarch perl-IO 1.38-416.el8.x86_64 perl-PathTools 3.74-1.el8.x86_64 perl-Scalar-List-Utils 1.49-2.el8.x86_64 perl-Socket 2.027-3.el8.x86_64 perl-Text-Tabs+Wrap 2013.0523-395.el8.noarch perl-Unicode-Normalize 1.25-396.el8.x86_64 perl-constant 1.33-396.el8.noarch perl-interpreter 5.26.3-416.el8.x86_64 perl-libs 5.26.3-416.el8.x86_64 perl-macros 5.26.3-416.el8.x86_64 perl-parent 0.237-1.el8.noarch perl-threads 2.21-2.el8.x86_64 perl-threads-shared 1.58-2.el8.x86_64 pixman 0.38.4-1.el8.x86_64 pkgconf 1.4.2-1.el8.x86_64 pkgconf-m4 1.4.2-1.el8.noarch pkgconf-pkg-config 1.4.2-1.el8.x86_64 platform-python 3.6.8-31.el8.x86_64 platform-python-pip 9.0.3-18.el8.noarch platform-python-setuptools 39.2.0-6.el8.noarch policycoreutils 
2.9-9.el8.x86_64 policycoreutils-python-utils 2.9-9.el8.noarch polkit 0.115-11.el8.x86_64 polkit-libs 0.115-11.el8.x86_64 polkit-pkla-compat 0.1-12.el8.x86_64 popt 1.16-14.el8.x86_64 postfix 3.3.1-12.el8.x86_64 prefixdevname 0.1.0-6.el8.x86_64 procps-ng 3.3.15-3.el8.x86_64 psmisc 23.1-5.el8.x86_64 publicsuffix-list-dafsa 20180723-1.el8.noarch python3-abrt 2.10.9-20.el8.x86_64 python3-abrt-addon 2.10.9-20.el8.x86_64 python3-argcomplete 1.9.3-6.el8.noarch python3-asn1crypto 0.24.0-3.el8.noarch python3-audit 3.0-0.17.20191104git1c2f876.el8.x86_64 python3-augeas 0.5.0-12.el8.noarch python3-babel 2.5.1-5.el8.noarch python3-bind 9.11.20-5.el8.noarch python3-blivet 3.2.2-6.el8.noarch python3-blockdev 2.24-1.el8.x86_64 python3-bytesize 1.4-3.el8.x86_64 python3-cffi 1.11.5-5.el8.x86_64 python3-chardet 3.0.4-7.el8.noarch python3-configobj 5.0.6-11.el8.noarch python3-cryptography 2.3-3.el8.x86_64 python3-daemon 2.1.2-9.el8ar.noarch python3-dateutil 2.6.1-6.el8.noarch python3-dbus 1.2.4-15.el8.x86_64 python3-decorator 4.2.1-2.el8.noarch python3-dmidecode 3.12.2-15.el8.x86_64 python3-dnf 4.2.23-4.el8.noarch python3-dnf-plugin-versionlock 4.0.17-5.el8.noarch python3-dnf-plugins-core 4.0.17-5.el8.noarch python3-dns 1.15.0-10.el8.noarch python3-docutils 0.14-12.module+el8.1.0+3334+5cb623d7.noarch python3-ethtool 0.14-3.el8.x86_64 python3-firewall 0.8.2-2.el8.noarch python3-gluster 6.0-49.el8rhgs.x86_64 python3-gobject-base 3.28.3-2.el8.x86_64 python3-gpg 1.13.1-3.el8.x86_64 python3-gssapi 1.5.1-5.el8.x86_64 python3-hawkey 0.48.0-5.el8.x86_64 python3-idna 2.5-5.el8.noarch python3-imgbased 1.2.16-0.1.el8ev.noarch python3-iniparse 0.4-31.el8.noarch python3-inotify 0.9.6-13.el8.noarch python3-ioprocess 1.4.2-1.el8ev.x86_64 python3-ipaclient 4.8.7-13.module+el8.3.0+8376+0bba7131.noarch python3-ipalib 4.8.7-13.module+el8.3.0+8376+0bba7131.noarch python3-jinja2 2.10.1-2.el8_0.noarch python3-jmespath 0.9.0-11.el8.noarch python3-jsonschema 2.6.0-4.el8.noarch python3-jwcrypto 0.5.0-1.module+el8.1.0+4098+f286395e.noarch python3-ldap 3.1.0-5.el8.x86_64 python3-libcomps 0.1.11-4.el8.x86_64 python3-libdnf 0.48.0-5.el8.x86_64 python3-libipa_hbac 2.3.0-9.el8.x86_64 python3-libnmstate 0.3.4-17.el8_3.noarch python3-librepo 1.12.0-2.el8.x86_64 python3-libreport 2.9.5-15.el8.x86_64 python3-libs 3.6.8-31.el8.x86_64 python3-libselinux 2.9-4.el8_3.x86_64 python3-libsemanage 2.9-3.el8.x86_64 python3-libvirt 6.6.0-1.module+el8.3.0+7572+bcbf6b90.x86_64 python3-libxml2 2.9.7-8.el8.x86_64 python3-linux-procfs 0.6.2-2.el8.noarch python3-lockfile 0.11.0-8.el8ar.noarch python3-lxml 4.2.3-1.el8.x86_64 python3-magic 5.33-16.el8.noarch python3-markupsafe 0.23-19.el8.x86_64 python3-netaddr 0.7.19-8.1.el8ost.noarch python3-netifaces 0.10.6-4.el8.x86_64 python3-nftables 0.9.3-16.el8.x86_64 python3-openvswitch2.11 2.11.3-74.el8fdp.x86_64 python3-otopi 1.9.2-1.el8ev.noarch python3-ovirt-engine-sdk4 4.4.7-1.el8ev.x86_64 python3-ovirt-node-ng-nodectl 4.4.0-1.el8ev.noarch python3-ovirt-setup-lib 1.3.2-1.el8ev.noarch python3-passlib 1.7.0-5.el8ost.noarch python3-perf 4.18.0-240.10.1.el8_3.x86_64 python3-pexpect 4.6-2.el8ost.noarch python3-pip 9.0.3-18.el8.noarch python3-pip-wheel 9.0.3-18.el8.noarch python3-ply 3.9-8.el8.noarch python3-policycoreutils 2.9-9.el8.noarch python3-prettytable 0.7.2-14.el8.noarch python3-ptyprocess 0.5.2-4.el8.noarch python3-pyasn1 0.3.7-6.el8.noarch python3-pyasn1-modules 0.3.7-6.el8.noarch python3-pycparser 2.14-14.el8.noarch python3-pycurl 7.43.0.2-4.el8.x86_64 python3-pyparted 3.11.0-13.el8.x86_64 python3-pysocks 
1.6.8-3.el8.noarch python3-pytz 2017.2-9.el8.noarch python3-pyudev 0.21.0-7.el8.noarch python3-pyusb 1.0.0-9.module+el8.1.0+4098+f286395e.noarch python3-pyxattr 0.5.3-19.el8ost.x86_64 python3-pyyaml 3.12-12.el8.x86_64 python3-qrcode-core 5.1-12.module+el8.1.0+4098+f286395e.noarch python3-requests 2.20.0-2.1.el8_1.noarch python3-rpm 4.14.3-4.el8.x86_64 python3-sanlock 3.8.2-1.el8.x86_64 python3-schedutils 0.6-6.el8.x86_64 python3-setools 4.3.0-2.el8.x86_64 python3-setuptools 39.2.0-6.el8.noarch python3-setuptools-wheel 39.2.0-6.el8.noarch python3-six 1.12.0-1.el8ost.noarch python3-slip 0.6.4-11.el8.noarch python3-slip-dbus 0.6.4-11.el8.noarch python3-sss 2.3.0-9.el8.x86_64 python3-sss-murmur 2.3.0-9.el8.x86_64 python3-sssdconfig 2.3.0-9.el8.noarch python3-subscription-manager-rhsm 1.27.16-1.el8.x86_64 python3-suds 0.7-0.8.94664ddd46a6.el8.noarch python3-syspurpose 1.27.16-1.el8.x86_64 python3-systemd 234-8.el8.x86_64 python3-urllib3 1.24.2-4.el8.noarch python3-yubico 1.3.2-9.module+el8.1.0+4098+f286395e.noarch python36 3.6.8-2.module+el8.1.0+3334+5cb623d7.x86_64 qemu-img 5.1.0-14.module+el8.3.0+8790+80f9c6d8.1.x86_64 qemu-kvm 5.1.0-14.module+el8.3.0+8790+80f9c6d8.1.x86_64 qemu-kvm-block-curl 5.1.0-14.module+el8.3.0+8790+80f9c6d8.1.x86_64 qemu-kvm-block-gluster 5.1.0-14.module+el8.3.0+8790+80f9c6d8.1.x86_64 qemu-kvm-block-iscsi 5.1.0-14.module+el8.3.0+8790+80f9c6d8.1.x86_64 qemu-kvm-block-rbd 5.1.0-14.module+el8.3.0+8790+80f9c6d8.1.x86_64 qemu-kvm-block-ssh 5.1.0-14.module+el8.3.0+8790+80f9c6d8.1.x86_64 qemu-kvm-common 5.1.0-14.module+el8.3.0+8790+80f9c6d8.1.x86_64 qemu-kvm-core 5.1.0-14.module+el8.3.0+8790+80f9c6d8.1.x86_64 quota 4.04-10.el8.x86_64 quota-nls 4.04-10.el8.noarch radvd 2.17-15.el8.x86_64 rdma-core 29.0-3.el8.x86_64 readline 7.0-10.el8.x86_64 redhat-release-virtualization-host 4.4.4-1.el8ev.x86_64 redhat-release-virtualization-host-content 4.4.4-1.el8ev.x86_64 redhat-virtualization-host-image-update-placeholder 4.4.4-1.el8ev.noarch rhsm-icons 1.27.16-1.el8.noarch rhv-openvswitch 2.11-7.el8ev.noarch rhv-openvswitch-ovn-common 2.11-7.el8ev.noarch rhv-openvswitch-ovn-host 2.11-7.el8ev.noarch rhv-python-openvswitch 2.11-7.el8ev.noarch rng-tools 6.8-3.el8.x86_64 rootfiles 8.1-22.el8.noarch rpcbind 1.2.5-7.el8.x86_64 rpm 4.14.3-4.el8.x86_64 rpm-build-libs 4.14.3-4.el8.x86_64 rpm-libs 4.14.3-4.el8.x86_64 rpm-plugin-selinux 4.14.3-4.el8.x86_64 rsync 3.1.3-9.el8.x86_64 rsyslog 8.1911.0-6.el8.x86_64 rsyslog-elasticsearch 8.1911.0-6.el8.x86_64 rsyslog-mmjsonparse 8.1911.0-6.el8.x86_64 rsyslog-mmnormalize 8.1911.0-6.el8.x86_64 safelease 1.0.1-1.el8ev.x86_64 samba-client-libs 4.12.3-12.el8.3.x86_64 samba-common 4.12.3-12.el8.3.noarch samba-common-libs 4.12.3-12.el8.3.x86_64 sanlock 3.8.2-1.el8.x86_64 sanlock-lib 3.8.2-1.el8.x86_64 satyr 0.26-2.el8.x86_64 sbd 1.4.1-7.el8.x86_64 scap-security-guide 0.1.48-1.el8ev.noarch scap-security-guide-rhv 0.1.48-1.el8ev.noarch scrub 2.5.2-14.el8.x86_64 seabios-bin 1.14.0-1.module+el8.3.0+7638+07cf13d2.noarch seavgabios-bin 1.14.0-1.module+el8.3.0+7638+07cf13d2.noarch sed 4.5-2.el8.x86_64 selinux-policy 3.14.3-54.el8.noarch selinux-policy-targeted 3.14.3-54.el8.noarch setup 2.12.2-6.el8.noarch sg3_utils 1.44-5.el8.x86_64 sg3_utils-libs 1.44-5.el8.x86_64 sgabios-bin 0.20170427git-3.module+el8.3.0+6124+819ee737.noarch shadow-utils 4.6-11.el8.x86_64 shim-x64 15-16.el8.x86_64 slang 2.3.2-3.el8.x86_64 snappy 1.1.8-3.el8.x86_64 socat 1.7.3.3-2.el8.x86_64 sos 3.9.1-6.el8.noarch spice-server 0.14.3-3.el8.x86_64 sqlite-libs 3.26.0-11.el8.x86_64 squashfs-tools 
4.3-19.el8.x86_64 sscg 2.3.3-14.el8.x86_64 sshpass 1.06-3.el8ae.x86_64 sssd-client 2.3.0-9.el8.x86_64 sssd-common 2.3.0-9.el8.x86_64 sssd-common-pac 2.3.0-9.el8.x86_64 sssd-dbus 2.3.0-9.el8.x86_64 sssd-ipa 2.3.0-9.el8.x86_64 sssd-kcm 2.3.0-9.el8.x86_64 sssd-krb5-common 2.3.0-9.el8.x86_64 sssd-tools 2.3.0-9.el8.x86_64 subscription-manager 1.27.16-1.el8.x86_64 subscription-manager-cockpit 1.27.16-1.el8.noarch subscription-manager-rhsm-certificates 1.27.16-1.el8.x86_64 sudo 1.8.29-6.el8_3.1.x86_64 supermin 5.2.0-1.module+el8.3.0+7648+42900458.x86_64 swtpm 0.4.0-3.20200828git0c238a2.module+el8.3.0+8254+568ca30d.x86_64 swtpm-libs 0.4.0-3.20200828git0c238a2.module+el8.3.0+8254+568ca30d.x86_64 swtpm-tools 0.4.0-3.20200828git0c238a2.module+el8.3.0+8254+568ca30d.x86_64 syslinux 6.04-4.el8.x86_64 syslinux-extlinux 6.04-4.el8.x86_64 syslinux-extlinux-nonlinux 6.04-4.el8.noarch syslinux-nonlinux 6.04-4.el8.noarch sysstat 11.7.3-5.el8.x86_64 systemd 239-41.el8_3.1.x86_64 systemd-container 239-41.el8_3.1.x86_64 systemd-libs 239-41.el8_3.1.x86_64 systemd-pam 239-41.el8_3.1.x86_64 systemd-udev 239-41.el8_3.1.x86_64 tar 1.30-5.el8.x86_64 tcpdump 4.9.3-1.el8.x86_64 teamd 1.31-2.el8.x86_64 tmux 2.7-1.el8.x86_64 tpm2-tools 4.1.1-1.el8.x86_64 tpm2-tss 2.3.2-2.el8.x86_64 tree 1.7.0-15.el8.x86_64 trousers 0.3.14-4.el8.x86_64 trousers-lib 0.3.14-4.el8.x86_64 tuned 2.14.0-3.el8_3.1.noarch tzdata 2021a-1.el8.noarch udisks2 2.9.0-3.el8.x86_64 unbound-libs 1.7.3-14.el8.x86_64 unzip 6.0-43.el8.x86_64 usbredir 0.8.0-1.el8.x86_64 usermode 1.113-1.el8.x86_64 userspace-rcu 0.10.1-2.el8.x86_64 util-linux 2.32.1-24.el8.x86_64 vdo 6.2.3.114-14.el8.x86_64 vdsm 4.40.40-1.el8ev.x86_64 vdsm-api 4.40.40-1.el8ev.noarch vdsm-client 4.40.40-1.el8ev.noarch vdsm-common 4.40.40-1.el8ev.noarch vdsm-gluster 4.40.40-1.el8ev.x86_64 vdsm-hook-ethtool-options 4.40.40-1.el8ev.noarch vdsm-hook-fcoe 4.40.40-1.el8ev.noarch vdsm-hook-openstacknet 4.40.40-1.el8ev.noarch vdsm-hook-vhostmd 4.40.40-1.el8ev.noarch vdsm-hook-vmfex-dev 4.40.40-1.el8ev.noarch vdsm-http 4.40.40-1.el8ev.noarch vdsm-jsonrpc 4.40.40-1.el8ev.noarch vdsm-network 4.40.40-1.el8ev.x86_64 vdsm-python 4.40.40-1.el8ev.noarch vdsm-yajsonrpc 4.40.40-1.el8ev.noarch vhostmd 1.1-4.el8.x86_64 vim-minimal 8.0.1763-15.el8.x86_64 virt-install 2.2.1-3.el8.noarch virt-manager-common 2.2.1-3.el8.noarch virt-v2v 1.42.0-6.module+el8.3.0+7898+13f907d5.x86_64 virt-what 1.18-6.el8.x86_64 virt-who 0.29.3-1.el8.noarch volume_key-libs 0.3.11-5.el8.x86_64 which 2.21-12.el8.x86_64 xfsprogs 5.0.0-4.el8.x86_64 xkeyboard-config 2.28-1.el8.noarch xml-common 0.6.3-50.el8.noarch xmlrpc-c 1.51.0-5.el8.x86_64 xmlrpc-c-client 1.51.0-5.el8.x86_64 xz 5.2.4-3.el8.x86_64 xz-libs 5.2.4-3.el8.x86_64 yajl 2.1.0-10.el8.x86_64 yum 4.2.23-4.el8.noarch zlib 1.2.11-16.el8_2.x86_64 | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/package_manifest/ovirt-4.4.4 |
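The package tables above are reference data, and they are easiest to use programmatically. The following is a minimal sketch, not taken from the official documentation, that compares a few installed RPM versions on a host against entries copied from the tables. It assumes the rpm command is available on the host; the package names in the expected dictionary are an arbitrary illustrative subset of the rows above.

# Sketch: compare a few installed RPM versions against manifest entries.
# The "expected" values are copied from the tables above; extend the dict as needed.
import subprocess

expected = {
    "vdsm": "4.40.40-1.el8ev.x86_64",
    "qemu-kvm": "5.1.0-14.module+el8.3.0+8790+80f9c6d8.1.x86_64",
    "libvirt": "6.6.0-7.3.module+el8.3.0+9547+7d548490.x86_64",
}

for name, manifest_version in expected.items():
    # rpm prints "<version>-<release>.<arch>" for each installed package of that name.
    result = subprocess.run(
        ["rpm", "-q", "--queryformat", "%{VERSION}-%{RELEASE}.%{ARCH}\n", name],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        print(f"{name}: not installed")
        continue
    installed = result.stdout.strip().splitlines()[0]  # first match if several are installed
    status = "OK" if installed == manifest_version else f"differs (installed {installed})"
    print(f"{name}: {status}")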
Chapter 3. BareMetalHost [metal3.io/v1alpha1] | Chapter 3. BareMetalHost [metal3.io/v1alpha1] Description BareMetalHost is the Schema for the baremetalhosts API Type object 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object BareMetalHostSpec defines the desired state of BareMetalHost. status object BareMetalHostStatus defines the observed state of BareMetalHost. 3.1.1. .spec Description BareMetalHostSpec defines the desired state of BareMetalHost. Type object Required online Property Type Description architecture string CPU architecture of the host, e.g. "x86_64" or "aarch64". If unset, eventually populated by inspection. automatedCleaningMode string When set to disabled, automated cleaning will be avoided during provisioning and deprovisioning. bmc object How do we connect to the BMC? bootMACAddress string Which MAC address will PXE boot? This is optional for some types, but required for libvirt VMs driven by vbmc. bootMode string Select the method of initializing the hardware during boot. Defaults to UEFI. consumerRef object ConsumerRef can be used to store information about something that is using a host. When it is not empty, the host is considered "in use". customDeploy object A custom deploy procedure. description string Description is human-entered text used to help identify the host. externallyProvisioned boolean ExternallyProvisioned means something else is managing the image running on the host and the operator should only manage the power status and hardware inventory inspection. If the Image field is filled in, this field is ignored. firmware object BIOS configuration for bare metal server hardwareProfile string What is the name of the hardware profile for this host? Hardware profiles are deprecated and should not be used. Use the separate fields Architecture and RootDeviceHints instead. Set to "empty" to prepare for the future version of the API without hardware profiles. image object Image holds the details of the image to be provisioned. metaData object MetaData holds the reference to the Secret containing host metadata (e.g. meta_data.json) which is passed to the Config Drive. networkData object NetworkData holds the reference to the Secret containing network configuration (e.g. content of network_data.json) which is passed to the Config Drive. online boolean Should the server be online? preprovisioningNetworkDataName string PreprovisioningNetworkDataName is the name of the Secret in the local namespace containing network configuration (e.g. content of network_data.json) which is passed to the preprovisioning image, and to the Config Drive if not overridden by specifying NetworkData.
raid object RAID configuration for bare metal server rootDeviceHints object Provide guidance about how to choose the device for the image being provisioned. taints array Taints is the full, authoritative list of taints to apply to the corresponding Machine. This list will overwrite any modifications made to the Machine on an ongoing basis. taints[] object The node this Taint is attached to has the "effect" on any pod that does not tolerate the Taint. userData object UserData holds the reference to the Secret containing the user data to be passed to the host before it boots. 3.1.2. .spec.bmc Description How do we connect to the BMC? Type object Required address credentialsName Property Type Description address string Address holds the URL for accessing the controller on the network. credentialsName string The name of the secret containing the BMC credentials (requires keys "username" and "password"). disableCertificateVerification boolean DisableCertificateVerification disables verification of server certificates when using HTTPS to connect to the BMC. This is required when the server certificate is self-signed, but is insecure because it allows a man-in-the-middle to intercept the connection. 3.1.3. .spec.consumerRef Description ConsumerRef can be used to store information about something that is using a host. When it is not empty, the host is considered "in use". Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 3.1.4. .spec.customDeploy Description A custom deploy procedure. Type object Required method Property Type Description method string Custom deploy method name. This name is specific to the deploy ramdisk used. If you don't have a custom deploy ramdisk, you shouldn't use CustomDeploy. 3.1.5. .spec.firmware Description BIOS configuration for bare metal server Type object Property Type Description simultaneousMultithreadingEnabled boolean Allows a single physical processor core to appear as several logical processors. This supports the following options: true, false.
sriovEnabled boolean SR-IOV support enables a hypervisor to create virtual instances of a PCI-express device, potentially increasing performance. This supports following options: true, false. virtualizationEnabled boolean Supports the virtualization of platform hardware. This supports following options: true, false. 3.1.6. .spec.image Description Image holds the details of the image to be provisioned. Type object Required url Property Type Description checksum string Checksum is the checksum for the image. checksumType string ChecksumType is the checksum algorithm for the image, e.g md5, sha256 or sha512. The special value "auto" can be used to detect the algorithm from the checksum. If missing, MD5 is used. If in doubt, use "auto". format string DiskFormat contains the format of the image (raw, qcow2, ... ). Needs to be set to raw for raw images streaming. Note live-iso means an iso referenced by the url will be live-booted and not deployed to disk, and in this case the checksum options are not required and if specified will be ignored. url string URL is a location of an image to deploy. 3.1.7. .spec.metaData Description MetaData holds the reference to the Secret containing host metadata (e.g. meta_data.json) which is passed to the Config Drive. Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 3.1.8. .spec.networkData Description NetworkData holds the reference to the Secret containing network configuration (e.g content of network_data.json) which is passed to the Config Drive. Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 3.1.9. .spec.raid Description RAID configuration for bare metal server Type object Property Type Description hardwareRAIDVolumes `` The list of logical disks for hardware RAID, if rootDeviceHints isn't used, first volume is root volume. You can set the value of this field to [] to clear all the hardware RAID configurations. softwareRAIDVolumes `` The list of logical disks for software RAID, if rootDeviceHints isn't used, first volume is root volume. If HardwareRAIDVolumes is set this item will be invalid. The number of created Software RAID devices must be 1 or 2. If there is only one Software RAID device, it has to be a RAID-1. If there are two, the first one has to be a RAID-1, while the RAID level for the second one can be 0, 1, or 1+0. As the first RAID device will be the deployment device, enforcing a RAID-1 reduces the risk of ending up with a non-booting node in case of a disk failure. Software RAID will always be deleted. 3.1.10. .spec.rootDeviceHints Description Provide guidance about how to choose the device for the image being provisioned. Type object Property Type Description deviceName string A Linux device name like "/dev/vda", or a by-path link to it like "/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0". The hint must match the actual value exactly. hctl string A SCSI bus address like 0:0:0:0. The hint must match the actual value exactly. minSizeGigabytes integer The minimum size of the device in Gigabytes. model string A vendor-specific device identifier. The hint can be a substring of the actual value. rotational boolean True if the device should use spinning media, false otherwise. serialNumber string Device serial number. 
The hint must match the actual value exactly. vendor string The name of the vendor or manufacturer of the device. The hint can be a substring of the actual value. wwn string Unique storage identifier. The hint must match the actual value exactly. wwnVendorExtension string Unique vendor storage identifier. The hint must match the actual value exactly. wwnWithExtension string Unique storage identifier with the vendor extension appended. The hint must match the actual value exactly. 3.1.11. .spec.taints Description Taints is the full, authoritative list of taints to apply to the corresponding Machine. This list will overwrite any modifications made to the Machine on an ongoing basis. Type array 3.1.12. .spec.taints[] Description The node this Taint is attached to has the "effect" on any pod that does not tolerate the Taint. Type object Required effect key Property Type Description effect string Required. The effect of the taint on pods that do not tolerate the taint. Valid effects are NoSchedule, PreferNoSchedule and NoExecute. key string Required. The taint key to be applied to a node. timeAdded string TimeAdded represents the time at which the taint was added. It is only written for NoExecute taints. value string The taint value corresponding to the taint key. 3.1.13. .spec.userData Description UserData holds the reference to the Secret containing the user data to be passed to the host before it boots. Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 3.1.14. .status Description BareMetalHostStatus defines the observed state of BareMetalHost. Type object Required errorCount errorMessage hardwareProfile operationalStatus poweredOn provisioning Property Type Description errorCount integer ErrorCount records how many times the host has encoutered an error since the last successful operation errorMessage string the last error message reported by the provisioning subsystem errorType string ErrorType indicates the type of failure encountered when the OperationalStatus is OperationalStatusError goodCredentials object the last credentials we were able to validate as working hardware object The hardware discovered to exist on the host. hardwareProfile string The name of the profile matching the hardware details. lastUpdated string LastUpdated identifies when this status was last observed. operationHistory object OperationHistory holds information about operations performed on this host. operationalStatus string OperationalStatus holds the status of the host poweredOn boolean indicator for whether or not the host is powered on provisioning object Information tracked by the provisioner. triedCredentials object the last credentials we sent to the provisioning backend 3.1.15. .status.goodCredentials Description the last credentials we were able to validate as working Type object Property Type Description credentials object SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace credentialsVersion string 3.1.16. .status.goodCredentials.credentials Description SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 3.1.17. 
.status.hardware Description The hardware discovered to exist on the host. Type object Property Type Description cpu object CPU describes one processor on the host. firmware object Firmware describes the firmware on the host. hostname string nics array nics[] object NIC describes one network interface on the host. ramMebibytes integer storage array storage[] object Storage describes one storage device (disk, SSD, etc.) on the host. systemVendor object HardwareSystemVendor stores details about the whole hardware system. 3.1.18. .status.hardware.cpu Description CPU describes one processor on the host. Type object Property Type Description arch string clockMegahertz number ClockSpeed is a clock speed in MHz count integer flags array (string) model string 3.1.19. .status.hardware.firmware Description Firmware describes the firmware on the host. Type object Property Type Description bios object The BIOS for this firmware 3.1.20. .status.hardware.firmware.bios Description The BIOS for this firmware Type object Property Type Description date string The release/build date for this BIOS vendor string The vendor name for this BIOS version string The version of the BIOS 3.1.21. .status.hardware.nics Description Type array 3.1.22. .status.hardware.nics[] Description NIC describes one network interface on the host. Type object Property Type Description ip string The IP address of the interface. This will be an IPv4 or IPv6 address if one is present. If both IPv4 and IPv6 addresses are present in a dual-stack environment, two nics will be output, one with each IP. mac string The device MAC address model string The vendor and product IDs of the NIC, e.g. "0x8086 0x1572" name string The name of the network interface, e.g. "en0" pxe boolean Whether the NIC is PXE Bootable speedGbps integer The speed of the device in Gigabits per second vlanId integer The untagged VLAN ID vlans array The VLANs available vlans[] object VLAN represents the name and ID of a VLAN. 3.1.23. .status.hardware.nics[].vlans Description The VLANs available Type array 3.1.24. .status.hardware.nics[].vlans[] Description VLAN represents the name and ID of a VLAN. Type object Property Type Description id integer VLANID is a 12-bit 802.1Q VLAN identifier name string 3.1.25. .status.hardware.storage Description Type array 3.1.26. .status.hardware.storage[] Description Storage describes one storage device (disk, SSD, etc.) on the host. Type object Property Type Description alternateNames array (string) A list of alternate Linux device names of the disk, e.g. "/dev/sda". Note that this list is not exhaustive, and names may not be stable across reboots. hctl string The SCSI location of the device model string Hardware model name string A Linux device name of the disk, e.g. "/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0". This will be a name that is stable across reboots if one is available. rotational boolean Whether this disk represents rotational storage. This field is not recommended for usage, please prefer using 'Type' field instead, this field will be deprecated eventually. serialNumber string The serial number of the device sizeBytes integer The size of the disk in Bytes type string Device type, one of: HDD, SSD, NVME. vendor string The name of the vendor of the device wwn string The WWN of the device wwnVendorExtension string The WWN Vendor extension of the device wwnWithExtension string The WWN with the extension 3.1.27. .status.hardware.systemVendor Description HardwareSystemVendor stores details about the whole hardware system. 
Type object Property Type Description manufacturer string productName string serialNumber string 3.1.28. .status.operationHistory Description OperationHistory holds information about operations performed on this host. Type object Property Type Description deprovision object OperationMetric contains metadata about an operation (inspection, provisioning, etc.) used for tracking metrics. inspect object OperationMetric contains metadata about an operation (inspection, provisioning, etc.) used for tracking metrics. provision object OperationMetric contains metadata about an operation (inspection, provisioning, etc.) used for tracking metrics. register object OperationMetric contains metadata about an operation (inspection, provisioning, etc.) used for tracking metrics. 3.1.29. .status.operationHistory.deprovision Description OperationMetric contains metadata about an operation (inspection, provisioning, etc.) used for tracking metrics. Type object Property Type Description end `` start `` 3.1.30. .status.operationHistory.inspect Description OperationMetric contains metadata about an operation (inspection, provisioning, etc.) used for tracking metrics. Type object Property Type Description end `` start `` 3.1.31. .status.operationHistory.provision Description OperationMetric contains metadata about an operation (inspection, provisioning, etc.) used for tracking metrics. Type object Property Type Description end `` start `` 3.1.32. .status.operationHistory.register Description OperationMetric contains metadata about an operation (inspection, provisioning, etc.) used for tracking metrics. Type object Property Type Description end `` start `` 3.1.33. .status.provisioning Description Information tracked by the provisioner. Type object Required ID state Property Type Description ID string The machine's UUID from the underlying provisioning tool bootMode string BootMode indicates the boot mode used to provision the node customDeploy object Custom deploy procedure applied to the host. firmware object The Bios set by the user image object Image holds the details of the last image successfully provisioned to the host. raid object The Raid set by the user rootDeviceHints object The RootDevicehints set by the user state string An indiciator for what the provisioner is doing with the host. 3.1.34. .status.provisioning.customDeploy Description Custom deploy procedure applied to the host. Type object Required method Property Type Description method string Custom deploy method name. This name is specific to the deploy ramdisk used. If you don't have a custom deploy ramdisk, you shouldn't use CustomDeploy. 3.1.35. .status.provisioning.firmware Description The Bios set by the user Type object Property Type Description simultaneousMultithreadingEnabled boolean Allows a single physical processor core to appear as several logical processors. This supports following options: true, false. sriovEnabled boolean SR-IOV support enables a hypervisor to create virtual instances of a PCI-express device, potentially increasing performance. This supports following options: true, false. virtualizationEnabled boolean Supports the virtualization of platform hardware. This supports following options: true, false. 3.1.36. .status.provisioning.image Description Image holds the details of the last image successfully provisioned to the host. Type object Required url Property Type Description checksum string Checksum is the checksum for the image. 
checksumType string ChecksumType is the checksum algorithm for the image, e.g md5, sha256 or sha512. The special value "auto" can be used to detect the algorithm from the checksum. If missing, MD5 is used. If in doubt, use "auto". format string DiskFormat contains the format of the image (raw, qcow2, ... ). Needs to be set to raw for raw images streaming. Note live-iso means an iso referenced by the url will be live-booted and not deployed to disk, and in this case the checksum options are not required and if specified will be ignored. url string URL is a location of an image to deploy. 3.1.37. .status.provisioning.raid Description The Raid set by the user Type object Property Type Description hardwareRAIDVolumes `` The list of logical disks for hardware RAID, if rootDeviceHints isn't used, first volume is root volume. You can set the value of this field to [] to clear all the hardware RAID configurations. softwareRAIDVolumes `` The list of logical disks for software RAID, if rootDeviceHints isn't used, first volume is root volume. If HardwareRAIDVolumes is set this item will be invalid. The number of created Software RAID devices must be 1 or 2. If there is only one Software RAID device, it has to be a RAID-1. If there are two, the first one has to be a RAID-1, while the RAID level for the second one can be 0, 1, or 1+0. As the first RAID device will be the deployment device, enforcing a RAID-1 reduces the risk of ending up with a non-booting node in case of a disk failure. Software RAID will always be deleted. 3.1.38. .status.provisioning.rootDeviceHints Description The RootDevicehints set by the user Type object Property Type Description deviceName string A Linux device name like "/dev/vda", or a by-path link to it like "/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0". The hint must match the actual value exactly. hctl string A SCSI bus address like 0:0:0:0. The hint must match the actual value exactly. minSizeGigabytes integer The minimum size of the device in Gigabytes. model string A vendor-specific device identifier. The hint can be a substring of the actual value. rotational boolean True if the device should use spinning media, false otherwise. serialNumber string Device serial number. The hint must match the actual value exactly. vendor string The name of the vendor or manufacturer of the device. The hint can be a substring of the actual value. wwn string Unique storage identifier. The hint must match the actual value exactly. wwnVendorExtension string Unique vendor storage identifier. The hint must match the actual value exactly. wwnWithExtension string Unique storage identifier with the vendor extension appended. The hint must match the actual value exactly. 3.1.39. .status.triedCredentials Description the last credentials we sent to the provisioning backend Type object Property Type Description credentials object SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace credentialsVersion string 3.1.40. .status.triedCredentials.credentials Description SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 3.2. 
API endpoints The following API endpoints are available: /apis/metal3.io/v1alpha1/baremetalhosts GET : list objects of kind BareMetalHost /apis/metal3.io/v1alpha1/namespaces/{namespace}/baremetalhosts DELETE : delete collection of BareMetalHost GET : list objects of kind BareMetalHost POST : create a BareMetalHost /apis/metal3.io/v1alpha1/namespaces/{namespace}/baremetalhosts/{name} DELETE : delete a BareMetalHost GET : read the specified BareMetalHost PATCH : partially update the specified BareMetalHost PUT : replace the specified BareMetalHost /apis/metal3.io/v1alpha1/namespaces/{namespace}/baremetalhosts/{name}/status GET : read status of the specified BareMetalHost PATCH : partially update status of the specified BareMetalHost PUT : replace status of the specified BareMetalHost 3.2.1. /apis/metal3.io/v1alpha1/baremetalhosts HTTP method GET Description list objects of kind BareMetalHost Table 3.1. HTTP responses HTTP code Reponse body 200 - OK BareMetalHostList schema 401 - Unauthorized Empty 3.2.2. /apis/metal3.io/v1alpha1/namespaces/{namespace}/baremetalhosts HTTP method DELETE Description delete collection of BareMetalHost Table 3.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind BareMetalHost Table 3.3. HTTP responses HTTP code Reponse body 200 - OK BareMetalHostList schema 401 - Unauthorized Empty HTTP method POST Description create a BareMetalHost Table 3.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.5. Body parameters Parameter Type Description body BareMetalHost schema Table 3.6. HTTP responses HTTP code Reponse body 200 - OK BareMetalHost schema 201 - Created BareMetalHost schema 202 - Accepted BareMetalHost schema 401 - Unauthorized Empty 3.2.3. /apis/metal3.io/v1alpha1/namespaces/{namespace}/baremetalhosts/{name} Table 3.7. Global path parameters Parameter Type Description name string name of the BareMetalHost HTTP method DELETE Description delete a BareMetalHost Table 3.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed Table 3.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified BareMetalHost Table 3.10. HTTP responses HTTP code Reponse body 200 - OK BareMetalHost schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified BareMetalHost Table 3.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.12. HTTP responses HTTP code Reponse body 200 - OK BareMetalHost schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified BareMetalHost Table 3.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.14. Body parameters Parameter Type Description body BareMetalHost schema Table 3.15. HTTP responses HTTP code Reponse body 200 - OK BareMetalHost schema 201 - Created BareMetalHost schema 401 - Unauthorized Empty 3.2.4. /apis/metal3.io/v1alpha1/namespaces/{namespace}/baremetalhosts/{name}/status Table 3.16. 
Global path parameters Parameter Type Description name string name of the BareMetalHost HTTP method GET Description read status of the specified BareMetalHost Table 3.17. HTTP responses HTTP code Reponse body 200 - OK BareMetalHost schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified BareMetalHost Table 3.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.19. HTTP responses HTTP code Reponse body 200 - OK BareMetalHost schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified BareMetalHost Table 3.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.21. Body parameters Parameter Type Description body BareMetalHost schema Table 3.22. HTTP responses HTTP code Reponse body 200 - OK BareMetalHost schema 201 - Created BareMetalHost schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/provisioning_apis/baremetalhost-metal3-io-v1alpha1 |
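For orientation, the endpoints listed above correspond to ordinary oc operations against the BareMetalHost custom resource. The following examples are a sketch only: they assume the hosts live in the openshift-machine-api namespace, which is typical for installer-provisioned clusters but is not mandated by this reference, and <host_name> is a placeholder.

# List BareMetalHost objects (GET on the namespaced collection endpoint)
oc get baremetalhosts -n openshift-machine-api

# Read a single host, equivalent to GET .../baremetalhosts/{name}
oc get baremetalhost <host_name> -n openshift-machine-api -o yaml

# Partially update the spec, equivalent to PATCH on the named endpoint;
# here the host is taken offline by setting spec.online to false
oc patch baremetalhost <host_name> -n openshift-machine-api --type merge -p '{"spec":{"online":false}}'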
Chapter 23. How is core hour usage data calculated? | Chapter 23. How is core hour usage data calculated? The introduction of the pay-as-you-go On-Demand subscription type in 2021 added new units of measurement to the subscriptions service, beyond the existing units of sockets and cores. These new units are derived units: compound units that are calculated from other base units. At this time, the derived units used by the subscriptions service add a base unit of time, so they measure consumption over a period of time. Time base units can be combined with base units that are appropriate for specific products, resulting in derived units that meter a product according to the types of resources that it consumes. In addition, for a subset of those time-based units, usage data is derived from frequent, time-based sampling instead of direct counting. Sampling might be used for a particular product or service partly because of the required unit of measurement and partly because of how the Red Hat OpenShift monitoring stack tools can gather usage data for that unit of measurement. When the subscriptions service tracks subscription usage with time-based metrics that also use sampling, the metrics used and the units of measurement applied to those metrics are based on the subscription terms for those products. The following list shows examples of time-based metrics that also use sampling to gather usage data: Red Hat OpenShift Container Platform On-Demand usage is measured with a single derived unit of measurement, the core hour. A core hour is a unit of measurement for computational activity on one core (as defined by the subscription terms), for a total of one hour, measured to the granularity of the meter that is used. Red Hat OpenShift Dedicated On-Demand is measured with two derived units of measurement. It is measured in core hours to track the workload usage on the compute machines, and in instance hours to track instance availability as the control plane usage on the control plane machines (formerly the master machines in older versions of Red Hat OpenShift). An instance hour is the availability of a Red Hat service instance, during which it can accept and execute customer workloads. For Red Hat OpenShift Dedicated On-Demand, instance hours are measured by summing the availability of all active clusters, in hours. Red Hat OpenShift AI (RHOAI) On-Demand usage and Red Hat Advanced Cluster Security for Kubernetes (RHACS) On-Demand usage are measured with a single derived unit of measurement, the vCPU hour. A vCPU hour is a unit of measurement for cluster size on one virtual core (as defined by the subscription terms), for a total of one hour, measured to the granularity of the meter that is used. 23.1. An example for Red Hat OpenShift On-Demand subscriptions The following information for Red Hat OpenShift On-Demand subscriptions includes an explanation of the applicable units of measurement, a detailed scenario that shows the steps that the subscriptions service and the other Hybrid Cloud Console and monitoring stack tools use to calculate core hour usage, and additional information that can help you understand how core hour usage is reported in the subscriptions service.
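Before the detailed walkthrough, a small worked example may help make the derived-unit idea concrete. The short Python sketch below is illustrative only and is not part of the official metering pipeline; the core_hours function is an invented helper, not a subscriptions service API.

# Illustrative only: a derived unit combines a resource base unit with a time base unit.
def core_hours(cores, minutes):
    return cores * (minutes / 60)

print(core_hours(1, 60))   # one core busy for a full hour        -> 1.0 core hour
print(core_hours(12, 5))   # twelve cores for five minutes        -> 1.0 core hour
print(core_hours(4, 15))   # four cores for a quarter of an hour  -> 1.0 core hour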
You can use this information to help you understand the basic principles of how the subscriptions service calculates usage for the time-based units of measurement that also use sampling. 23.1.1. Units of measurement for Red Hat OpenShift On-Demand subscriptions The following table provides additional details about the derived units of measurement that are used for the Red Hat OpenShift On-Demand products. These details include the name and definition of the unit of measurement along with examples of usage that would equal one of that unit of measurement. In addition, a sample Prometheus query language (PromQL) query is provided for each unit. This example query is not the complete set of processes by which the subscriptions service calculates usage, but it is a query that you can run locally in a cluster to help you understand some of those processes. Table 23.1. Units of measurement for Red Hat OpenShift Container Platform On-Demand and Red Hat OpenShift Dedicated On-Demand Unit of measurement Definition Examples core hour Computational activity on one core (as defined by the subscription terms), for a total of one hour, measured to the granularity of the meter that is used. For Red Hat OpenShift Container Platform On-Demand and Red Hat OpenShift Dedicated On-Demand workload usage: A single core running for 1 hour. Many cores running in short time intervals to equal 1 hour. Core hour base PromQL query that you can run locally on your cluster: instance hours, in cluster hours The availability of a Red Hat service instance, during which it can accept and execute customer workloads. In a cluster hour context, for Red Hat OpenShift Dedicated On-Demand control plane usage: A single cluster that spawns pods and runs applications for 1 hour. Two clusters that spawn pods and run applications for 30 minutes. Instance hour base PromQL query that you can run locally on your cluster: 23.1.2. Example core hour usage calculation The following example describes the process for calculating core hour usage for a Red Hat OpenShift On-Demand subscription. You can use this example to help you understand other derived units of measurement where time is one of the base units of the usage calculation and sampling is used as part of the measurement. For example, the vCPU hour calculation for Red Hat OpenShift AI On-Demand is done in the same way, except that the measurement is for virtual cores. To obtain usage in core hours, the subscriptions service uses numerical integration. Numerical integration is also commonly known as an "area under the curve" calculation, where the area of a complex shape is calculated by using the area of a series of rectangles. The tools in the Red Hat OpenShift monitoring stack contain the Prometheus query language (PromQL) function sum_over_time , a function that aggregates data for a time interval. This function is the foundation of the core hours calculation in the subscriptions service. Note You can run this PromQL query locally in a cluster to show results that include the cluster size and a snapshot of usage. Every 2 minutes, a cluster reports its size in cores to the monitoring stack tools, including Telemetry. One of the Hybrid Cloud Console tools, the Tally engine, reviews this information every hour in 5 minute intervals. Because the cluster reports to the monitoring stack tools every 2 minutes, each 5 minute interval might contain up to three values for cluster size. The Tally engine selects the smallest cluster size value to represent the full 5 minute interval. 
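A minimal sketch of that selection rule, written in Python purely for illustration, is shown below. The Tally engine does not necessarily work this way internally, and the sample data layout here is invented.

# Illustrative sketch of the per-interval sampling rule (not the real Tally engine code).
# Each sample is (seconds_since_midnight, cluster_size_in_cores), reported about every 2 minutes.
samples = [(0, 4), (120, 6), (240, 6), (300, 4), (420, 4), (540, 8)]

INTERVAL = 300  # 5 minutes, in seconds

def min_size_per_interval(samples, interval=INTERVAL):
    """Group samples into 5-minute windows and keep the smallest reported size in each window."""
    windows = {}
    for timestamp, size in samples:
        window = timestamp // interval
        windows[window] = min(size, windows.get(window, size))
    return windows

print(min_size_per_interval(samples))
# {0: 4, 1: 4} -> each 5-minute window is represented by its smallest reported size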
The following example shows how a sample cluster size is collected every 2 minutes and how the smallest size is selected for the 5 minute interval. Figure 23.1. Calculating the cluster size Then, for each cluster, the Tally engine uses the selected value and creates a box of usage for each 5 minute interval. The area of the 5-minute box is 300 seconds times the height in cores. For every 5 minute box, this core seconds value is stored and eventually used to calculate the daily, account-wide aggregation of core hour usage. The following example shows a graphical representation of how an area under the curve is calculated, with cluster size and time used to create usage boxes, and the area of each box used as building blocks to create daily core hour usage totals. Figure 23.2. Calculating the core hours Every day, each 5 minute usage value is added to create the total usage of a cluster on that day. Then the totals for each cluster are combined to create daily usage information for all clusters in the account. In addition, the core seconds are converted to core hours. During the regular 24-hour update of the subscriptions service with the day's data, the core hour usage information for pay-as-you-go subscriptions is updated. In the subscriptions service, the daily core hour usage for the account is plotted on the usage and utilization graph, and additional core hours used information shows the accumulated total for the account. The current instances table also lists each cluster in the account and shows the cumulative number of core hours used in that cluster. Note The core hour usage data for the account and for individual clusters that is shown in the subscriptions service interface is rounded to two decimal places for display purposes. However, the data that is used for the subscriptions service calculations and that is provided to the Red Hat Marketplace billing service is at the millicore level, rounded to 6 decimal places. Every month, the monthly core hour usage total for your account is supplied to Red Hat Marketplace for invoice preparation and billing. For subscription types that are offered with a four-to-one relationship of core hour to vCPU hour, the core hour total from the subscriptions service is divided by 4 for the Red Hat Marketplace billing activities. For subscription types that are offered with a one-to-one relationship of core hour to vCPU hour, no conversion in the total is made. After the monthly total is sent to Red Hat Marketplace and the new month begins, the usage values for the subscriptions service display reset to 0 for the new current month. You can use filtering to view usage data for months for the span of one year. 23.1.3. Resolving questions about core hour usage If you have questions about core hour usage, first use the following steps as a diagnostic tool: In the subscriptions service, review the cumulative total for the month for each cluster in the current instances table. Look for any cluster that shows unusual usage, based on your understanding of how that cluster is configured and deployed. Note The current instances table displays a snapshot of the most recent monthly cumulative total for each cluster. Currently this information updates a few times per day. This value resets to 0 at the beginning of each month. Then review the daily core hour totals and trends in the usage and utilization graph. Look for any day that shows unusual usage. It is likely that unusual usage on a cluster that you found in the step corresponds to this day. 
From these initial troubleshooting steps, you might be able to find the cluster owner and discuss whether the unusual usage is due to an extremely high workload, problems with cluster configuration, or other issues. If you continue to have questions after using these steps, you can contact your Red Hat account team to help you understand your core hour usage. For questions about billing, use the support instructions for Red Hat Marketplace. | [
"sum_over_time((max by (_id) (cluster:usage:workload:capacity_physical_cpu_cores:min:5m))[1h:1s])",
"group(cluster:usage:workload:capacity_physical_cpu_cores:max:5m[1h:5m]) by (_id)",
"sum_over_time((max by (_id) (cluster:usage:workload:capacity_physical_cpu_cores:min:5m))[1h:1s])"
] | https://docs.redhat.com/en/documentation/subscription_central/1-latest/html/getting_started_with_the_subscriptions_service/con-trbl-how-core-hour-usage-calculated_assembly-troubleshooting-common-questions-ctxt |
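To tie the calculation in this chapter together, the following Python sketch approximates the "area under the curve" aggregation in plain code. It is illustrative only, assumes the per-interval minimum sizes have already been selected as described above, and ignores reporting gaps that a real implementation must handle.

# Illustrative "area under the curve" aggregation (not the actual subscriptions service code).
INTERVAL_SECONDS = 300     # each usage box covers 5 minutes
SECONDS_PER_HOUR = 3600

def daily_core_hours(interval_sizes):
    """interval_sizes: the minimum cluster size (in cores) chosen for each 5-minute interval of a day."""
    core_seconds = sum(size * INTERVAL_SECONDS for size in interval_sizes)
    return core_seconds / SECONDS_PER_HOUR

# A cluster that held steady at 4 cores all day (288 five-minute intervals):
print(round(daily_core_hours([4] * 288), 6))   # 96.0 core hours

# Account-wide daily usage is the sum across clusters:
clusters = {"cluster-a": [4] * 288, "cluster-b": [2] * 144 + [0] * 144}
print(round(sum(daily_core_hours(sizes) for sizes in clusters.values()), 6))   # 120.0 core hours

The rounding to six decimal places in this sketch mirrors the millicore-level precision that the chapter describes for the data supplied to Red Hat Marketplace.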
Security-Enhanced Linux | Security-Enhanced Linux Red Hat Enterprise Linux 6 User Guide Mirek Jahoda Red Hat Customer Content Services [email protected] Robert Kratky Red Hat Customer Content Services Barbora Ancincova Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security-enhanced_linux/index |
Support | Support Red Hat OpenShift Service on AWS 4 Red Hat OpenShift Service on AWS Support. Red Hat OpenShift Documentation Team | [
"oc api-resources -o name | grep config.openshift.io",
"oc explain <resource_name>.config.openshift.io",
"oc get <resource_name>.config -o yaml",
"oc edit <resource_name>.config -o yaml",
"oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{\"\\n\"}'",
"curl -G -k -H \"Authorization: Bearer USD(oc whoami -t)\" https://USD(oc get route prometheus-k8s-federate -n openshift-monitoring -o jsonpath=\"{.spec.host}\")/federate --data-urlencode 'match[]={__name__=~\"cluster:usage:.*\"}' --data-urlencode 'match[]={__name__=\"count:up0\"}' --data-urlencode 'match[]={__name__=\"count:up1\"}' --data-urlencode 'match[]={__name__=\"cluster_version\"}' --data-urlencode 'match[]={__name__=\"cluster_version_available_updates\"}' --data-urlencode 'match[]={__name__=\"cluster_version_capability\"}' --data-urlencode 'match[]={__name__=\"cluster_operator_up\"}' --data-urlencode 'match[]={__name__=\"cluster_operator_conditions\"}' --data-urlencode 'match[]={__name__=\"cluster_version_payload\"}' --data-urlencode 'match[]={__name__=\"cluster_installer\"}' --data-urlencode 'match[]={__name__=\"cluster_infrastructure_provider\"}' --data-urlencode 'match[]={__name__=\"cluster_feature_set\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_object_counts:sum\"}' --data-urlencode 'match[]={__name__=\"ALERTS\",alertstate=\"firing\"}' --data-urlencode 'match[]={__name__=\"code:apiserver_request_total:rate:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:capacity_cpu_cores:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:capacity_memory_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:cpu_usage_cores:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:memory_usage_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"openshift:cpu_usage_cores:sum\"}' --data-urlencode 'match[]={__name__=\"openshift:memory_usage_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"workload:cpu_usage_cores:sum\"}' --data-urlencode 'match[]={__name__=\"workload:memory_usage_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:virt_platform_nodes:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:node_instance_type_count:sum\"}' --data-urlencode 'match[]={__name__=\"cnv:vmi_status_running:count\"}' --data-urlencode 'match[]={__name__=\"cluster:vmi_request_cpu_cores:sum\"}' --data-urlencode 'match[]={__name__=\"node_role_os_version_machine:cpu_capacity_cores:sum\"}' --data-urlencode 'match[]={__name__=\"node_role_os_version_machine:cpu_capacity_sockets:sum\"}' --data-urlencode 'match[]={__name__=\"subscription_sync_total\"}' --data-urlencode 'match[]={__name__=\"olm_resolution_duration_seconds\"}' --data-urlencode 'match[]={__name__=\"csv_succeeded\"}' --data-urlencode 'match[]={__name__=\"csv_abnormal\"}' --data-urlencode 'match[]={__name__=\"cluster:kube_persistentvolumeclaim_resource_requests_storage_bytes:provisioner:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:kubelet_volume_stats_used_bytes:provisioner:sum\"}' --data-urlencode 'match[]={__name__=\"ceph_cluster_total_bytes\"}' --data-urlencode 'match[]={__name__=\"ceph_cluster_total_used_raw_bytes\"}' --data-urlencode 'match[]={__name__=\"ceph_health_status\"}' --data-urlencode 'match[]={__name__=\"odf_system_raw_capacity_total_bytes\"}' --data-urlencode 'match[]={__name__=\"odf_system_raw_capacity_used_bytes\"}' --data-urlencode 'match[]={__name__=\"odf_system_health_status\"}' --data-urlencode 'match[]={__name__=\"job:ceph_osd_metadata:count\"}' --data-urlencode 'match[]={__name__=\"job:kube_pv:count\"}' --data-urlencode 'match[]={__name__=\"job:odf_system_pvs:count\"}' --data-urlencode 'match[]={__name__=\"job:ceph_pools_iops:total\"}' --data-urlencode 'match[]={__name__=\"job:ceph_pools_iops_bytes:total\"}' --data-urlencode 'match[]={__name__=\"job:ceph_versions_running:count\"}' 
--data-urlencode 'match[]={__name__=\"job:noobaa_total_unhealthy_buckets:sum\"}' --data-urlencode 'match[]={__name__=\"job:noobaa_bucket_count:sum\"}' --data-urlencode 'match[]={__name__=\"job:noobaa_total_object_count:sum\"}' --data-urlencode 'match[]={__name__=\"odf_system_bucket_count\", system_type=\"OCS\", system_vendor=\"Red Hat\"}' --data-urlencode 'match[]={__name__=\"odf_system_objects_total\", system_type=\"OCS\", system_vendor=\"Red Hat\"}' --data-urlencode 'match[]={__name__=\"noobaa_accounts_num\"}' --data-urlencode 'match[]={__name__=\"noobaa_total_usage\"}' --data-urlencode 'match[]={__name__=\"console_url\"}' --data-urlencode 'match[]={__name__=\"cluster:ovnkube_master_egress_routing_via_host:max\"}' --data-urlencode 'match[]={__name__=\"cluster:network_attachment_definition_instances:max\"}' --data-urlencode 'match[]={__name__=\"cluster:network_attachment_definition_enabled_instance_up:max\"}' --data-urlencode 'match[]={__name__=\"cluster:ingress_controller_aws_nlb_active:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:route_metrics_controller_routes_per_shard:min\"}' --data-urlencode 'match[]={__name__=\"cluster:route_metrics_controller_routes_per_shard:max\"}' --data-urlencode 'match[]={__name__=\"cluster:route_metrics_controller_routes_per_shard:avg\"}' --data-urlencode 'match[]={__name__=\"cluster:route_metrics_controller_routes_per_shard:median\"}' --data-urlencode 'match[]={__name__=\"cluster:openshift_route_info:tls_termination:sum\"}' --data-urlencode 'match[]={__name__=\"insightsclient_request_send_total\"}' --data-urlencode 'match[]={__name__=\"cam_app_workload_migrations\"}' --data-urlencode 'match[]={__name__=\"cluster:apiserver_current_inflight_requests:sum:max_over_time:2m\"}' --data-urlencode 'match[]={__name__=\"cluster:alertmanager_integrations:max\"}' --data-urlencode 'match[]={__name__=\"cluster:telemetry_selected_series:count\"}' --data-urlencode 'match[]={__name__=\"openshift:prometheus_tsdb_head_series:sum\"}' --data-urlencode 'match[]={__name__=\"openshift:prometheus_tsdb_head_samples_appended_total:sum\"}' --data-urlencode 'match[]={__name__=\"monitoring:container_memory_working_set_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"namespace_job:scrape_series_added:topk3_sum1h\"}' --data-urlencode 'match[]={__name__=\"namespace_job:scrape_samples_post_metric_relabeling:topk3\"}' --data-urlencode 'match[]={__name__=\"monitoring:haproxy_server_http_responses_total:sum\"}' --data-urlencode 'match[]={__name__=\"rhmi_status\"}' --data-urlencode 'match[]={__name__=\"status:upgrading:version:rhoam_state:max\"}' --data-urlencode 'match[]={__name__=\"state:rhoam_critical_alerts:max\"}' --data-urlencode 'match[]={__name__=\"state:rhoam_warning_alerts:max\"}' --data-urlencode 'match[]={__name__=\"rhoam_7d_slo_percentile:max\"}' --data-urlencode 'match[]={__name__=\"rhoam_7d_slo_remaining_error_budget:max\"}' --data-urlencode 'match[]={__name__=\"cluster_legacy_scheduler_policy\"}' --data-urlencode 'match[]={__name__=\"cluster_master_schedulable\"}' --data-urlencode 'match[]={__name__=\"che_workspace_status\"}' --data-urlencode 'match[]={__name__=\"che_workspace_started_total\"}' --data-urlencode 'match[]={__name__=\"che_workspace_failure_total\"}' --data-urlencode 'match[]={__name__=\"che_workspace_start_time_seconds_sum\"}' --data-urlencode 'match[]={__name__=\"che_workspace_start_time_seconds_count\"}' --data-urlencode 'match[]={__name__=\"cco_credentials_mode\"}' --data-urlencode 
'match[]={__name__=\"cluster:kube_persistentvolume_plugin_type_counts:sum\"}' --data-urlencode 'match[]={__name__=\"visual_web_terminal_sessions_total\"}' --data-urlencode 'match[]={__name__=\"acm_managed_cluster_info\"}' --data-urlencode 'match[]={__name__=\"cluster:vsphere_vcenter_info:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:vsphere_esxi_version_total:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:vsphere_node_hw_version_total:sum\"}' --data-urlencode 'match[]={__name__=\"openshift:build_by_strategy:sum\"}' --data-urlencode 'match[]={__name__=\"rhods_aggregate_availability\"}' --data-urlencode 'match[]={__name__=\"rhods_total_users\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_disk_wal_fsync_duration_seconds:histogram_quantile\",quantile=\"0.99\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_mvcc_db_total_size_in_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_network_peer_round_trip_time_seconds:histogram_quantile\",quantile=\"0.99\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_mvcc_db_total_size_in_use_in_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_disk_backend_commit_duration_seconds:histogram_quantile\",quantile=\"0.99\"}' --data-urlencode 'match[]={__name__=\"jaeger_operator_instances_storage_types\"}' --data-urlencode 'match[]={__name__=\"jaeger_operator_instances_strategies\"}' --data-urlencode 'match[]={__name__=\"jaeger_operator_instances_agent_strategies\"}' --data-urlencode 'match[]={__name__=\"appsvcs:cores_by_product:sum\"}' --data-urlencode 'match[]={__name__=\"nto_custom_profiles:count\"}' --data-urlencode 'match[]={__name__=\"openshift_csi_share_configmap\"}' --data-urlencode 'match[]={__name__=\"openshift_csi_share_secret\"}' --data-urlencode 'match[]={__name__=\"openshift_csi_share_mount_failures_total\"}' --data-urlencode 'match[]={__name__=\"openshift_csi_share_mount_requests_total\"}' --data-urlencode 'match[]={__name__=\"cluster:velero_backup_total:max\"}' --data-urlencode 'match[]={__name__=\"cluster:velero_restore_total:max\"}' --data-urlencode 'match[]={__name__=\"eo_es_storage_info\"}' --data-urlencode 'match[]={__name__=\"eo_es_redundancy_policy_info\"}' --data-urlencode 'match[]={__name__=\"eo_es_defined_delete_namespaces_total\"}' --data-urlencode 'match[]={__name__=\"eo_es_misconfigured_memory_resources_info\"}' --data-urlencode 'match[]={__name__=\"cluster:eo_es_data_nodes_total:max\"}' --data-urlencode 'match[]={__name__=\"cluster:eo_es_documents_created_total:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:eo_es_documents_deleted_total:sum\"}' --data-urlencode 'match[]={__name__=\"pod:eo_es_shards_total:max\"}' --data-urlencode 'match[]={__name__=\"eo_es_cluster_management_state_info\"}' --data-urlencode 'match[]={__name__=\"imageregistry:imagestreamtags_count:sum\"}' --data-urlencode 'match[]={__name__=\"imageregistry:operations_count:sum\"}' --data-urlencode 'match[]={__name__=\"log_logging_info\"}' --data-urlencode 'match[]={__name__=\"log_collector_error_count_total\"}' --data-urlencode 'match[]={__name__=\"log_forwarder_pipeline_info\"}' --data-urlencode 'match[]={__name__=\"log_forwarder_input_info\"}' --data-urlencode 'match[]={__name__=\"log_forwarder_output_info\"}' --data-urlencode 'match[]={__name__=\"cluster:log_collected_bytes_total:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:log_logged_bytes_total:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:kata_monitor_running_shim_count:sum\"}' --data-urlencode 
'match[]={__name__=\"platform:hypershift_hostedclusters:max\"}' --data-urlencode 'match[]={__name__=\"platform:hypershift_nodepools:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_unhealthy_bucket_claims:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_buckets_claims:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_unhealthy_namespace_resources:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_namespace_resources:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_unhealthy_namespace_buckets:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_namespace_buckets:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_accounts:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_usage:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_system_health_status:max\"}' --data-urlencode 'match[]={__name__=\"ocs_advanced_feature_usage\"}' --data-urlencode 'match[]={__name__=\"os_image_url_override:sum\"}' --data-urlencode 'match[]={__name__=\"openshift:openshift_network_operator_ipsec_state:info\"}'",
"apiVersion: v1 kind: ConfigMap data: config.yaml: | dataReporting: obfuscation: - workload_names",
"oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.18.0",
"oc adm must-gather -- /usr/bin/gather_audit_logs",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-must-gather-5drcj must-gather-bklx4 2/2 Running 0 72s openshift-must-gather-5drcj must-gather-s8sdh 2/2 Running 0 72s",
"oc adm must-gather --run-namespace <namespace> --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.18.0",
"oc adm must-gather",
"tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ 1",
"oc adm must-gather --image-stream=openshift/must-gather \\ 1 --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.18.0 2",
"oc adm must-gather --image=USD(oc -n openshift-logging get deployment.apps/cluster-logging-operator -o jsonpath='{.spec.template.spec.containers[?(@.name == \"cluster-logging-operator\")].image}')",
"├── cluster-logging │ ├── clo │ │ ├── cluster-logging-operator-74dd5994f-6ttgt │ │ ├── clusterlogforwarder_cr │ │ ├── cr │ │ ├── csv │ │ ├── deployment │ │ └── logforwarding_cr │ ├── collector │ │ ├── fluentd-2tr64 │ ├── eo │ │ ├── csv │ │ ├── deployment │ │ └── elasticsearch-operator-7dc7d97b9d-jb4r4 │ ├── es │ │ ├── cluster-elasticsearch │ │ │ ├── aliases │ │ │ ├── health │ │ │ ├── indices │ │ │ ├── latest_documents.json │ │ │ ├── nodes │ │ │ ├── nodes_stats.json │ │ │ └── thread_pool │ │ ├── cr │ │ ├── elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms │ │ └── logs │ │ ├── elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms │ ├── install │ │ ├── co_logs │ │ ├── install_plan │ │ ├── olmo_logs │ │ └── subscription │ └── kibana │ ├── cr │ ├── kibana-9d69668d4-2rkvz ├── cluster-scoped-resources │ └── core │ ├── nodes │ │ ├── ip-10-0-146-180.eu-west-1.compute.internal.yaml │ └── persistentvolumes │ ├── pvc-0a8d65d9-54aa-4c44-9ecc-33d9381e41c1.yaml ├── event-filter.html ├── gather-debug.log └── namespaces ├── openshift-logging │ ├── apps │ │ ├── daemonsets.yaml │ │ ├── deployments.yaml │ │ ├── replicasets.yaml │ │ └── statefulsets.yaml │ ├── batch │ │ ├── cronjobs.yaml │ │ └── jobs.yaml │ ├── core │ │ ├── configmaps.yaml │ │ ├── endpoints.yaml │ │ ├── events │ │ │ ├── elasticsearch-im-app-1596020400-gm6nl.1626341a296c16a1.yaml │ │ │ ├── elasticsearch-im-audit-1596020400-9l9n4.1626341a2af81bbd.yaml │ │ │ ├── elasticsearch-im-infra-1596020400-v98tk.1626341a2d821069.yaml │ │ │ ├── elasticsearch-im-app-1596020400-cc5vc.1626341a3019b238.yaml │ │ │ ├── elasticsearch-im-audit-1596020400-s8d5s.1626341a31f7b315.yaml │ │ │ ├── elasticsearch-im-infra-1596020400-7mgv8.1626341a35ea59ed.yaml │ │ ├── events.yaml │ │ ├── persistentvolumeclaims.yaml │ │ ├── pods.yaml │ │ ├── replicationcontrollers.yaml │ │ ├── secrets.yaml │ │ └── services.yaml │ ├── openshift-logging.yaml │ ├── pods │ │ ├── cluster-logging-operator-74dd5994f-6ttgt │ │ │ ├── cluster-logging-operator │ │ │ │ └── cluster-logging-operator │ │ │ │ └── logs │ │ │ │ ├── current.log │ │ │ │ ├── previous.insecure.log │ │ │ │ └── previous.log │ │ │ └── cluster-logging-operator-74dd5994f-6ttgt.yaml │ │ ├── cluster-logging-operator-registry-6df49d7d4-mxxff │ │ │ ├── cluster-logging-operator-registry │ │ │ │ └── cluster-logging-operator-registry │ │ │ │ └── logs │ │ │ │ ├── current.log │ │ │ │ ├── previous.insecure.log │ │ │ │ └── previous.log │ │ │ ├── cluster-logging-operator-registry-6df49d7d4-mxxff.yaml │ │ │ └── mutate-csv-and-generate-sqlite-db │ │ │ └── mutate-csv-and-generate-sqlite-db │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── previous.insecure.log │ │ │ └── previous.log │ │ ├── elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms │ │ ├── elasticsearch-im-app-1596030300-bpgcx │ │ │ ├── elasticsearch-im-app-1596030300-bpgcx.yaml │ │ │ └── indexmanagement │ │ │ └── indexmanagement │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── previous.insecure.log │ │ │ └── previous.log │ │ ├── fluentd-2tr64 │ │ │ ├── fluentd │ │ │ │ └── fluentd │ │ │ │ └── logs │ │ │ │ ├── current.log │ │ │ │ ├── previous.insecure.log │ │ │ │ └── previous.log │ │ │ ├── fluentd-2tr64.yaml │ │ │ └── fluentd-init │ │ │ └── fluentd-init │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── previous.insecure.log │ │ │ └── previous.log │ │ ├── kibana-9d69668d4-2rkvz │ │ │ ├── kibana │ │ │ │ └── kibana │ │ │ │ └── logs │ │ │ │ ├── current.log │ │ │ │ ├── previous.insecure.log │ │ │ │ └── previous.log │ │ │ ├── kibana-9d69668d4-2rkvz.yaml │ │ │ └── kibana-proxy │ │ │ └── kibana-proxy │ │ │ └── logs │ │ │ 
├── current.log │ │ │ ├── previous.insecure.log │ │ │ └── previous.log │ └── route.openshift.io │ └── routes.yaml └── openshift-operators-redhat ├──",
"oc adm must-gather --image-stream=openshift/must-gather \\ 1 --image=quay.io/kubevirt/must-gather 2",
"tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ 1",
"oc adm must-gather -- gather_network_logs",
"tar cvaf must-gather.tar.gz must-gather.local.472290403699006248 1",
"Disk usage exceeds the volume percentage of 30% for mounted directory. Exiting",
"oc adm must-gather --volume-percentage <storage_percentage>",
"oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{\"\\n\"}'",
"oc adm node-logs --role=master -u kubelet 1",
"oc adm node-logs --role=master --path=openshift-apiserver",
"oc adm node-logs --role=master --path=openshift-apiserver/audit.log",
"oc adm must-gather --dest-dir /tmp/captures \\// <.> --source-dir '/tmp/tcpdump/' \\// <.> --image registry.redhat.io/openshift4/network-tools-rhel8:latest \\// <.> --node-selector 'node-role.kubernetes.io/worker' \\// <.> --host-network=true \\// <.> --timeout 30s \\// <.> -- tcpdump -i any \\// <.> -w /tmp/tcpdump/%Y-%m-%dT%H:%M:%S.pcap -W 1 -G 300",
"tmp/captures ├── event-filter.html ├── ip-10-0-192-217-ec2-internal 1 │ └── registry-redhat-io-openshift4-network-tools-rhel8-sha256-bca │ └── 2022-01-13T19:31:31.pcap ├── ip-10-0-201-178-ec2-internal 2 │ └── registry-redhat-io-openshift4-network-tools-rhel8-sha256-bca │ └── 2022-01-13T19:31:30.pcap ├── ip- └── timestamp",
"oc get nodes",
"oc debug node/my-cluster-node",
"chroot /host",
"ip ad",
"toolbox",
"tcpdump -nn -s 0 -i ens5 -w /host/var/tmp/my-cluster-node_USD(date +%d_%m_%Y-%H_%M_%S-%Z).pcap 1",
"chroot /host crictl ps",
"chroot /host crictl inspect --output yaml a7fe32346b120 | grep 'pid' | awk '{print USD2}'",
"nsenter -n -t 49628 -- tcpdump -nn -i ens5 -w /host/var/tmp/my-cluster-node-my-container_USD(date +%d_%m_%Y-%H_%M_%S-%Z).pcap 1",
"oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/my-tcpdump-capture-file.pcap' > /tmp/my-tcpdump-capture-file.pcap 1",
"oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/my-diagnostic-data.tar.gz' > /var/tmp/my-diagnostic-data.tar.gz 1",
"chroot /host",
"toolbox",
"dnf install -y <package_name>",
"chroot /host",
"REGISTRY=quay.io 1 IMAGE=fedora/fedora:latest 2 TOOLBOX_NAME=toolbox-fedora-latest 3",
"toolbox",
"oc get clusterversion",
"NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.13.8 True False 8h Cluster version is 4.13.8",
"oc describe clusterversion",
"Name: version Namespace: Labels: <none> Annotations: <none> API Version: config.openshift.io/v1 Kind: ClusterVersion Image: quay.io/openshift-release-dev/ocp-release@sha256:a956488d295fe5a59c8663a4d9992b9b5d0950f510a7387dbbfb8d20fc5970ce URL: https://access.redhat.com/errata/RHSA-2023:4456 Version: 4.13.8 History: Completion Time: 2023-08-17T13:20:21Z Image: quay.io/openshift-release-dev/ocp-release@sha256:a956488d295fe5a59c8663a4d9992b9b5d0950f510a7387dbbfb8d20fc5970ce Started Time: 2023-08-17T12:59:45Z State: Completed Verified: false Version: 4.13.8",
"rosa logs install --cluster=<cluster_name>",
"rosa logs install --cluster=<cluster_name> --watch",
"rosa logs uninstall --cluster=<cluster_name>",
"rosa logs uninstall --cluster=<cluster_name> --watch",
"rosa verify permissions",
"rosa verify quota",
"oc get nodes",
"oc adm top nodes",
"oc adm top node my-node",
"oc get subs -n <operator_namespace>",
"oc describe sub <subscription_name> -n <operator_namespace>",
"Name: cluster-logging Namespace: openshift-logging Labels: operators.coreos.com/cluster-logging.openshift-logging= Annotations: <none> API Version: operators.coreos.com/v1alpha1 Kind: Subscription Conditions: Last Transition Time: 2019-07-29T13:42:57Z Message: all available catalogsources are healthy Reason: AllCatalogSourcesHealthy Status: False Type: CatalogSourcesUnhealthy",
"oc get catalogsources -n openshift-marketplace",
"NAME DISPLAY TYPE PUBLISHER AGE certified-operators Certified Operators grpc Red Hat 55m community-operators Community Operators grpc Red Hat 55m example-catalog Example Catalog grpc Example Org 2m25s redhat-marketplace Red Hat Marketplace grpc Red Hat 55m redhat-operators Red Hat Operators grpc Red Hat 55m",
"oc describe catalogsource example-catalog -n openshift-marketplace",
"Name: example-catalog Namespace: openshift-marketplace Labels: <none> Annotations: operatorframework.io/managed-by: marketplace-operator target.workload.openshift.io/management: {\"effect\": \"PreferredDuringScheduling\"} API Version: operators.coreos.com/v1alpha1 Kind: CatalogSource Status: Connection State: Address: example-catalog.openshift-marketplace.svc:50051 Last Connect: 2021-09-09T17:07:35Z Last Observed State: TRANSIENT_FAILURE Registry Service: Created At: 2021-09-09T17:05:45Z Port: 50051 Protocol: grpc Service Name: example-catalog Service Namespace: openshift-marketplace",
"oc get pods -n openshift-marketplace",
"NAME READY STATUS RESTARTS AGE certified-operators-cv9nn 1/1 Running 0 36m community-operators-6v8lp 1/1 Running 0 36m marketplace-operator-86bfc75f9b-jkgbc 1/1 Running 0 42m example-catalog-bwt8z 0/1 ImagePullBackOff 0 3m55s redhat-marketplace-57p8c 1/1 Running 0 36m redhat-operators-smxx8 1/1 Running 0 36m",
"oc describe pod example-catalog-bwt8z -n openshift-marketplace",
"Name: example-catalog-bwt8z Namespace: openshift-marketplace Priority: 0 Node: ci-ln-jyryyg2-f76d1-ggdbq-worker-b-vsxjd/10.0.128.2 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 48s default-scheduler Successfully assigned openshift-marketplace/example-catalog-bwt8z to ci-ln-jyryyf2-f76d1-fgdbq-worker-b-vsxjd Normal AddedInterface 47s multus Add eth0 [10.131.0.40/23] from openshift-sdn Normal BackOff 20s (x2 over 46s) kubelet Back-off pulling image \"quay.io/example-org/example-catalog:v1\" Warning Failed 20s (x2 over 46s) kubelet Error: ImagePullBackOff Normal Pulling 8s (x3 over 47s) kubelet Pulling image \"quay.io/example-org/example-catalog:v1\" Warning Failed 8s (x3 over 47s) kubelet Failed to pull image \"quay.io/example-org/example-catalog:v1\": rpc error: code = Unknown desc = reading manifest v1 in quay.io/example-org/example-catalog: unauthorized: access to the requested resource is not authorized Warning Failed 8s (x3 over 47s) kubelet Error: ErrImagePull",
"oc get clusteroperators",
"oc get pod -n <operator_namespace>",
"oc describe pod <operator_pod_name> -n <operator_namespace>",
"oc get pods -n <operator_namespace>",
"oc logs pod/<pod_name> -n <operator_namespace>",
"oc logs pod/<operator_pod_name> -c <container_name> -n <operator_namespace>",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl pods",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspectp <operator_pod_id>",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl ps --pod=<operator_pod_id>",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspect <container_id>",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id>",
"oc project <project_name>",
"oc get pods",
"oc status",
"skopeo inspect docker://<image_reference>",
"oc edit deployment/my-deployment",
"oc get pods -w",
"oc get events",
"oc logs <pod_name>",
"oc logs <pod_name> -c <container_name>",
"oc exec <pod_name> -- ls -alh /var/log",
"total 124K drwxr-xr-x. 1 root root 33 Aug 11 11:23 . drwxr-xr-x. 1 root root 28 Sep 6 2022 .. -rw-rw----. 1 root utmp 0 Jul 10 10:31 btmp -rw-r--r--. 1 root root 33K Jul 17 10:07 dnf.librepo.log -rw-r--r--. 1 root root 69K Jul 17 10:07 dnf.log -rw-r--r--. 1 root root 8.8K Jul 17 10:07 dnf.rpm.log -rw-r--r--. 1 root root 480 Jul 17 10:07 hawkey.log -rw-rw-r--. 1 root utmp 0 Jul 10 10:31 lastlog drwx------. 2 root root 23 Aug 11 11:14 openshift-apiserver drwx------. 2 root root 6 Jul 10 10:31 private drwxr-xr-x. 1 root root 22 Mar 9 08:05 rhsm -rw-rw-r--. 1 root utmp 0 Jul 10 10:31 wtmp",
"oc exec <pod_name> cat /var/log/<path_to_log>",
"2023-07-10T10:29:38+0000 INFO --- logging initialized --- 2023-07-10T10:29:38+0000 DDEBUG timer: config: 13 ms 2023-07-10T10:29:38+0000 DEBUG Loaded plugins: builddep, changelog, config-manager, copr, debug, debuginfo-install, download, generate_completion_cache, groups-manager, needs-restarting, playground, product-id, repoclosure, repodiff, repograph, repomanage, reposync, subscription-manager, uploadprofile 2023-07-10T10:29:38+0000 INFO Updating Subscription Management repositories. 2023-07-10T10:29:38+0000 INFO Unable to read consumer identity 2023-07-10T10:29:38+0000 INFO Subscription Manager is operating in container mode. 2023-07-10T10:29:38+0000 INFO",
"oc exec <pod_name> -c <container_name> ls /var/log",
"oc exec <pod_name> -c <container_name> cat /var/log/<path_to_log>",
"oc project <namespace>",
"oc rsh <pod_name> 1",
"oc rsh -c <container_name> pod/<pod_name>",
"oc port-forward <pod_name> <host_port>:<pod_port> 1",
"oc get deployment -n <project_name>",
"oc debug deployment/my-deployment --as-root -n <project_name>",
"oc get deploymentconfigs -n <project_name>",
"oc debug deploymentconfig/my-deployment-configuration --as-root -n <project_name>",
"oc cp <local_path> <pod_name>:/<path> -c <container_name> 1",
"oc cp <pod_name>:/<path> -c <container_name> <local_path> 1",
"oc get pods -w 1",
"oc logs -f pod/<application_name>-<build_number>-build",
"oc logs -f pod/<application_name>-<build_number>-deploy",
"oc logs -f pod/<application_name>-<build_number>-<random_string>",
"oc describe pod/my-app-1-akdlg",
"oc logs -f pod/my-app-1-akdlg",
"oc exec my-app-1-akdlg -- cat /var/log/my-application.log",
"oc debug dc/my-deployment-configuration --as-root -- cat /var/log/my-application.log",
"oc exec -it my-app-1-akdlg /bin/bash",
"Unable to attach or mount volumes: unmounted volumes=[sso-mysql-pvol], unattached volumes=[sso-mysql-pvol default-token-x4rzc]: timed out waiting for the condition Multi-Attach error for volume \"pvc-8837384d-69d7-40b2-b2e6-5df86943eef9\" Volume is already used by pod(s) sso-mysql-1-ns6b4",
"oc delete pod <old_pod> --force=true --grace-period=0",
"oc -n ns1 get service prometheus-example-app -o yaml",
"labels: app: prometheus-example-app",
"oc -n ns1 get servicemonitor prometheus-example-monitor -o yaml",
"apiVersion: v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 spec: endpoints: - interval: 30s port: web scheme: http selector: matchLabels: app: prometheus-example-app",
"oc -n openshift-user-workload-monitoring get pods",
"NAME READY STATUS RESTARTS AGE prometheus-operator-776fcbbd56-2nbfm 2/2 Running 0 132m prometheus-user-workload-0 5/5 Running 1 132m prometheus-user-workload-1 5/5 Running 1 132m thanos-ruler-user-workload-0 3/3 Running 0 132m thanos-ruler-user-workload-1 3/3 Running 0 132m",
"oc -n openshift-user-workload-monitoring logs prometheus-operator-776fcbbd56-2nbfm -c prometheus-operator",
"level=warn ts=2020-08-10T11:48:20.906739623Z caller=operator.go:1829 component=prometheusoperator msg=\"skipping servicemonitor\" error=\"it accesses file system via bearer token file which Prometheus specification prohibits\" servicemonitor=eagle/eagle namespace=openshift-user-workload-monitoring prometheus=user-workload",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheusOperator: logLevel: debug",
"oc -n openshift-user-workload-monitoring get deploy prometheus-operator -o yaml | grep \"log-level\"",
"- --log-level=debug",
"oc -n openshift-user-workload-monitoring get pods",
"topk(10, max by(namespace, job) (topk by(namespace, job) (1, scrape_samples_post_metric_relabeling)))",
"topk(10, sum by(namespace, job) (sum_over_time(scrape_series_added[1h])))",
"HOST=USD(oc -n openshift-monitoring get route prometheus-k8s -ojsonpath='{.status.ingress[].host}')",
"TOKEN=USD(oc whoami -t)",
"curl -H \"Authorization: Bearer USDTOKEN\" -k \"https://USDHOST/api/v1/status/tsdb\"",
"\"status\": \"success\",\"data\":{\"headStats\":{\"numSeries\":507473, \"numLabelPairs\":19832,\"chunkCount\":946298,\"minTime\":1712253600010, \"maxTime\":1712257935346},\"seriesCountByMetricName\": [{\"name\":\"etcd_request_duration_seconds_bucket\",\"value\":51840}, {\"name\":\"apiserver_request_sli_duration_seconds_bucket\",\"value\":47718},",
"oc <command> --loglevel <log_level>",
"oc whoami -t",
"sha256~RCV3Qcn7H-OEfqCGVI0CvnZ6",
"Can't get tokens . Can't get access tokens .",
"E: Failed to create cluster: The sts_user_role is not linked to account '1oNl'. Please create a user role and link it to the account.",
"rosa list ocm-role",
"I: Fetching ocm roles ROLE NAME ROLE ARN LINKED ADMIN ManagedOpenShift-OCM-Role-1158 arn:aws:iam::2066:role/ManagedOpenShift-OCM-Role-1158 No No",
"rosa list user-role",
"I: Fetching user roles ROLE NAME ROLE ARN LINKED ManagedOpenShift-User.osdocs-Role arn:aws:iam::2066:role/ManagedOpenShift-User.osdocs-Role Yes",
"rosa create ocm-role",
"rosa create ocm-role --admin",
"I: Creating ocm role ? Role prefix: ManagedOpenShift 1 ? Enable admin capabilities for the OCM role (optional): No 2 ? Permissions boundary ARN (optional): 3 ? Role Path (optional): 4 ? Role creation mode: auto 5 I: Creating role using 'arn:aws:iam::<ARN>:user/<UserName>' ? Create the 'ManagedOpenShift-OCM-Role-182' role? Yes 6 I: Created role 'ManagedOpenShift-OCM-Role-182' with ARN 'arn:aws:iam::<ARN>:role/ManagedOpenShift-OCM-Role-182' I: Linking OCM role ? OCM Role ARN: arn:aws:iam::<ARN>:role/ManagedOpenShift-OCM-Role-182 7 ? Link the 'arn:aws:iam::<ARN>:role/ManagedOpenShift-OCM-Role-182' role with organization '<AWS ARN>'? Yes 8 I: Successfully linked role-arn 'arn:aws:iam::<ARN>:role/ManagedOpenShift-OCM-Role-182' with organization account '<AWS ARN>'",
"rosa create user-role",
"I: Creating User role ? Role prefix: ManagedOpenShift 1 ? Permissions boundary ARN (optional): 2 ? Role Path (optional): 3 ? Role creation mode: auto 4 I: Creating ocm user role using 'arn:aws:iam::2066:user' ? Create the 'ManagedOpenShift-User.osdocs-Role' role? Yes 5 I: Created role 'ManagedOpenShift-User.osdocs-Role' with ARN 'arn:aws:iam::2066:role/ManagedOpenShift-User.osdocs-Role' I: Linking User role ? User Role ARN: arn:aws:iam::2066:role/ManagedOpenShift-User.osdocs-Role ? Link the 'arn:aws:iam::2066:role/ManagedOpenShift-User.osdocs-Role' role with account '1AGE'? Yes 6 I: Successfully linked role ARN 'arn:aws:iam::2066:role/ManagedOpenShift-User.osdocs-Role' with account '1AGE'",
"rosa list ocm-role",
"rosa list user-role",
"rosa link ocm-role --role-arn <arn>",
"I: Linking OCM role ? Link the '<AWS ACCOUNT ID>` role with organization '<ORG ID>'? Yes I: Successfully linked role-arn '<AWS ACCOUNT ID>' with organization account '<ORG ID>'",
"rosa link user-role --role-arn <arn>",
"I: Linking User role ? Link the 'arn:aws:iam::<ARN>:role/ManagedOpenShift-User-Role-125' role with organization '<AWS ID>'? Yes I: Successfully linked role-arn 'arn:aws:iam::<ARN>:role/ManagedOpenShift-User-Role-125' with organization account '<AWS ID>'",
"rosa create --profile <aws_profile> ocm-role",
"rosa create --profile <aws_profile> user-role",
"rosa create --profile <aws_profile> account-roles",
"rosa describe cluster -c <my_cluster_name> --debug",
"Failed to create cluster: Unable to create cluster spec: Failed to get access keys for user 'osdCcsAdmin': NoSuchEntity: The user with name osdCcsAdmin cannot be found.",
"rosa init --delete",
"rosa init",
"Error: Error creating network Load Balancer: AccessDenied: User: arn:aws:sts::xxxxxxxxxxxx:assumed-role/ManagedOpenShift-Installer-Role/xxxxxxxxxxxxxxxxxxx is not authorized to perform: iam:CreateServiceLinkedRole on resource: arn:aws:iam::xxxxxxxxxxxx:role/aws-service-role/elasticloadbalancing.amazonaws.com/AWSServiceRoleForElasticLoadBalancing\"",
"aws iam get-role --role-name \"AWSServiceRoleForElasticLoadBalancing\" || aws iam create-service-linked-role --aws-service-name \"elasticloadbalancing.amazonaws.com\"",
"Error deleting cluster CLUSTERS-MGMT-400: Failed to delete cluster <hash>: sts_user_role is not linked to your account. sts_ocm_role is linked to your organization <org number> which requires sts_user_role to be linked to your Red Hat account <account ID>.Please create a user role and link it to the account: User Account <account ID> is not authorized to perform STS cluster operations Operation ID: b0572d6e-fe54-499b-8c97-46bf6890011c",
"E: Failed to delete cluster <hash>: sts_user_role is not linked to your account. sts_ocm_role is linked to your organization <org_number> which requires sts_user_role to be linked to your Red Hat account <account_id>.Please create a user role and link it to the account: User Account <account ID> is not authorized to perform STS cluster operations",
"rosa create user-role",
"I: Successfully linked role ARN <user role ARN> with account <account ID>"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html-single/support/index |
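The commands in the list above capture traffic from a specific container by resolving its process ID with crictl and entering its network namespace with nsenter. The following is a minimal sketch that strings those steps together; the node name, container name, interface, and capture file names are illustrative placeholders, steps 1 and 2 are assumed to run from a node debug shell (oc debug node/<node>), and step 3 runs from a workstation with oc access.

# 1. From the node debug shell, resolve the container ID and its host PID.
CONTAINER_ID=$(chroot /host crictl ps --name my-container -q | head -n1)
PID=$(chroot /host crictl inspect --output yaml "$CONTAINER_ID" | grep 'pid' | awk '{print $2}')

# 2. Capture inside that container's network namespace, writing under /host/var/tmp.
nsenter -n -t "$PID" -- tcpdump -nn -i ens5 \
  -w /host/var/tmp/my-cluster-node-my-container_$(date +%d_%m_%Y-%H_%M_%S-%Z).pcap

# 3. From a workstation, copy the finished capture off the node.
oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/my-tcpdump-capture-file.pcap' \
  > /tmp/my-tcpdump-capture-file.pcap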
Chapter 12. File System Check | Chapter 12. File System Check File systems may be checked for consistency, and optionally repaired, with file system-specific userspace tools. These tools are often referred to as fsck tools, where fsck is a shortened version of file system check . Note These file system checkers only guarantee metadata consistency across the file system; they have no awareness of the actual data contained within the file system and are not data recovery tools. File system inconsistencies can occur for various reasons, including but not limited to hardware errors, storage administration errors, and software bugs. Before modern metadata-journaling file systems became common, a file system check was required any time a system crashed or lost power. This was because a file system update could have been interrupted, leading to an inconsistent state. As a result, a file system check is traditionally run on each file system listed in /etc/fstab at boot-time. For journaling file systems, this is usually a very short operation, because the file system's metadata journaling ensures consistency even after a crash. However, there are times when a file system inconsistency or corruption may occur, even for journaling file systems. When this happens, the file system checker must be used to repair the file system. The following provides best practices and other useful information when performing this procedure. Important Red Hat does not recommend this unless the machine does not boot, the file system is extremely large, or the file system is on remote storage. It is possible to disable file system check at boot by setting the sixth field in /etc/fstab to 0 . 12.1. Best Practices for fsck Generally, running the file system check and repair tool can be expected to automatically repair at least some of the inconsistencies it finds. In some cases, severely damaged inodes or directories may be discarded if they cannot be repaired. Significant changes to the file system may occur. To ensure that unexpected or undesirable changes are not permanently made, perform the following precautionary steps: Dry run Most file system checkers have a mode of operation which checks but does not repair the file system. In this mode, the checker prints any errors that it finds and actions that it would have taken, without actually modifying the file system. Note Later phases of consistency checking may print extra errors as it discovers inconsistencies which would have been fixed in early phases if it were running in repair mode. Operate first on a file system image Most file systems support the creation of a metadata image , a sparse copy of the file system which contains only metadata. Because file system checkers operate only on metadata, such an image can be used to perform a dry run of an actual file system repair, to evaluate what changes would actually be made. If the changes are acceptable, the repair can then be performed on the file system itself. Note Severely damaged file systems may cause problems with metadata image creation. Save a file system image for support investigations A pre-repair file system metadata image can often be useful for support investigations if there is a possibility that the corruption was due to a software bug. Patterns of corruption present in the pre-repair image may aid in root-cause analysis. Operate only on unmounted file systems A file system repair must be run only on unmounted file systems. 
The tool must have sole access to the file system or further damage may result. Most file system tools enforce this requirement in repair mode, although some only support check-only mode on a mounted file system. If check-only mode is run on a mounted file system, it may find spurious errors that would not be found when run on an unmounted file system. Disk errors File system check tools cannot repair hardware problems. A file system must be fully readable and writable if repair is to operate successfully. If a file system was corrupted due to a hardware error, the file system must first be moved to a good disk, for example with the dd(8) utility. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/ch-fsck |
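To make the dry-run and metadata-image practices above concrete, the following sketch shows one way to rehearse a repair on an ext4 file system before touching the real device; the device name /dev/sdb1 and the image path are assumptions, and an XFS file system would use xfs_repair -n and xfs_metadump instead.

# Dry run: check the unmounted file system, report problems, change nothing.
umount /dev/sdb1
e2fsck -fn /dev/sdb1

# Create a sparse raw metadata image and rehearse the repair against the copy.
e2image -r /dev/sdb1 /var/tmp/sdb1.e2i
e2fsck -fy /var/tmp/sdb1.e2i

# Keep the pre-repair image for support investigations, then repair the real
# device only if the rehearsed changes look acceptable.
e2fsck -fy /dev/sdb1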
Chapter 5. EgressIP [k8s.ovn.org/v1] | Chapter 5. EgressIP [k8s.ovn.org/v1] Description EgressIP is a CRD allowing the user to define a fixed source IP for all egress traffic originating from any pods which match the EgressIP resource according to its spec definition. Type object Required spec 5.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object Specification of the desired behavior of EgressIP. status object Observed status of EgressIP. Read-only. 5.1.1. .spec Description Specification of the desired behavior of EgressIP. Type object Required egressIPs namespaceSelector Property Type Description egressIPs array (string) EgressIPs is the list of egress IP addresses requested. Can be IPv4 and/or IPv6. This field is mandatory. namespaceSelector object NamespaceSelector applies the egress IP only to the namespace(s) whose label matches this definition. This field is mandatory. podSelector object PodSelector applies the egress IP only to the pods whose label matches this definition. This field is optional, and in case it is not set: results in the egress IP being applied to all pods in the namespace(s) matched by the NamespaceSelector. In case it is set: is intersected with the NamespaceSelector, thus applying the egress IP to the pods (in the namespace(s) already matched by the NamespaceSelector) which match this pod selector. 5.1.2. .spec.namespaceSelector Description NamespaceSelector applies the egress IP only to the namespace(s) whose label matches this definition. This field is mandatory. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 5.1.3. .spec.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 5.1.4. .spec.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. 
If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 5.1.5. .spec.podSelector Description PodSelector applies the egress IP only to the pods whose label matches this definition. This field is optional, and in case it is not set: results in the egress IP being applied to all pods in the namespace(s) matched by the NamespaceSelector. In case it is set: is intersected with the NamespaceSelector, thus applying the egress IP to the pods (in the namespace(s) already matched by the NamespaceSelector) which match this pod selector. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 5.1.6. .spec.podSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 5.1.7. .spec.podSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 5.1.8. .status Description Observed status of EgressIP. Read-only. Type object Required items Property Type Description items array The list of assigned egress IPs and their corresponding node assignment. items[] object The per node status, for those egress IPs who have been assigned. 5.1.9. .status.items Description The list of assigned egress IPs and their corresponding node assignment. Type array 5.1.10. .status.items[] Description The per node status, for those egress IPs who have been assigned. Type object Required egressIP node Property Type Description egressIP string Assigned egress IP node string Assigned node name 5.2. API endpoints The following API endpoints are available: /apis/k8s.ovn.org/v1/egressips DELETE : delete collection of EgressIP GET : list objects of kind EgressIP POST : create an EgressIP /apis/k8s.ovn.org/v1/egressips/{name} DELETE : delete an EgressIP GET : read the specified EgressIP PATCH : partially update the specified EgressIP PUT : replace the specified EgressIP 5.2.1. /apis/k8s.ovn.org/v1/egressips Table 5.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of EgressIP Table 5.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". 
Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. 
It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 5.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind EgressIP Table 5.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". 
This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. 
- resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 5.5. HTTP responses HTTP code Reponse body 200 - OK EgressIPList schema 401 - Unauthorized Empty HTTP method POST Description create an EgressIP Table 5.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.7. Body parameters Parameter Type Description body EgressIP schema Table 5.8. HTTP responses HTTP code Reponse body 200 - OK EgressIP schema 201 - Created EgressIP schema 202 - Accepted EgressIP schema 401 - Unauthorized Empty 5.2.2. /apis/k8s.ovn.org/v1/egressips/{name} Table 5.9. Global path parameters Parameter Type Description name string name of the EgressIP Table 5.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an EgressIP Table 5.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. 
If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 5.12. Body parameters Parameter Type Description body DeleteOptions schema Table 5.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified EgressIP Table 5.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 5.15. HTTP responses HTTP code Reponse body 200 - OK EgressIP schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified EgressIP Table 5.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 5.17. Body parameters Parameter Type Description body Patch schema Table 5.18. HTTP responses HTTP code Reponse body 200 - OK EgressIP schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified EgressIP Table 5.19. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.20. Body parameters Parameter Type Description body EgressIP schema Table 5.21. HTTP responses HTTP code Reponse body 200 - OK EgressIP schema 201 - Created EgressIP schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/network_apis/egressip-k8s-ovn-org-v1 |
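The schema above lists the required and optional fields but no complete manifest, so the following is a minimal example EgressIP custom resource; the resource name, IP addresses, and label selectors are illustrative assumptions.

apiVersion: k8s.ovn.org/v1
kind: EgressIP
metadata:
  name: egressip-prod
spec:
  egressIPs:
    - 192.168.126.10
    - 192.168.126.11
  namespaceSelector:
    matchLabels:
      env: production
  podSelector:
    matchExpressions:
      - key: app
        operator: In
        values:
          - web

Because podSelector is set here, it is intersected with namespaceSelector, so only pods labeled app=web in namespaces labeled env=production use the requested egress IPs; the status.items list then reports the node to which each address was assigned.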
Chapter 7. About Logging | Chapter 7. About Logging As a cluster administrator, you can deploy logging on an OpenShift Container Platform cluster, and use it to collect and aggregate node system audit logs, application container logs, and infrastructure logs. You can forward logs to your chosen log outputs, including on-cluster, Red Hat managed log storage. You can also visualize your log data in the OpenShift Container Platform web console, or the Kibana web console, depending on your deployed log storage solution. Note In OpenShift Container Platform 4.16, the Elasticsearch Operator is only supported for ServiceMesh, Tracing, and Kiali. This Operator is planned for removal from the OpenShift Operator Catalog in November 2025. The reason for removal is that the Elasticsearch Operator is no longer supported for log storage, and Kibana is no longer supported in OpenShift Container Platform 4.16 and later versions. For more information on lifecycle dates, see Platform Agnostic Operators . OpenShift Container Platform cluster administrators can deploy logging by using Operators. For information, see Installing logging . The Operators are responsible for deploying, upgrading, and maintaining logging. After the Operators are installed, you can create a ClusterLogging custom resource (CR) to schedule logging pods and other resources necessary to support logging. You can also create a ClusterLogForwarder CR to specify which logs are collected, how they are transformed, and where they are forwarded to. Note Because the internal OpenShift Container Platform Elasticsearch log store does not provide secure storage for audit logs, audit logs are not stored in the internal Elasticsearch instance by default. If you want to send the audit logs to the default internal Elasticsearch log store, for example to view the audit logs in Kibana, you must use the Log Forwarding API as described in Forward audit logs to the log store . 7.1. Logging architecture The major components of the logging are: Collector The collector is a daemonset that deploys pods to each OpenShift Container Platform node. It collects log data from each node, transforms the data, and forwards it to configured outputs. You can use the Vector collector or the legacy Fluentd collector. Note Fluentd is deprecated and is planned to be removed in a future release. Red Hat provides bug fixes and support for this feature during the current release lifecycle, but this feature no longer receives enhancements. As an alternative to Fluentd, you can use Vector instead. Log store The log store stores log data for analysis and is the default output for the log forwarder. You can use the default LokiStack log store, the legacy Elasticsearch log store, or forward logs to additional external log stores. Note The Logging 5.9 release does not contain an updated version of the OpenShift Elasticsearch Operator. If you currently use the OpenShift Elasticsearch Operator released with Logging 5.8, it will continue to work with Logging until the EOL of Logging 5.8. As an alternative to using the OpenShift Elasticsearch Operator to manage the default log storage, you can use the Loki Operator. For more information on the Logging lifecycle dates, see Platform Agnostic Operators . Visualization You can use a UI component to view a visual representation of your log data. The UI provides a graphical interface to search, query, and view stored logs. The OpenShift Container Platform web console UI is provided by enabling the OpenShift Container Platform console plugin. 
Note The Kibana web console is now deprecated is planned to be removed in a future logging release. Logging collects container logs and node logs. These are categorized into types: Application logs Container logs generated by user applications running in the cluster, except infrastructure container applications. Infrastructure logs Container logs generated by infrastructure namespaces: openshift* , kube* , or default , as well as journald messages from nodes. Audit logs Logs generated by auditd, the node audit system, which are stored in the /var/log/audit/audit.log file, and logs from the auditd , kube-apiserver , openshift-apiserver services, as well as the ovn project if enabled. Additional resources Log visualization with the web console 7.2. About deploying logging Administrators can deploy the logging by using the OpenShift Container Platform web console or the OpenShift CLI ( oc ) to install the logging Operators. The Operators are responsible for deploying, upgrading, and maintaining the logging. Administrators and application developers can view the logs of the projects for which they have view access. 7.2.1. Logging custom resources You can configure your logging deployment with custom resource (CR) YAML files implemented by each Operator. Red Hat OpenShift Logging Operator : ClusterLogging (CL) - After the Operators are installed, you create a ClusterLogging custom resource (CR) to schedule logging pods and other resources necessary to support the logging. The ClusterLogging CR deploys the collector and forwarder, which currently are both implemented by a daemonset running on each node. The Red Hat OpenShift Logging Operator watches the ClusterLogging CR and adjusts the logging deployment accordingly. ClusterLogForwarder (CLF) - Generates collector configuration to forward logs per user configuration. Loki Operator : LokiStack - Controls the Loki cluster as log store and the web proxy with OpenShift Container Platform authentication integration to enforce multi-tenancy. OpenShift Elasticsearch Operator : Note These CRs are generated and managed by the OpenShift Elasticsearch Operator. Manual changes cannot be made without being overwritten by the Operator. ElasticSearch - Configure and deploy an Elasticsearch instance as the default log store. Kibana - Configure and deploy Kibana instance to search, query and view logs. 7.2.2. About JSON OpenShift Container Platform Logging You can use JSON logging to configure the Log Forwarding API to parse JSON strings into a structured object. You can perform the following tasks: Parse JSON logs Configure JSON log data for Elasticsearch Forward JSON logs to the Elasticsearch log store 7.2.3. About collecting and storing Kubernetes events The OpenShift Container Platform Event Router is a pod that watches Kubernetes events and logs them for collection by OpenShift Container Platform Logging. You must manually deploy the Event Router. For information, see About collecting and storing Kubernetes events . 7.2.4. About troubleshooting OpenShift Container Platform Logging You can troubleshoot the logging issues by performing the following tasks: Viewing logging status Viewing the status of the log store Understanding logging alerts Collecting logging data for Red Hat Support Troubleshooting for critical alerts 7.2.5. About exporting fields The logging system exports fields. Exported fields are present in the log records and are available for searching from Elasticsearch and Kibana. For information, see About exporting fields . 7.2.6. 
About event routing The Event Router is a pod that watches OpenShift Container Platform events so they can be collected by logging. The Event Router collects events from all projects and writes them to STDOUT . Fluentd collects those events and forwards them into the OpenShift Container Platform Elasticsearch instance. Elasticsearch indexes the events to the infra index. You must manually deploy the Event Router. For information, see Collecting and storing Kubernetes events . | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/logging/cluster-logging |
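As a sketch of how the custom resources described above fit together, the following is one possible minimal ClusterLogForwarder that sends application and infrastructure logs to the default Red Hat managed log store; it assumes the Red Hat OpenShift Logging Operator is installed and that the default log store has been configured through a ClusterLogging resource, and the pipeline name is illustrative.

apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  pipelines:
    - name: forward-to-default-store
      inputRefs:
        - application
        - infrastructure
      outputRefs:
        - default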
Chapter 3. Loading configuration values from external sources | Chapter 3. Loading configuration values from external sources Use configuration provider plugins to load configuration data from external sources. The providers operate independently of AMQ Streams. You can use them to load configuration data for all Kafka components, including producers and consumers. Use them, for example, to provide the credentials for Kafka Connect connector configuration. OpenShift Configuration Provider The OpenShift Configuration Provider plugin loads configuration data from OpenShift secrets or config maps. Suppose you have a Secret object that's managed outside the Kafka namespace, or outside the Kafka cluster. The OpenShift Configuration Provider allows you to reference the values of the secret in your configuration without extracting the files. You just need to tell the provider what secret to use and provide access rights. The provider loads the data without needing to restart the Kafka component, even when using a new Secret or ConfigMap object. This capability avoids disruption when a Kafka Connect instance hosts multiple connectors. Environment Variables Configuration Provider The Environment Variables Configuration Provider plugin loads configuration data from environment variables. The values for the environment variables can be mapped from secrets or config maps. You can use the Environment Variables Configuration Provider, for example, to load certificates or JAAS configuration from environment variables mapped from OpenShift secrets. Note OpenShift Configuration Provider can't use mounted files. For example, it can't load values that need the location of a truststore or keystore. Instead, you can mount config maps or secrets into a Kafka Connect pod as environment variables or volumes. You can use the Environment Variables Configuration Provider to load values for environment variables. You add configuration using the externalConfiguration property in KafkaConnect.spec . You don't need to set up access rights with this approach. However, Kafka Connect will need a restart when using a new Secret or ConfigMap for a connector. This will cause disruption to all the Kafka Connect instance's connectors. 3.1. Loading configuration values from a config map This procedure shows how to use the OpenShift Configuration Provider plugin. In the procedure, an external ConfigMap object provides configuration properties for a connector. Prerequisites An OpenShift cluster is available. A Kafka cluster is running. The Cluster Operator is running. Procedure Create a ConfigMap or Secret that contains the configuration properties. In this example, a ConfigMap object named my-connector-configuration contains connector properties: Example ConfigMap with connector properties apiVersion: v1 kind: ConfigMap metadata: name: my-connector-configuration data: option1: value1 option2: value2 Specify the OpenShift Configuration Provider in the Kafka Connect configuration. The specification shown here can support loading values from secrets and config maps. Example Kafka Connect configuration to enable the OpenShift Configuration Provider apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect annotations: strimzi.io/use-connector-resources: "true" spec: # ... config: # ... config.providers: secrets,configmaps 1 config.providers.secrets.class: io.strimzi.kafka.KubernetesSecretConfigProvider 2 config.providers.configmaps.class: io.strimzi.kafka.KubernetesConfigMapConfigProvider 3 # ... 
1 The alias for the configuration provider is used to define other configuration parameters. The provider parameters use the alias from config.providers, taking the form config.providers.${alias}.class. 2 KubernetesSecretConfigProvider provides values from secrets. 3 KubernetesConfigMapConfigProvider provides values from config maps. Create or update the resource to enable the provider. oc apply -f <kafka_connect_configuration_file> Create a role that permits access to the values in the external config map. Example role to access values from a config map apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: connector-configuration-role rules: - apiGroups: [""] resources: ["configmaps"] resourceNames: ["my-connector-configuration"] verbs: ["get"] # ... The rule gives the role permission to access the my-connector-configuration config map. Create a role binding to permit access to the namespace that contains the config map. Example role binding to access the namespace that contains the config map apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: connector-configuration-role-binding subjects: - kind: ServiceAccount name: my-connect-connect namespace: my-project roleRef: kind: Role name: connector-configuration-role apiGroup: rbac.authorization.k8s.io # ... The role binding gives the role permission to access the my-project namespace. The service account must be the same one used by the Kafka Connect deployment. The service account name format is <cluster_name>-connect, where <cluster_name> is the name of the KafkaConnect custom resource. Reference the config map in the connector configuration. Example connector configuration referencing the config map apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-connector labels: strimzi.io/cluster: my-connect spec: # ... config: option: ${configmaps:my-project/my-connector-configuration:option1} # ... # ... Placeholders for the property values in the config map are referenced in the connector configuration. The placeholder structure is configmaps:<path_and_file_name>:<property>. KubernetesConfigMapConfigProvider reads and extracts the option1 property value from the external config map. 3.2. Loading configuration values from environment variables This procedure shows how to use the Environment Variables Configuration Provider plugin. In the procedure, environment variables provide configuration properties for a connector. A database password is specified as an environment variable. Prerequisites An OpenShift cluster is available. A Kafka cluster is running. The Cluster Operator is running. Procedure Specify the Environment Variables Configuration Provider in the Kafka Connect configuration. Define environment variables using the externalConfiguration property. Example Kafka Connect configuration to enable the Environment Variables Configuration Provider apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect annotations: strimzi.io/use-connector-resources: "true" spec: # ... config: # ... config.providers: env 1 config.providers.env.class: io.strimzi.kafka.EnvVarConfigProvider 2 # ... externalConfiguration: env: - name: DB_PASSWORD 3 valueFrom: secretKeyRef: name: db-creds 4 key: dbPassword 5 # ... 1 The alias for the configuration provider is used to define other configuration parameters. The provider parameters use the alias from config.providers, taking the form config.providers.${alias}.class.
2 EnvVarConfigProvider provides values from environment variables. 3 The DB_PASSWORD environment variable takes a password value from a secret. 4 The name of the secret containing the predefined password. 5 The key for the password stored inside the secret. Create or update the resource to enable the provider. oc apply -f <kafka_connect_configuration_file> Reference the environment variable in the connector configuration. Example connector configuration referencing the environment variable apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-connector labels: strimzi.io/cluster: my-connect spec: # ... config: option: ${env:DB_PASSWORD} # ... # ... | [
"apiVersion: v1 kind: ConfigMap metadata: name: my-connector-configuration data: option1: value1 option2: value2",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect annotations: strimzi.io/use-connector-resources: \"true\" spec: # config: # config.providers: secrets,configmaps 1 config.providers.secrets.class: io.strimzi.kafka.KubernetesSecretConfigProvider 2 config.providers.configmaps.class: io.strimzi.kafka.KubernetesConfigMapConfigProvider 3 #",
"apply -f <kafka_connect_configuration_file>",
"apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: connector-configuration-role rules: - apiGroups: [\"\"] resources: [\"configmaps\"] resourceNames: [\"my-connector-configuration\"] verbs: [\"get\"]",
"apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: connector-configuration-role-binding subjects: - kind: ServiceAccount name: my-connect-connect namespace: my-project roleRef: kind: Role name: connector-configuration-role apiGroup: rbac.authorization.k8s.io",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-connector labels: strimzi.io/cluster: my-connect spec: # config: option: USD{configmaps:my-project/my-connector-configuration:option1} #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect annotations: strimzi.io/use-connector-resources: \"true\" spec: # config: # config.providers: env 1 config.providers.env.class: io.strimzi.kafka.EnvVarConfigProvider 2 # externalConfiguration: env: - name: DB_PASSWORD 3 valueFrom: secretKeyRef: name: db-creds 4 key: dbPassword 5 #",
"apply -f <kafka_connect_configuration_file>",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-connector labels: strimzi.io/cluster: my-connect spec: # config: option: USD{env:DB_PASSWORD} #"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.1/html/configuring_amq_streams_on_openshift/assembly-loading-config-with-providers-str |
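As a complement to the procedure above, the my-connector-configuration config map can also be created directly from the command line instead of from a YAML manifest. This is a sketch only; it assumes the my-project namespace used in the role binding example:
oc create configmap my-connector-configuration --from-literal=option1=value1 --from-literal=option2=value2 -n my-project
oc get configmap my-connector-configuration -n my-project -o yaml
The resulting object is equivalent to the ConfigMap shown at the start of the procedure, and the connector placeholder ${configmaps:my-project/my-connector-configuration:option1} resolves against it in the same way.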
Chapter 12. Kernel | Chapter 12. Kernel Kernel version in RHEL 7.4 Red Hat Enterprise Linux 7.4 is distributed with the kernel version 3.10.0-693. (BZ#1801759) The NVMe driver rebased to kernel version 4.10 The NVM-Express kernel driver has been updated to upstream kernel version 4.10, which provides a number of bug fixes and enhancements over the version. The most notable change is: the initial NVMe-over-Fabrics transport implementation, which uses existing RDMA NICs (Infiniband, RoCE, iWARP) and existing NVMe SSDs, has been added to the driver, but does not include support for DIF/DIX and multipathing. (BZ#1383834) crash rebased to version 7.1.9 With this update, the crash packages have been upgraded to upstream version 7.1.9, which provides a number of bug fixes and enhancements over the version. (BZ# 1393534 ) crash now analyzes vmcore dumps for IBM Power ISA 3.0 The crash utility has been updated to correspond with changes in the kernel page table related to IBM Power ISA version 3.0 architecture. As a result, the crash utility is now able to analyze vmcore dumps of kernels on IBM Power ISA 3.0 systems. (BZ#1368711) crash updated for IBM Power and for the little-endian variant of IBM Power The crash packages have been updated to support IBM Power Systems and the little-endian variant of IBM Power Systems. These packages provide the core analysis suite, which is a self-contained tool that can be used to investigate live systems, as well as kernel core dumps created by the kexec-tools packages or the Red Hat Enterprise Linux kernel. (BZ#1384944) memkind updated to version 1.3.0 The memkind library has been updated to version 1.3.0, which provides several bug fixes and enhancements over the version. Notable changes include: A logging mechanism has been introduced. Hardware Locality (hwloc) has been integrated, and can be turned on using the --with-hwloc option. The symbols exposed by libmemkind.so have been cleaned up. For example, libnuma and jemalloc are no longer exposed. AutoHBW files have been moved to to the /memkind/autohbw/ directory, code has been refactored and tests have been added to appropriate scenarios. Flags improving security have been added to memkind . The flags can be turned off with the --disable-secure configure time option. The configuration of jemalloc has been changed to turn off unused features. Several symbols have been deprecated. For details, see the Deprecated Functionality part. (BZ#1384549) Jitter Entropy RNG added to the kernel This update adds the Jitter Entropy Random Number Generator (RNG), which collects entropy through CPU timing differences to the Linux kernel. This RNG is by default available through the algif_rng interface. The generated numbers can be added back to the kernel through the /dev/random file, which makes these numbers available to other /dev/random users. As a result, the operating system now has more sources of entropy available. (BZ#1270982) /dev/random now shows notifications and warnings for the urandom pool initialization With this update, the random driver (/dev/random), has been modified to print a message when the nonblocking pool (used by /dev/urandom) is initialized. (BZ#1298643) fjes updated to version 1.2 The fjes driver has been updated to version 1.2, which includes a number of bug fixes and enhancements over the version. (BZ#1388716) Full support for user name spaces User name spaces (userns) that were introduced in Red Hat Enterprise Linux 7.2 as Technology Preview are now fully supported. 
This feature provides additional security to servers running Linux containers by improving isolation between the host and the containers. Administrators of containers are no longer able to perform administrative operations on the host, which increases security. The default value of user.max_user_namespaces is 0. You can set it to a non-zero value, which stops the applications that malfunction. It is recommended that user.max_user_namespaces is set to a large value, such as 15000, so that the value does not need to be revisited in the normal course of operation. (BZ#1340238) makedumpfile updated to version 1.6.1 The makedumpfile package has been upgraded to upstream version 1.6.1 as part of the kexec-tools 2.0.14 rpm, which provides a number of bug fixes and enhancements over the previous version. (BZ#1384945) qat updated to the latest upstream version The qat driver has been updated to the latest upstream version, which provides a number of bug fixes and enhancements over the previous version. Notable bug fixes and enhancements: Added support for the Diffie-Hellman (DH) software Added support for Elliptic Curve Diffie-Hellman (ECDH) software Added support for Error-correcting Code (ECC) software for curve P-192 and P-256 (BZ#1382849) Addition of intel-cmt-cat package The pqos utility provided in this package enables administrators to monitor and manipulate L3 cache to improve utility and performance. The tool bypasses the kernel API and operates on the hardware directly, which requires that CPU pinning is in use with the target process before use. (BZ#1315489) i40e now supports trusted and untrusted VFs This update adds support for both trusted and untrusted virtual functions into the i40e NIC driver. (BZ#1384456) Kernel support for OVS 802.1ad (QinQ) This update provides the ability to use two VLAN tags with Open vSwitch (OVS) by enabling the 802.1ad (QinQ) networking standard in kernel. Note that the user-space part of this update is provided by the openvswitch package. (BZ#1155732) Live post-copy migration support for shared memory and hugetlbfs This update enhances the kernel to enable live post-copy migration to support shared memory and the hugetlbfs file system. To benefit from this feature: Configure 2MiB huge pages on a host, Create a guest VM with 2MiB huge pages, Run the guest VM and a stress-test application to test the memory, Live-migrate the guest VM with post-copy. (BZ#1373606) New package: dbxtool The dbxtool package provides a command-line utility and a one-shot systemd service for applying UEFI Secure Boot DBX updates. (BZ#1078990) mlx5 now supports SRIOV-trusted VFs This update adds support for Single Root I/O Virtualization (SRIOV)-trusted virtual functions (VFs) to the mlx5 driver. (BZ#1383280) rwsem performance updates from the 4.9 kernel backported With this update, most upstream R/W semaphores (rwsem) performance-related changes up to the Linux kernel version 4.9 have been backported into the Linux kernel while maintaining kernel Application Binary Interface (kABI). Notable changes include: Writer-optimistic spinning, which reduces locking latency and improves locking performance. Lock-less waiter wakeup without holding internal spinlock. (BZ#1416924) getrandom added to the Linux kernel This update adds the getrandom system call to the Linux kernel. As a result, the user space can now request randomness from the same non-blocking entropy pool used by /dev/urandom, and the user space can block until at least 128 bits of entropy has been accumulated in that pool.
(BZ#1432218) A new status line, Umask, has been included in /proc/<PID>/status Previously, it was not possible to read the process umask without modification. Without this change, a library cannot read the umask safely, especially if the main program is multithreaded. The proc filesystem (procfs) now exposes the umask in the /proc/<PID>/status file. The format is Umask: OOOO , where OOOO is the octal representation of the umask of the task. (BZ#1391413) Intel(R) Omni-Path Architecture (OPA) host software Intel(R) Omni-Path Architecture (OPA) host software has been fully supported since Red Hat Enterprise Linux 7.3. Intel(R) OPA provides Host Fabric Interface (HFI) hardware with initialization and setup for high performance data transfers (high bandwidth, high message rate, low latency) between compute and I/O nodes in a clustered environment. For instructions on how to obtain Intel(R) Omni-Path Architecture documentation, see https://access.redhat.com/articles/2039623 . (BZ#1459948) The XTS-AES key verification now meets the FIPS 140-2 requirements With this update, while running Red Hat Enterprise Linux in FIPS mode and using kernel XTS-AES key verification, the AES key is forced to be different from the tweak key. This ensures that the FIPS 140-2 IG A.9 requirements are met. Additionally, the XEX-based tweaked-codebook mode with ciphertext stealing (XTS) test vectors now could be marked to be skipped. (BZ#1314179) mlx5 is now supported on IBM z Systems The Mellanox mlx5 device driver is now also supported for Linux on IBM z Systems and can be used for Ethernet TCP/IP network. (BZ#1394197) The perf tool now supports processor cache-line contention detection The perf tool now provides the c2c subcommand for Shared Data Cache-to-Cache (C2C) analysis. This enables you to inspect cache-line contention and detect both true sharing and false sharing. Contention occurs when a processor core on a Symmetric Multi Processing (SMP) system modifies data items on the same cache line that is in use by other processors. All other processors using this cache line must then invalidate their copy and request an updated one, which can lead to degraded performance. The new c2c subcommand provides detailed information about the cache lines where contention has been detected, the processes reading and writing the data, the instructions causing the contention, and the Non-Uniform Memory Access (NUMA) nodes involved. (BZ#1391243) SCSI-MQ support in the lpfc driver The lpfc driver updated in Red Hat Enterprise Linux 7.4 can now enable the use of SCSI-MQ (multiqueue) with the lpfc_use_blk_mq=1 module parameter. The default value is 0 (disabled). Note that a recent performance testing at Red Hat with async IO over Fibre Channel adapters using SCSI-MQ has shown significant performance degradation under certain conditions. A fix is being tested but was not ready in time for Red Hat Enterprise Linux 7.4 General Availability. (BZ#1382101) | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.4_release_notes/new_features_kernel |
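Two of the kernel items above can be checked directly from a shell. The following is a minimal sketch: the first command reads the new Umask line from /proc, and the remaining commands raise user.max_user_namespaces to the value recommended in the user namespaces note; the drop-in file name is an arbitrary choice:
grep '^Umask' /proc/self/status
sysctl user.max_user_namespaces
echo "user.max_user_namespaces = 15000" > /etc/sysctl.d/99-userns.conf
sysctl -p /etc/sysctl.d/99-userns.conf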
13.2. Using SR-IOV | 13.2. Using SR-IOV This section covers the use of PCI passthrough to assign a Virtual Function of an SR-IOV capable multiport network card to a virtual machine as a network device. SR-IOV Virtual Functions (VFs) can be assigned to virtual machines by adding a device entry in <hostdev> with the virsh edit or virsh attach-device command. However, this can be problematic because unlike a regular network device, an SR-IOV VF network device does not have a permanent unique MAC address, and is assigned a new MAC address each time the host is rebooted. Because of this, even if the guest is assigned the same VF after a reboot, when the host is rebooted the guest determines its network adapter to have a new MAC address. As a result, the guest believes there is new hardware connected each time, and will usually require re-configuration of the guest's network settings. libvirt-0.9.10 and later contains the <interface type='hostdev'> interface device. Using this interface device, libvirt will first perform any network-specific hardware/switch initialization indicated (such as setting the MAC address, VLAN tag, or 802.1Qbh virtualport parameters), then perform the PCI device assignment to the guest. Using the <interface type='hostdev'> interface device requires: an SR-IOV-capable network card, host hardware that supports either the Intel VT-d or the AMD IOMMU extensions, and the PCI address of the VF to be assigned. For a list of network interface cards (NICs) with SR-IOV support, see https://access.redhat.com/articles/1390483 . Important Assignment of an SR-IOV device to a virtual machine requires that the host hardware supports the Intel VT-d or the AMD IOMMU specification. To attach an SR-IOV network device on an Intel or an AMD system, follow this procedure: Procedure 13.1. Attach an SR-IOV network device on an Intel or AMD system Enable Intel VT-d or the AMD IOMMU specifications in the BIOS and kernel On an Intel system, enable Intel VT-d in the BIOS if it is not enabled already. Refer to Procedure 12.1, "Preparing an Intel system for PCI device assignment" for procedural help on enabling Intel VT-d in the BIOS and kernel. Skip this step if Intel VT-d is already enabled and working. On an AMD system, enable the AMD IOMMU specifications in the BIOS if they are not enabled already. Refer to Procedure 12.2, "Preparing an AMD system for PCI device assignment" for procedural help on enabling IOMMU in the BIOS. Verify support Verify if the PCI device with SR-IOV capabilities is detected. This example lists an Intel 82576 network interface card which supports SR-IOV. Use the lspci command to verify whether the device was detected. Note that the output has been modified to remove all other devices. Start the SR-IOV kernel modules If the device is supported the driver kernel module should be loaded automatically by the kernel. Optional parameters can be passed to the module using the modprobe command. The Intel 82576 network interface card uses the igb driver kernel module. Activate Virtual Functions The max_vfs parameter of the igb module allocates the maximum number of Virtual Functions. The max_vfs parameter causes the driver to spawn, up to the value of the parameter in, Virtual Functions. For this particular card the valid range is 0 to 7 . Remove the module to change the variable. Restart the module with the max_vfs set to 7 or any number of Virtual Functions up to the maximum supported by your device. 
Make the Virtual Functions persistent Add the line options igb max_vfs=7 to any file in /etc/modprobe.d to make the Virtual Functions persistent. For example: Inspect the new Virtual Functions Using the lspci command, list the newly added Virtual Functions attached to the Intel 82576 network device. (Alternatively, use grep to search for Virtual Function to find devices that support Virtual Functions.) The identifier for the PCI device is found with the -n parameter of the lspci command. The Physical Functions correspond to 0b:00.0 and 0b:00.1. All Virtual Functions have Virtual Function in the description. Verify devices exist with virsh The libvirt service must recognize the device before adding a device to a virtual machine. libvirt uses a similar notation to the lspci output. The punctuation characters : and . in lspci output are changed to underscores ( _ ). Use the virsh nodedev-list command and the grep command to filter the Intel 82576 network device from the list of available host devices. 0b is the filter for the Intel 82576 network devices in this example. This may vary for your system and may result in additional devices. The serial numbers for the Virtual Functions and Physical Functions should be in the list. Get device details with virsh The pci_0000_0b_00_0 is one of the Physical Functions and pci_0000_0b_10_0 is the first corresponding Virtual Function for that Physical Function. Use the virsh nodedev-dumpxml command to get advanced output for both devices. This example adds the Virtual Function pci_0000_0b_10_0 to the virtual machine in Step 9. Note the bus, slot, and function parameters of the Virtual Function: these are required for adding the device. Copy these parameters into a temporary XML file, such as /tmp/new-interface.xml. <interface type='hostdev' managed='yes'> <source> <address type='pci' domain='0' bus='11' slot='16' function='0'/> </source> </interface> Note If you do not specify a MAC address, one will be automatically generated. The <virtualport> element is only used when connecting to an 802.1Qbh hardware switch. The <vlan> element is new for Red Hat Enterprise Linux 6.4 and this will transparently put the guest's device on the VLAN tagged 42. When the virtual machine starts, it should see a network device of the type provided by the physical adapter, with the configured MAC address. This MAC address will remain unchanged across host and guest reboots. The following <interface> example shows the syntax for the optional <mac address> , <virtualport> , and <vlan> elements. In practice, use either the <vlan> or <virtualport> element, not both simultaneously as shown in the example: ... <devices> ... <interface type='hostdev' managed='yes'> <source> <address type='pci' domain='0' bus='11' slot='16' function='0'/> </source> <mac address='52:54:00:6d:90:02'/> <vlan> <tag id='42'/> </vlan> <virtualport type='802.1Qbh'> <parameters profileid='finance'/> </virtualport> </interface> ... </devices> Add the Virtual Function to the virtual machine Add the Virtual Function to the virtual machine using the following command with the temporary file created in the previous step. This attaches the new device immediately and saves it for subsequent guest restarts. Using the --config option ensures the new device is available after future guest restarts. The virtual machine detects a new network interface card. This new card is the Virtual Function of the SR-IOV device. | [
"lspci 03:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01) 03:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)",
"modprobe igb [<option>=<VAL1>,<VAL2>,] lsmod |grep igb igb 87592 0 dca 6708 1 igb",
"modprobe -r igb",
"modprobe igb max_vfs=7",
"echo \"options igb max_vfs=7\" >>/etc/modprobe.d/igb.conf",
"lspci | grep 82576 0b:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01) 0b:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection(rev 01) 0b:10.0 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01) 0b:10.1 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01) 0b:10.2 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01) 0b:10.3 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01) 0b:10.4 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01) 0b:10.5 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01) 0b:10.6 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01) 0b:10.7 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01) 0b:11.0 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01) 0b:11.1 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01) 0b:11.2 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01) 0b:11.3 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01) 0b:11.4 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01) 0b:11.5 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)",
"virsh nodedev-list | grep 0b pci_0000_0b_00_0 pci_0000_0b_00_1 pci_0000_0b_10_0 pci_0000_0b_10_1 pci_0000_0b_10_2 pci_0000_0b_10_3 pci_0000_0b_10_4 pci_0000_0b_10_5 pci_0000_0b_10_6 pci_0000_0b_11_7 pci_0000_0b_11_1 pci_0000_0b_11_2 pci_0000_0b_11_3 pci_0000_0b_11_4 pci_0000_0b_11_5",
"virsh nodedev-dumpxml pci_0000_0b_00_0 <device> <name>pci_0000_0b_00_0</name> <parent>pci_0000_00_01_0</parent> <driver> <name>igb</name> </driver> <capability type='pci'> <domain>0</domain> <bus>11</bus> <slot>0</slot> <function>0</function> <product id='0x10c9'>82576 Gigabit Network Connection</product> <vendor id='0x8086'>Intel Corporation</vendor> </capability> </device>",
"virsh nodedev-dumpxml pci_0000_0b_10_0 <device> <name>pci_0000_0b_10_0</name> <parent>pci_0000_00_01_0</parent> <driver> <name>igbvf</name> </driver> <capability type='pci'> <domain>0</domain> <bus>11</bus> <slot>16</slot> <function>0</function> <product id='0x10ca'>82576 Virtual Function</product> <vendor id='0x8086'>Intel Corporation</vendor> </capability> </device>",
"<interface type='hostdev' managed='yes'> <source> <address type='pci' domain='0' bus='11' slot='16' function='0'/> </source> </interface>",
"<devices> <interface type='hostdev' managed='yes'> <source> <address type='pci' domain='0' bus='11' slot='16' function='0'/> </source> <mac address='52:54:00:6d:90:02'> <vlan> <tag id='42'/> </vlan> <virtualport type='802.1Qbh'> <parameters profileid='finance'/> </virtualport> </interface> </devices>",
"virsh attach-device MyGuest /tmp/new-interface.xml --config"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_host_configuration_and_guest_installation_guide/sect-Virtualization_Host_Configuration_and_Guest_Installation_Guide-SR_IOV-How_SR_IOV_Libvirt_Works |
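After the Virtual Function has been attached, it can be useful to confirm that the hostdev interface is present in the guest definition, and to know how to remove it again. A minimal sketch using the same guest name and temporary XML file as the procedure above:
virsh dumpxml MyGuest | grep -A 4 "type='hostdev'"
virsh detach-device MyGuest /tmp/new-interface.xml --config
As with attachment, the --config option makes the removal persist across guest restarts.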
probe::vm.munmap | probe::vm.munmap Name probe::vm.munmap - Fires when an munmap is requested Synopsis vm.munmap Values length the length of the memory segment address the requested address name name of the probe point Context The process calling munmap. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-vm-munmap |
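A minimal SystemTap one-liner that uses this probe point to print every munmap request on the system, assuming the systemtap package and matching kernel debuginfo are installed:
stap -e 'probe vm.munmap { printf("%s (%d): munmap address=0x%x length=%d\n", execname(), pid(), address, length) }'
The address and length values are the ones documented above; execname() and pid() are standard tapset helper functions.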
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/reference_architecture_for_deploying_red_hat_openshift_container_platform_on_red_hat_openstack_platform/making-open-source-more-inclusive |
Embedding Data Grid in Java Applications | Embedding Data Grid in Java Applications Red Hat Data Grid 8.5 Create embedded caches with Data Grid Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/embedding_data_grid_in_java_applications/index |
1.3. Security Threats | 1.3. Security Threats To plan and implement a good security strategy, first be aware of some of the issues which determined, motivated attackers exploit to compromise systems. 1.3.1. Threats to Network Security Bad practices when configuring the following aspects of a network can increase the risk of attack. 1.3.1.1. Insecure Architectures A misconfigured network is a primary entry point for unauthorized users. Leaving a trust-based, open local network vulnerable to the highly-insecure Internet is much like leaving a door ajar in a crime-ridden neighborhood - nothing may happen for an arbitrary amount of time, but eventually someone exploits the opportunity. 1.3.1.1.1. Broadcast Networks System administrators often fail to realize the importance of networking hardware in their security schemes. Simple hardware such as hubs and routers rely on the broadcast or non-switched principle; that is, whenever a node transmits data across the network to a recipient node, the hub or router sends a broadcast of the data packets until the recipient node receives and processes the data. This method is the most vulnerable to address resolution protocol ( ARP ) or media access control ( MAC ) address spoofing by both outside intruders and unauthorized users on local hosts. 1.3.1.1.2. Centralized Servers Another potential networking pitfall is the use of centralized computing. A common cost-cutting measure for many businesses is to consolidate all services to a single powerful machine. This can be convenient as it is easier to manage and costs considerably less than multiple-server configurations. However, a centralized server introduces a single point of failure on the network. If the central server is compromised, it may render the network completely useless or worse, prone to data manipulation or theft. In these situations, a central server becomes an open door which allows access to the entire network. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sect-security_guide-security_threats |
probe::nfs.fop.sendfile | probe::nfs.fop.sendfile Name probe::nfs.fop.sendfile - NFS client send file operation Synopsis nfs.fop.sendfile Values cache_valid cache related bit mask flag ppos current position of file count read bytes dev device identifier attrtimeo how long the cached information is assumed to be valid. We need to revalidate the cached attrs for this inode if jiffies - read_cache_jiffies > attrtimeo. ino inode number cache_time when we started read-caching this inode | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-nfs-fop-sendfile |
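As with other tapset probes, a short stap one-liner can print the documented values whenever the probe fires. This is a sketch only; it assumes systemtap and kernel debuginfo are installed, and the exact set of available values can vary between SystemTap versions:
stap -e 'probe nfs.fop.sendfile { printf("dev=%d ino=%d count=%d\n", dev, ino, count) }'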
4.5.2. Configuring Fence Daemon Properties | 4.5.2. Configuring Fence Daemon Properties Clicking on the Fence Daemon tab displays the Fence Daemon Properties page, which provides an interface for configuring Post Fail Delay and Post Join Delay . The values you configure for these parameters are general fencing properties for the cluster. To configure specific fence devices for the nodes of the cluster, use the Fence Devices menu item of the cluster display, as described in Section 4.6, "Configuring Fence Devices" . The Post Fail Delay parameter is the number of seconds the fence daemon ( fenced ) waits before fencing a node (a member of the fence domain) after the node has failed. The Post Fail Delay default value is 0 . Its value may be varied to suit cluster and network performance. The Post Join Delay parameter is the number of seconds the fence daemon ( fenced ) waits before fencing a node after the node joins the fence domain. luci sets the Post Join Delay value to 6 . A typical setting for Post Join Delay is between 20 and 30 seconds, but can vary according to cluster and network performance. Enter the values required and click Apply for changes to take effect. Note For more information about Post Join Delay and Post Fail Delay , see the fenced (8) man page. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-config-fencedaemon-conga-ca |
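The values configured through luci are written to the cluster configuration file, so a quick way to verify them on a node is to look for the fence_daemon element. A minimal sketch, assuming the default configuration path; the attribute values shown in the comment are examples only:
grep fence_daemon /etc/cluster/cluster.conf
# expected output resembles: <fence_daemon post_fail_delay="0" post_join_delay="30"/>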
REST API Guide | REST API Guide Red Hat Virtualization 4.3 Using the Red Hat Virtualization REST Application Programming Interface Red Hat Virtualization Documentation Team [email protected] Abstract This guide describes the Red Hat Virtualization Manager Representational State Transfer Application Programming Interface. This guide is generated from documentation comments in the ovirt-engine-api-model code, and is currently partially complete. Updated versions of this documentation will be published as new content becomes available. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/rest_api_guide/index |
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/considerations_in_adopting_rhel_8/proc_providing-feedback-on-red-hat-documentation_considerations-in-adopting-rhel-8 |
Chapter 150. StrimziPodSetStatus schema reference | Chapter 150. StrimziPodSetStatus schema reference Used in: StrimziPodSet Properties: conditions (Condition array): List of status conditions. observedGeneration (integer): The generation of the CRD that was last reconciled by the operator. pods (integer): Number of pods managed by this StrimziPodSet resource. readyPods (integer): Number of pods managed by this StrimziPodSet resource that are ready. currentPods (integer): Number of pods managed by this StrimziPodSet resource that have the current revision. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-StrimziPodSetStatus-reference |
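These status fields can be read from the command line once a StrimziPodSet exists. A minimal sketch, assuming the plural resource name strimzipodsets and a my-project namespace:
oc get strimzipodsets -n my-project
oc get strimzipodsets -n my-project -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.status.readyPods}{"/"}{.status.pods}{"\n"}{end}'
The second command prints readyPods against pods for each StrimziPodSet in the namespace.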
Chapter 2. Installation | Chapter 2. Installation This chapter describes in detail how to get access to the content set, install Red Hat Software Collections 3.7 on the system, and rebuild Red Hat Software Collections. 2.1. Getting Access to Red Hat Software Collections The Red Hat Software Collections content set is available to customers with Red Hat Enterprise Linux subscriptions listed in the Knowledgebase article How to use Red Hat Software Collections (RHSCL) or Red Hat Developer Toolset (DTS)? . For information on how to register your system with Red Hat Subscription Management (RHSM), see Using and Configuring Red Hat Subscription Manager . For detailed instructions on how to enable Red Hat Software Collections using RHSM, see Section 2.1.1, "Using Red Hat Subscription Management" . Since Red Hat Software Collections 2.2, the Red Hat Software Collections and Red Hat Developer Toolset content is available also in the ISO format at https://access.redhat.com/downloads , specifically for Server and Workstation . Note that packages that require the Optional repository, which are listed in Section 2.1.2, "Packages from the Optional Repository" , cannot be installed from the ISO image. Note Packages that require the Optional repository cannot be installed from the ISO image. A list of packages that require enabling of the Optional repository is provided in Section 2.1.2, "Packages from the Optional Repository" . Beta content is unavailable in the ISO format. 2.1.1. Using Red Hat Subscription Management If your system is registered with Red Hat Subscription Management, complete the following steps to attach the subscription that provides access to the repository for Red Hat Software Collections and enable the repository: Display a list of all subscriptions that are available for your system and determine the pool ID of a subscription that provides Red Hat Software Collections. To do so, type the following at a shell prompt as root : subscription-manager list --available For each available subscription, this command displays its name, unique identifier, expiration date, and other details related to it. The pool ID is listed on a line beginning with Pool Id . Attach the appropriate subscription to your system by running the following command as root : subscription-manager attach --pool= pool_id Replace pool_id with the pool ID you determined in the step. To verify the list of subscriptions your system has currently attached, type as root : subscription-manager list --consumed Display the list of available Yum list repositories to retrieve repository metadata and determine the exact name of the Red Hat Software Collections repositories. As root , type: subscription-manager repos --list Or alternatively, run yum repolist all for a brief list. The repository names depend on the specific version of Red Hat Enterprise Linux you are using and are in the following format: Replace variant with the Red Hat Enterprise Linux system variant, that is, server or workstation . Note that Red Hat Software Collections is supported neither on the Client nor on the ComputeNode variant. Enable the appropriate repository by running the following command as root : subscription-manager repos --enable repository Once the subscription is attached to the system, you can install Red Hat Software Collections as described in Section 2.2, "Installing Red Hat Software Collections" . 
For more information on how to register your system using Red Hat Subscription Management and associate it with subscriptions, see Using and Configuring Red Hat Subscription Manager . Note Subscription through RHN is no longer available. 2.1.2. Packages from the Optional Repository Some of the Red Hat Software Collections packages require the Optional repository to be enabled in order to complete the full installation of these packages. For detailed instructions on how to subscribe your system to this repository, see the relevant Knowledgebase article How to access Optional and Supplementary channels, and -devel packages using Red Hat Subscription Management (RHSM)? . Packages from Software Collections for Red Hat Enterprise Linux that require the Optional repository to be enabled are listed in the tables below. Note that packages from the Optional repository are unsupported. For details, see the Knowledgebase article Support policy of the optional and supplementary channels in Red Hat Enterprise Linux . Table 2.1. Packages That Require Enabling of the Optional Repository in Red Hat Enterprise Linux 7 Package from a Software Collection Required Package from the Optional Repository devtoolset-10-build scl-utils-build devtoolset-10-dyninst-testsuite glibc-static devtoolset-10-elfutils-debuginfod bsdtar devtoolset-10-gcc-plugin-devel libmpc-devel devtoolset-10-gdb source-highlight devtoolset-9-build scl-utils-build devtoolset-9-dyninst-testsuite glibc-static devtoolset-9-gcc-plugin-devel libmpc-devel devtoolset-9-gdb source-highlight httpd24-mod_ldap apr-util-ldap httpd24-mod_session apr-util-openssl python27-python-debug tix python27-python-devel scl-utils-build python27-tkinter tix rh-git227-git-cvs cvsps rh-git227-git-svn perl-Git-SVN, subversion rh-git227-perl-Git-SVN subversion-perl rh-java-common-ant-apache-bsf rhino rh-java-common-batik rhino rh-maven35-build scl-utils-build rh-maven35-xpp3-javadoc java-1.8.0-openjdk-javadoc-zip, java-11-openjdk-javadoc, java-1.7.0-openjdk-javadoc, java-11-openjdk-javadoc-zip, java-1.8.0-openjdk-javadoc rh-php73-php-devel pcre2-devel rh-php73-php-pspell aspell rh-python38-python-devel scl-utils-build 2.2. Installing Red Hat Software Collections Red Hat Software Collections is distributed as a collection of RPM packages that can be installed, updated, and uninstalled by using the standard package management tools included in Red Hat Enterprise Linux. Note that a valid subscription is required to install Red Hat Software Collections on your system. For detailed instructions on how to associate your system with an appropriate subscription and get access to Red Hat Software Collections, see Section 2.1, "Getting Access to Red Hat Software Collections" . Use of Red Hat Software Collections 3.7 requires the removal of any earlier pre-release versions. If you have installed any version of Red Hat Software Collections 2.1 component, uninstall it from your system and install the new version as described in the Section 2.3, "Uninstalling Red Hat Software Collections" and Section 2.2.1, "Installing Individual Software Collections" sections. The in-place upgrade from Red Hat Enterprise Linux 6 to Red Hat Enterprise Linux 7 is not supported by Red Hat Software Collections. As a consequence, the installed Software Collections might not work correctly after the upgrade. 
If you want to upgrade from Red Hat Enterprise Linux 6 to Red Hat Enterprise Linux 7, it is strongly recommended to remove all Red Hat Software Collections packages, perform the in-place upgrade, update the Red Hat Software Collections repository, and install the Software Collections packages again. It is advisable to back up all data before upgrading. 2.2.1. Installing Individual Software Collections To install any of the Software Collections that are listed in Table 1.1, "Red Hat Software Collections Components" , install the corresponding meta package by typing the following at a shell prompt as root : yum install software_collection ... Replace software_collection with a space-separated list of Software Collections you want to install. For example, to install rh-php73 and rh-mariadb105 , type as root : This installs the main meta package for the selected Software Collection and a set of required packages as its dependencies. For information on how to install additional packages such as additional modules, see Section 2.2.2, "Installing Optional Packages" . 2.2.2. Installing Optional Packages Each component of Red Hat Software Collections is distributed with a number of optional packages that are not installed by default. To list all packages that are part of a certain Software Collection but are not installed on your system, type the following at a shell prompt: yum list available software_collection -\* To install any of these optional packages, type as root : yum install package_name ... Replace package_name with a space-separated list of packages that you want to install. For example, to install the rh-perl530-perl-CPAN and rh-perl530-perl-Archive-Tar , type: 2.2.3. Installing Debugging Information To install debugging information for any of the Red Hat Software Collections packages, make sure that the yum-utils package is installed and type the following command as root : debuginfo-install package_name For example, to install debugging information for the rh-ruby27-ruby package, type: Note that you need to have access to the repository with these packages. If your system is registered with Red Hat Subscription Management, enable the rhel- variant -rhscl-6-debug-rpms or rhel- variant -rhscl-7-debug-rpms repository as described in Section 2.1.1, "Using Red Hat Subscription Management" . For more information on how to get access to debuginfo packages, see How can I download or install debuginfo packages for RHEL systems? . 2.3. Uninstalling Red Hat Software Collections To uninstall any of the Software Collections components, type the following at a shell prompt as root : yum remove software_collection \* Replace software_collection with the Software Collection component you want to uninstall. Note that uninstallation of the packages provided by Red Hat Software Collections does not affect the Red Hat Enterprise Linux system versions of these tools. 2.4. Rebuilding Red Hat Software Collections <collection>-build packages are not provided by default. If you wish to rebuild a collection and do not want or cannot use the rpmbuild --define 'scl foo' command, you first need to rebuild the metapackage, which provides the <collection>-build package. Note that existing collections should not be rebuilt with different content. To add new packages into an existing collection, you need to create a new collection containing the new packages and make it dependent on packages from the original collection. The original collection has to be used without changes. 
For detailed information on building Software Collections, refer to the Red Hat Software Collections Packaging Guide . | [
"rhel- variant -rhscl-6-rpms rhel- variant -rhscl-6-debug-rpms rhel- variant -rhscl-6-source-rpms rhel-server-rhscl-6-eus-rpms rhel-server-rhscl-6-eus-source-rpms rhel-server-rhscl-6-eus-debug-rpms rhel- variant -rhscl-7-rpms rhel- variant -rhscl-7-debug-rpms rhel- variant -rhscl-7-source-rpms rhel-server-rhscl-7-eus-rpms rhel-server-rhscl-7-eus-source-rpms rhel-server-rhscl-7-eus-debug-rpms",
"~]# yum install rh-php73 rh-mariadb105",
"~]# yum install rh-perl530-perl-CPAN rh-perl530-perl-Archive-Tar",
"~]# debuginfo-install rh-ruby27-ruby"
] | https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/3.7_release_notes/chap-installation |
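Putting the steps above together, a typical end-to-end flow on a Red Hat Enterprise Linux 7 Server system might look as follows. This is a sketch only; the repository name must match your system variant and the collection name must match the collection you actually want to install:
subscription-manager repos --enable rhel-server-rhscl-7-rpms
yum install rh-ruby27
scl enable rh-ruby27 'ruby --version'
The scl enable command runs a single command inside the collection's environment without altering the system default Ruby.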
Chapter 10. Monitoring hosts by using Red Hat Insights | Chapter 10. Monitoring hosts by using Red Hat Insights You can use Insights to diagnose systems and downtime related to security exploits, performance degradation, and stability failures. You can use the Insights dashboard to quickly identify key risks to stability, security, and performance. You can sort by category, view details of the impact and resolution, and then determine what systems are affected. To use Insights to monitor hosts that you manage with Satellite, you must first install Insights on your hosts and register your hosts with Insights. For new Satellite hosts, you can install and configure Insights during host registration to Satellite. For more information, see Section 4.3, "Registering hosts by using global registration" . For hosts already registered to Satellite, you can install and configure Insights on your hosts by using an Ansible role. For more information, see Section 10.3, "Deploying Red Hat Insights by using the Ansible role" . Additional information To view the logs for all plugins, go to /var/log/foreman/production.log . If you have problems connecting to Insights, ensure that your certificates are up-to-date. Refresh your subscription manifest to update your certificates. You can change the default schedule for running insights-client by configuring insights-client.timer on a host. For more information, see Changing the insights-client schedule in the Client Configuration Guide for Red Hat Insights . 10.1. Access to information from Insights in Satellite You can access the additional information available for hosts from Red Hat Insights in the following places in the Satellite web UI: Navigate to Configure > Insights where the vertical ellipsis to the Remediate button provides a View in Red Hat Insights link to the general recommendations page. On each recommendation line, the vertical ellipsis provides a View in Red Hat Insights link to the recommendation rule, and, if one is available for that recommendation, a Knowledgebase article link. For additional information, navigate to Hosts > All Hosts . If the host has recommendations listed, click on the number of recommendations. On the Insights tab, the vertical ellipsis to the Remediate button provides a Go To Satellite Insights page link to information for the system, and a View in Red Hat Insights link to host details on the console. 10.2. Excluding hosts from rh-cloud and insights-client reports You can set the host_registration_insights parameter to False to omit rh-cloud and insights-client reports. Satellite will exclude the hosts from rh-cloud reports and block insights-client from uploading a report to the cloud. Procedure In the Satellite web UI, navigate to Host > All Hosts . Select any host for which you want to change the value. On the Parameters tab, click on the edit button of host_registration_insights . Set the value to False . If you set the parameter to false on a host that is already reported on the Red Hat Hybrid Cloud Console , it will be still removed automatically from the inventory. However, this process can take some time to complete. Additional resources You can set this parameter at any level. For more information, see Host parameter hierarchy in Provisioning hosts . 10.3. Deploying Red Hat Insights by using the Ansible role The RedHatInsights.insights-client Ansible role is used to automate the installation and registration of hosts with Insights. 
For more information about adding this role to your Satellite, see Getting Started with Ansible in Satellite in Managing configurations by using Ansible integration . Procedure Add the RedHatInsights.insights-client role to the hosts. For new hosts, see Section 2.1, "Creating a host in Red Hat Satellite" . For existing hosts, see Using Ansible Roles to Automate Repetitive Tasks on Clients in Managing configurations by using Ansible integration . To run the RedHatInsights.insights-client role on your host, navigate to Hosts > All Hosts and click the name of the host that you want to use. On the host details page, expand the Schedule a job dropdown menu. Click Run Ansible roles . 10.4. Configuring synchronization of Insights recommendations for hosts You can enable automatic synchronization of the recommendations from Red Hat Hybrid Cloud Console that occurs daily by default. If you leave the setting disabled, you can synchronize the recommendations manually. Procedures To get the recommendations automatically: In the Satellite web UI, navigate to Configure > Insights . Enable Sync Automatically . To get the recommendations manually: In the Satellite web UI, navigate to Configure > Insights . On the vertical ellipsis, click Sync Recommendations . 10.5. Configuring automatic removal of hosts from the Insights Inventory When hosts are removed from Satellite, they can also be removed from the inventory of Red Hat Insights, either automatically or manually. You can configure automatic removal of hosts from the Insights Inventory during Red Hat Hybrid Cloud Console synchronization with Satellite that occurs daily by default. If you leave the setting disabled, you can still remove the bulk of hosts from the Inventory manually. Prerequisites Your user account must have the permission of view_foreman_rh_cloud to view the Inventory Upload page in Satellite web UI. Procedure In the Satellite web UI, navigate to Configure > Inventory Upload . Enable the Automatic Mismatch Deletion setting. 10.6. Creating an Insights remediation plan for hosts With Satellite, you can create a Red Hat Insights remediation plan and run the plan on Satellite hosts. Procedure In the Satellite web UI, navigate to Configure > Insights . On the Red Hat Insights page, select the number of recommendations that you want to include in an Insights plan. You can only select the recommendations that have an associated playbook. Click Remediate . In the Remediation Summary window, you can select the Resolutions to apply. Use the Filter field to search for specific keywords. Click Remediate . In the Job Invocation page, do not change the contents of precompleted fields. Optional. For more advanced configuration of the Remote Execution Job, click Show Advanced Fields . Select the Type of query you require. Select the Schedule you require. Click Submit . Alternatively: In the Satellite web UI, navigate to Hosts > All Hosts . Select a host. On the Host details page, click Recommendations . On the Red Hat Insights page, select the number of recommendations you want to include in an Insights plan and proceed as before. In the Jobs window, you can view the progress of your plan. | null | https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/managing_hosts/monitoring-hosts-by-using-red-hat-insights |
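On an individual host, the Insights client itself can be used to verify that connectivity, registration, and uploads work before relying on the Satellite integration. A minimal sketch, assuming the insights-client package is already installed on the host:
insights-client --test-connection
insights-client --register
insights-client --status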
20.43. Setting Schedule Parameters | 20.43. Setting Schedule Parameters The virsh schedinfo command modifies host scheduling parameters of the virtual machine process on the host machine. The following command format should be used: Each parameter is explained below: domain - the guest virtual machine domain --set - the string placed here is the controller or action that is to be called. The string uses the parameter = value format. Additional parameters or values should be added as well if required. --current - when used with --set , uses the specified string as the current scheduler information. When used without --set , it displays the current scheduler information. --config - when used with --set , uses the specified string on the next reboot. When used without --set , it displays the scheduler information that is saved in the configuration file. --live - when used with --set , uses the specified string on a guest virtual machine that is currently running. When used without --set , it displays the configuration setting currently used by the running virtual machine. The scheduler can be set with any of the following parameters: cpu_shares , vcpu_period and vcpu_quota . These parameters are applied to the vCPU threads. The following shows how the parameters map to cgroup field names: cpu_shares :cpu.shares vcpu_period :cpu.cfs_period_us vcpu_quota :cpu.cfs_quota_us Example 20.98. schedinfo show This example shows the shell guest virtual machine's schedule information Example 20.99. schedinfo set In this example, the cpu_shares is changed to 2046. This affects the current state and not the configuration file. libvirt also supports the emulator_period and emulator_quota parameters that modify the setting of the emulator process. | [
"virsh schedinfo domain --set --current --config --live",
"virsh schedinfo shell Scheduler : posix cpu_shares : 1024 vcpu_period : 100000 vcpu_quota : -1",
"virsh schedinfo --set cpu_shares=2046 shell Scheduler : posix cpu_shares : 2046 vcpu_period : 100000 vcpu_quota : -1"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-Managing_guest_virtual_machines_with_virsh-Setting_schedule_parameters |
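For example, with the default vcpu_period of 100000, capping vcpu_quota at 50000 limits each vCPU of the shell guest to roughly half of a physical CPU. This is a sketch only; the guest name and values are illustrative:
virsh schedinfo shell --live --set vcpu_quota=50000
virsh schedinfo shell --config --set cpu_shares=2046
virsh schedinfo shell
The last command displays the resulting scheduler information, as in Example 20.98.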
2.8.9.2.2. Command Options | 2.8.9.2.2. Command Options Command options instruct iptables to perform a specific action. Only one command option is allowed per iptables command. With the exception of the help command, all commands are written in upper-case characters. The iptables command options are as follows: -A - Appends the rule to the end of the specified chain. Unlike the -I option described below, it does not take an integer argument. It always appends the rule to the end of the specified chain. -D <integer> | <rule> - Deletes a rule in a particular chain by number (such as 5 for the fifth rule in a chain), or by rule specification. The rule specification must exactly match an existing rule. -E - Renames a user-defined chain. A user-defined chain is any chain other than the default, pre-existing chains. (Refer to the -N option, below, for information on creating user-defined chains.) This is a cosmetic change and does not affect the structure of the table. Note If you attempt to rename one of the default chains, the system reports a Match not found error. You cannot rename the default chains. -F - Flushes the selected chain, which effectively deletes every rule in the chain. If no chain is specified, this command flushes every rule from every chain. -h - Provides a list of command structures, as well as a quick summary of command parameters and options. -I [<integer>] - Inserts the rule in the specified chain at a point specified by a user-defined integer argument. If no argument is specified, the rule is inserted at the top of the chain. Important As noted above, the order of rules in a chain determines which rules apply to which packets. This is important to remember when adding rules using either the -A or -I option. This is especially important when adding rules using the -I with an integer argument. If you specify an existing number when adding a rule to a chain, iptables adds the new rule before (or above) the existing rule. -L - Lists all of the rules in the chain specified after the command. To list all rules in all chains in the default filter table, do not specify a chain or table. Otherwise, the following syntax should be used to list the rules in a specific chain in a particular table: iptables -L <chain-name> -t <table-name> Additional options for the -L command option, which provide rule numbers and allow more verbose rule descriptions, are described in Section 2.8.9.2.6, "Listing Options" . -N - Creates a new chain with a user-specified name. The chain name must be unique, otherwise an error message is displayed. -P - Sets the default policy for the specified chain, so that when packets traverse an entire chain without matching a rule, they are sent to the specified target, such as ACCEPT or DROP. -R - Replaces a rule in the specified chain. The rule's number must be specified after the chain's name. The first rule in a chain corresponds to rule number one. -X - Deletes a user-specified chain. You cannot delete a built-in chain. -Z - Sets the byte and packet counters in all chains for a table to zero. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sect-security_guide-command_options_for_iptables-command_options |
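The following short sequence illustrates several of the command options above in combination; the chain name, rule positions, and port are arbitrary examples:
iptables -N MYCHAIN
iptables -I INPUT 1 -j MYCHAIN
iptables -A MYCHAIN -p tcp --dport 22 -j ACCEPT
iptables -L MYCHAIN -n --line-numbers
iptables -D INPUT 1
iptables -F MYCHAIN
iptables -X MYCHAIN
Note that MYCHAIN must be removed from the INPUT chain (the -D command) before it can be flushed and deleted.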
8.100. java-1.6.0-openjdk | 8.100. java-1.6.0-openjdk 8.100.1. RHBA-2014:1527 - java-1.6.0-openjdk bug fix and enhancement update Updated java-1.6.0-openjdk packages that fix two bugs and add one enhancement are now available for Red Hat Enterprise Linux 6. The java-1.6.0-openjdk packages provide the OpenJDK 6 Java Runtime Environment and the OpenJDK 6 Java Software Development Kit. Bug Fixes BZ# 1112806 A bug previously caused the LineBreakMeasurer class to produce the ArrayIndexOutOfBoundsException error when Java attempted to display certain characters in certain fonts. This update fixes the bug, and Java now displays the affected characters correctly. BZ# 1098399 Prior to this update, an application accessing an unsynchronized HashMap could potentially enter an infinite loop and consume an excessive amount of CPU resources. As a consequence, the OpenJDK server became unresponsive. This update prevents unsynchronized HashMap access from causing an infinite loop, and as a result, the OpenJDK server no longer hangs in the described scenario. In addition, this update adds the following enhancement: BZ# 1059925 Shared Java libraries have been modified to allow users to run Java with the cap_net_bind_service, cap_net_admin, and cap_net_raw capabilities granted. Users of java-1.6.0-openjdk are advised to upgrade to these updated packages, which fix these bugs and add this enhancement. All running instances of OpenJDK Java must be restarted for the update to take effect. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/java-1.6.0-openjdk
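As a hedged illustration of the capability enhancement described above, an administrator could grant the Java binary these capabilities with setcap; the JDK path below is an assumed example and varies between systems:

setcap 'cap_net_bind_service,cap_net_admin,cap_net_raw+ep' /usr/lib/jvm/jre-1.6.0-openjdk.x86_64/bin/java   # assumed path; adjust to your installation
getcap /usr/lib/jvm/jre-1.6.0-openjdk.x86_64/bin/java                                                       # verify the granted capabilities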
Chapter 3. Installing RHEL AI on AWS | Chapter 3. Installing RHEL AI on AWS To install and deploy Red Hat Enterprise Linux AI on AWS, you must first convert the RHEL AI image into an Amazon Machine Image (AMI). In this process, you create the following resources: An S3 bucket with the RHEL AI image AWS EC2 snapshots An AWS AMI An AWS instance 3.1. Converting the RHEL AI image to an AWS AMI Before deploying RHEL AI on an AWS machine, you must set up an S3 bucket and convert the RHEL AI image to an AWS AMI. Prerequisites You have an Access Key ID configured in the AWS IAM account manager . Procedure Install the AWS command-line tool by following the AWS documentation. You need to create an S3 bucket and set the permissions to allow image file conversion to AWS snapshots. Create the necessary environment variables by running the following commands: USD export BUCKET=<custom_bucket_name> USD export RAW_AMI=nvidia-bootc.ami USD export AMI_NAME="rhel-ai" USD export DEFAULT_VOLUME_SIZE=1000 Note On AWS, the DEFAULT_VOLUME_SIZE is measured in GBs. You can create an S3 bucket by running the following command: USD aws s3 mb s3://USDBUCKET You must create a trust-policy.json file with the necessary configurations for generating an S3 role for your bucket: USD printf '{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": "vmie.amazonaws.com" }, "Action": "sts:AssumeRole", "Condition": { "StringEquals":{ "sts:Externalid": "vmimport" } } } ] }' > trust-policy.json Create an S3 role for your bucket that you can name. In the following example command, vmimport is the name of the role. USD aws iam create-role --role-name vmimport --assume-role-policy-document file://trust-policy.json You must create a role-policy.json file with the necessary configurations for generating a policy for your bucket: USD printf '{ "Version":"2012-10-17", "Statement":[ { "Effect":"Allow", "Action":[ "s3:GetBucketLocation", "s3:GetObject", "s3:ListBucket" ], "Resource":[ "arn:aws:s3:::%s", "arn:aws:s3:::%s/*" ] }, { "Effect":"Allow", "Action":[ "ec2:ModifySnapshotAttribute", "ec2:CopySnapshot", "ec2:RegisterImage", "ec2:Describe*" ], "Resource":"*" } ] }' USDBUCKET USDBUCKET > role-policy.json Create a policy for your bucket by running the following command: USD aws iam put-role-policy --role-name vmimport --policy-name vmimport-USDBUCKET --policy-document file://role-policy.json Now that your S3 bucket is set up, you need to download the RAW image from the Red Hat Enterprise Linux AI download page. Copy the RAW image link and add it to the following command: USD curl -Lo disk.raw <link-to-raw-file> Upload the image to the S3 bucket with the following command: USD aws s3 cp disk.raw s3://USDBUCKET/USDRAW_AMI Convert the image to a snapshot and store it in the task_id variable by running the following commands: USD printf '{ "Description": "my-image", "Format": "raw", "UserBucket": { "S3Bucket": "%s", "S3Key": "%s" } }' USDBUCKET USDRAW_AMI > containers.json USD task_id=USD(aws ec2 import-snapshot --disk-container file://containers.json | jq -r .ImportTaskId) You can check the progress of the disk image to snapshot conversion job with the following command: USD aws ec2 describe-import-snapshot-tasks --filters Name=task-state,Values=active Once the conversion job is complete, you can get the snapshot ID and store it in a variable called snapshot_id by running the following command: USD snapshot_id=USD(aws ec2 describe-snapshots | jq -r '.Snapshots[] | select(.Description | contains("'USD{task_id}'")) | .SnapshotId') Add a tag name to the snapshot, so it's easier to identify, by running the following command: USD aws ec2 create-tags --resources USDsnapshot_id --tags Key=Name,Value="USDAMI_NAME" Register an AMI from the snapshot with the following command: USD ami_id=USD(aws ec2 register-image \ --name "USDAMI_NAME" \ --description "USDAMI_NAME" \ --architecture x86_64 \ --root-device-name /dev/sda1 \ --block-device-mappings "DeviceName=/dev/sda1,Ebs={VolumeSize=USD{DEFAULT_VOLUME_SIZE},SnapshotId=USD{snapshot_id}}" \ --virtualization-type hvm \ --ena-support \ | jq -r .ImageId) You can add another tag name to identify the AMI by running the following command: USD aws ec2 create-tags --resources USDami_id --tags Key=Name,Value="USDAMI_NAME"
3.2. Deploying your instance on AWS using the CLI You can launch the AWS instance with your new RHEL AI AMI from the AWS web console or the CLI, whichever deployment method you prefer. The following procedure shows how you can use the CLI to launch your AWS instance with the custom AMI. If you choose to use the CLI as a deployment option, there are several configurations you have to create, as shown in "Prerequisites". Prerequisites You created your RHEL AI AMI. For more information, see "Converting the RHEL AI image to an AWS AMI". You have the AWS command-line tool installed and properly configured with your aws_access_key_id and aws_secret_access_key. You configured your Virtual Private Cloud (VPC). You created a subnet for your instance. You created an SSH key pair. You created a security group on AWS. Procedure You need to gather the IDs of several resources to use as parameters. To access the image ID, run the following command: USD aws ec2 describe-images --owners self To access the security group ID, run the following command: USD aws ec2 describe-security-groups To access the subnet ID, run the following command: USD aws ec2 describe-subnets Populate the environment variables to use when you create the instance: USD instance_name=rhel-ai-instance USD ami=<ami-id> USD instance_type=<instance-type-size> USD key_name=<key-pair-name> USD security_group=<sg-id> USD subnet=<subnet-id> USD disk_size=<size-of-disk> Create your instance using the variables by running the following command: USD aws ec2 run-instances \ --image-id USDami \ --instance-type USDinstance_type \ --key-name USDkey_name \ --security-group-ids USDsecurity_group \ --subnet-id USDsubnet \ --block-device-mappings DeviceName=/dev/sda1,Ebs='{VolumeSize='USDdisk_size'}' \ --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value='USDinstance_name'}]' User account The default user account in the RHEL AI AMI is cloud-user. It has full sudo permissions without requiring a password. Verification To verify that your Red Hat Enterprise Linux AI tools are installed correctly, you need to run the ilab command: USD ilab Example output USD ilab Usage: ilab [OPTIONS] COMMAND [ARGS]... CLI for interacting with InstructLab. If this is your first time running ilab, it's best to start with `ilab config init` to create the environment. Options: --config PATH Path to a configuration file. [default: /home/cloud--user/.config/instructlab/config.yaml] -v, --verbose Enable debug logging (repeat for even more verbosity) --version Show the version and exit. --help Show this message and exit. Commands: config Command Group for Interacting with the Config of InstructLab. data Command Group for Interacting with the Data generated by... model Command Group for Interacting with the Models in InstructLab.
system Command group for all system-related command calls. taxonomy Command Group for Interacting with the Taxonomy of InstructLab. Aliases: chat model chat convert model convert diff taxonomy diff download model download evaluate model evaluate generate data generate init config init list model list serve model serve sysinfo system info test model test train model train | [
"export BUCKET=<custom_bucket_name> export RAW_AMI=nvidia-bootc.ami export AMI_NAME=\"rhel-ai\" export DEFAULT_VOLUME_SIZE=1000",
"aws s3 mb s3://USDBUCKET",
"printf '{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"Service\": \"vmie.amazonaws.com\" }, \"Action\": \"sts:AssumeRole\", \"Condition\": { \"StringEquals\":{ \"sts:Externalid\": \"vmimport\" } } } ] }' > trust-policy.json",
"aws iam create-role --role-name vmimport --assume-role-policy-document file://trust-policy.json",
"printf '{ \"Version\":\"2012-10-17\", \"Statement\":[ { \"Effect\":\"Allow\", \"Action\":[ \"s3:GetBucketLocation\", \"s3:GetObject\", \"s3:ListBucket\" ], \"Resource\":[ \"arn:aws:s3:::%s\", \"arn:aws:s3:::%s/*\" ] }, { \"Effect\":\"Allow\", \"Action\":[ \"ec2:ModifySnapshotAttribute\", \"ec2:CopySnapshot\", \"ec2:RegisterImage\", \"ec2:Describe*\" ], \"Resource\":\"*\" } ] }' USDBUCKET USDBUCKET > role-policy.json",
"aws iam put-role-policy --role-name vmimport --policy-name vmimport-USDBUCKET --policy-document file://role-policy.json",
"curl -Lo disk.raw <link-to-raw-file>",
"aws s3 cp disk.raw s3://USDBUCKET/USDRAW_AMI",
"printf '{ \"Description\": \"my-image\", \"Format\": \"raw\", \"UserBucket\": { \"S3Bucket\": \"%s\", \"S3Key\": \"%s\" } }' USDBUCKET USDRAW_AMI > containers.json",
"task_id=USD(aws ec2 import-snapshot --disk-container file://containers.json | jq -r .ImportTaskId)",
"aws ec2 describe-import-snapshot-tasks --filters Name=task-state,Values=active",
"snapshot_id=USD(aws ec2 describe-snapshots | jq -r '.Snapshots[] | select(.Description | contains(\"'USD{task_id}'\")) | .SnapshotId')",
"aws ec2 create-tags --resources USDsnapshot_id --tags Key=Name,Value=\"USDAMI_NAME\"",
"ami_id=USD(aws ec2 register-image --name \"USDAMI_NAME\" --description \"USDAMI_NAME\" --architecture x86_64 --root-device-name /dev/sda1 --block-device-mappings \"DeviceName=/dev/sda1,Ebs={VolumeSize=USD{DEFAULT_VOLUME_SIZE},SnapshotId=USD{snapshot_id}}\" --virtualization-type hvm --ena-support | jq -r .ImageId)",
"aws ec2 create-tags --resources USDami_id --tags Key=Name,Value=\"USDAMI_NAME\"",
"aws ec2 describe-images --owners self",
"aws ec2 describe-security-groups",
"aws ec2 describe-subnets",
"instance_name=rhel-ai-instance ami=<ami-id> instance_type=<instance-type-size> key_name=<key-pair-name> security_group=<sg-id> disk_size=<size-of-disk>",
"aws ec2 run-instances --image-id USDami --instance-type USDinstance_type --key-name USDkey_name --security-group-ids USDsecurity_group --subnet-id USDsubnet --block-device-mappings DeviceName=/dev/sda1,Ebs='{VolumeSize='USDdisk_size'}' --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value='USDinstance_name'}]'",
"ilab",
"ilab Usage: ilab [OPTIONS] COMMAND [ARGS] CLI for interacting with InstructLab. If this is your first time running ilab, it's best to start with `ilab config init` to create the environment. Options: --config PATH Path to a configuration file. [default: /home/cloud--user/.config/instructlab/config.yaml] -v, --verbose Enable debug logging (repeat for even more verbosity) --version Show the version and exit. --help Show this message and exit. Commands: config Command Group for Interacting with the Config of InstructLab. data Command Group for Interacting with the Data generated by model Command Group for Interacting with the Models in InstructLab. system Command group for all system-related command calls. taxonomy Command Group for Interacting with the Taxonomy of InstructLab. Aliases: chat model chat convert model convert diff taxonomy diff download model download evaluate model evaluate generate data generate init config init list model list serve model serve sysinfo system info test model test train model train"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.1/html/installing/installing_on_aws |
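After the instance is running, you would typically look up its public IP address and connect as the default cloud-user account; the tag name matches the example above, while the key file and IP address are placeholders for your own values:

aws ec2 describe-instances --filters "Name=tag:Name,Values=rhel-ai-instance" --query 'Reservations[].Instances[].PublicIpAddress'
ssh -i <key-pair-name>.pem cloud-user@<instance-public-ip>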
Chapter 18. hostname | Chapter 18. hostname The name of the host where this log message originated. In a Kubernetes cluster, this is the same as kubernetes.host . Data type keyword | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/logging/hostname |
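For illustration only, a hypothetical log record carrying this field might look like the following minimal excerpt; the host name shown is a placeholder:

{ "hostname": "worker-0.example.com", "kubernetes": { "host": "worker-0.example.com" }, "message": "..." }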
4.4. GFS2 File System Does Not Mount on Newly Added Cluster Node | 4.4. GFS2 File System Does Not Mount on Newly Added Cluster Node If you add a new node to a cluster and find that you cannot mount your GFS2 file system on that node, you may have fewer journals on the GFS2 file system than nodes attempting to access the GFS2 file system. You must have one journal per GFS2 host you intend to mount the file system on (with the exception of GFS2 file systems mounted with the spectator mount option set, since these do not require a journal). You can add journals to a GFS2 file system with the gfs2_jadd command, as described in Section 3.6, "Adding Journals to a GFS2 File System" . | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/global_file_system_2/s1-nogfs2mount |
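For example, if you add one node to the cluster, you can add one journal to the mounted file system with gfs2_jadd; the mount point below is a placeholder for your own GFS2 mount point:

gfs2_jadd -j 1 /mnt/gfs2    # add one journal to the GFS2 file system mounted at /mnt/gfs2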
Chapter 9. AWS SQS Sink | Chapter 9. AWS SQS Sink Send message to an AWS SQS Queue 9.1. Configuration Options The following table summarizes the configuration options available for the aws-sqs-sink Kamelet: Property Name Description Type Default Example accessKey * Access Key The access key obtained from AWS string queueNameOrArn * Queue Name The SQS Queue name or ARN string region * AWS Region The AWS region to connect to string "eu-west-1" secretKey * Secret Key The secret key obtained from AWS string autoCreateQueue Autocreate Queue Setting the autocreation of the SQS queue. boolean false Note Fields marked with an asterisk (*) are mandatory. 9.2. Dependencies At runtime, the aws-sqs-sink Kamelet relies upon the presence of the following dependencies: camel:aws2-sqs camel:core camel:kamelet 9.3. Usage This section describes how you can use the aws-sqs-sink . 9.3.1. Knative Sink You can use the aws-sqs-sink Kamelet as a Knative sink by binding it to a Knative object. aws-sqs-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: aws-sqs-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: aws-sqs-sink properties: accessKey: "The Access Key" queueNameOrArn: "The Queue Name" region: "eu-west-1" secretKey: "The Secret Key" 9.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 9.3.1.2. Procedure for using the cluster CLI Save the aws-sqs-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f aws-sqs-sink-binding.yaml 9.3.1.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind channel:mychannel aws-sqs-sink -p "sink.accessKey=The Access Key" -p "sink.queueNameOrArn=The Queue Name" -p "sink.region=eu-west-1" -p "sink.secretKey=The Secret Key" This command creates the KameletBinding in the current namespace on the cluster. 9.3.2. Kafka Sink You can use the aws-sqs-sink Kamelet as a Kafka sink by binding it to a Kafka topic. aws-sqs-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: aws-sqs-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: aws-sqs-sink properties: accessKey: "The Access Key" queueNameOrArn: "The Queue Name" region: "eu-west-1" secretKey: "The Secret Key" 9.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Make also sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 9.3.2.2. Procedure for using the cluster CLI Save the aws-sqs-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f aws-sqs-sink-binding.yaml 9.3.2.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic aws-sqs-sink -p "sink.accessKey=The Access Key" -p "sink.queueNameOrArn=The Queue Name" -p "sink.region=eu-west-1" -p "sink.secretKey=The Secret Key" This command creates the KameletBinding in the current namespace on the cluster. 
9.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/aws-sqs-sink.kamelet.yaml | [
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: aws-sqs-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: aws-sqs-sink properties: accessKey: \"The Access Key\" queueNameOrArn: \"The Queue Name\" region: \"eu-west-1\" secretKey: \"The Secret Key\"",
"apply -f aws-sqs-sink-binding.yaml",
"kamel bind channel:mychannel aws-sqs-sink -p \"sink.accessKey=The Access Key\" -p \"sink.queueNameOrArn=The Queue Name\" -p \"sink.region=eu-west-1\" -p \"sink.secretKey=The Secret Key\"",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: aws-sqs-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: aws-sqs-sink properties: accessKey: \"The Access Key\" queueNameOrArn: \"The Queue Name\" region: \"eu-west-1\" secretKey: \"The Secret Key\"",
"apply -f aws-sqs-sink-binding.yaml",
"kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic aws-sqs-sink -p \"sink.accessKey=The Access Key\" -p \"sink.queueNameOrArn=The Queue Name\" -p \"sink.region=eu-west-1\" -p \"sink.secretKey=The Secret Key\""
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.9/html/kamelets_reference/aws-sqs-sink |
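After applying either binding, one way to confirm that it was created and reconciled is to inspect the KameletBinding and the Integration that Camel K generates from it; the resource name matches the examples above, and the exact status output depends on your Camel K version:

oc get kameletbinding aws-sqs-sink-binding
oc get integration aws-sqs-sink-binding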
2.3. Text-Based Installer | 2.3. Text-Based Installer The text-mode installation option in Red Hat Enterprise Linux 6 is significantly more streamlined than it was in earlier versions. Text-mode installation now omits the more complicated steps that were previously part of the process, and provides you with an uncluttered and straightforward experience. This section describes the changes in behavior when using the text-based installer: Anaconda now automatically selects packages only from the base and core groups. These packages are sufficient to ensure that the system is operational at the end of the installation process, ready to install updates and new packages. Anaconda still presents you with the initial screen from previous versions that allows you to specify where Anaconda will install Red Hat Enterprise Linux on your system. You can choose to use a whole drive, to remove existing Linux partitions, or to use the free space on the drive. However, Anaconda now automatically sets the layout of the partitions and does not ask you to add or delete partitions or file systems from this basic layout. If you require a customized layout at installation time, you must perform a graphical installation over a VNC connection or a Kickstart installation. More advanced options, such as logical volume management (LVM), encrypted filesystems, and resizable filesystems, are still only available in graphical mode and Kickstart. Refer to the Red Hat Enterprise Linux Installation Guide for more information on performing a graphical (VNC) installation. Anaconda now performs boot loader configuration automatically in the text-based installer. Text-mode installations using Kickstart are carried out in the same way that they were in previous versions. However, because package selection, advanced partitioning, and boot loader configuration are now automated in text mode, Anaconda cannot prompt you for information that it requires during these steps. You must therefore ensure that the Kickstart file includes the packaging, partitioning, and boot loader configurations. If any of this information is missing, Anaconda will exit with an error message. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/migration_planning_guide/sect-migration_guide-installation-text_install
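Because text mode cannot prompt for this information, a Kickstart file for a text-mode installation has to provide it explicitly. The following fragment is a minimal, illustrative sketch; the partitioning and package choices are placeholder assumptions that you must adapt to your system:

# illustrative Kickstart fragment for a text-mode installation
text
bootloader --location=mbr
clearpart --all --initlabel
autopart
%packages
@core
%end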
Chapter 11. Configuring bridge mappings | Chapter 11. Configuring bridge mappings This chapter contains information about configuring bridge mappings in Red Hat OpenStack Platform. 11.1. Overview of bridge mappings A bridge mapping associates a physical network name (an interface label) to a bridge created with OVS or OVN. In this example, the physical name (datacentre) is mapped to the external bridge (br-ex): Bridge mappings allow provider network traffic to reach the physical network. Traffic leaves the provider network from the qg-xxx interface of the router and arrives at br-int . For OVS, a patch port between br-int and br-ex then allows the traffic to pass through the bridge of the provider network and out to the physical network. OVN creates a patch port on a hypervisor only when there is a VM bound to the hypervisor that requires the port. You configure bridge mappings on the network node on which the router is scheduled. Router traffic can egress using the correct physical network, as represented by the provider network. 11.2. Traffic flow Each external network is represented by an internal VLAN ID, which is tagged to the router qg-xxx port. When a packet reaches phy-br-ex , the br-ex port strips the VLAN tag and moves the packet to the physical interface and then to the external network. The return packet from the external network arrives on br-ex and moves to br-int using phy-br-ex <-> int-br-ex . When the packet is going through br-ex to br-int , the packet's external vlan ID is replaced by an internal vlan tag in br-int , and this allows qg-xxx to accept the packet. In the case of egress packets, the packet's internal vlan tag is replaced with an external vlan tag in br-ex (or in the external bridge that is defined in the network_vlan_ranges parameter). 11.3. Configuring bridge mappings Red Hat OpenStack Platform (RHOSP) director uses predefined NIC templates to install and configure your initial networking configuration. You can customize aspects of your initial networking configuration, such as bridge mappings, by using the NeutronBridgeMappings parameter in a customized environment file. You call the environment file in the openstack overcloud deploy command. Prerequisites You must configure bridge mappings on the network node on which the router is scheduled. For both ML2/OVS and ML2/OVN DVR configurations, you must configure bridge mappings for the compute nodes, too. Procedure Create a custom environment file and add the NeutronBridgeMappings heat parameter with values that are appropriate for your site. The NeutronBridgeMappings heat parameter associates a physical name ( datacentre ) to a bridge ( br-ex ). Note When the NeutronBridgeMappings parameter is not used, the default maps the external bridge on hosts (br-ex) to a physical name (datacentre). To apply this configuration, deploy the overcloud, adding your custom environment file to the stack along with your other environment files. You are ready for the steps, which are the following: Using the network VLAN ranges, create the provider networks that represent the corresponding external networks. (You use the physical name when creating neutron provider networks or floating IP networks.) Connect the external networks to your project networks with router interfaces. Additional resources Network environment parameters in the Advanced Overcloud Customization guide Including Environment Files in Overcloud Creation in the Advanced Overcloud Customization guide 11.4. 
Maintaining bridge mappings for OVS After removing any OVS bridge mappings, you must perform a subsequent cleanup to ensure that the bridge configuration is cleared of any associated patch port entries. You can perform this operation in the following ways: Manual port cleanup - requires careful removal of the superfluous patch ports. No outages of network connectivity are required. Automated port cleanup - performs an automated cleanup, but requires an outage, and requires that the necessary bridge mappings be re-added. Choose this option during scheduled maintenance windows when network connectivity outages can be tolerated. Note When OVN bridge mappings are removed, the OVN controller automatically cleans up any associated patch ports. 11.4.1. Cleaning up OVS patch ports manually After removing any OVS bridge mappings, you must also remove the associated patch ports. Prerequisites The patch ports that you are cleaning up must be Open Virtual Switch (OVS) ports. A system outage is not required to perform a manual patch port cleanup. You can identify the patch ports to clean up by their naming convention: In br-USDexternal_bridge patch ports are named phy-<external bridge name> (for example, phy-br-ex2). In br-int patch ports are named int-<external bridge name> (for example, int-br-ex2 ). Procedure Use ovs-vsctl to remove the OVS patch ports associated with the removed bridge mapping entry: Restart neutron-openvswitch-agent : 11.4.2. Cleaning up OVS patch ports automatically After removing any OVS bridge mappings, you must also remove the associated patch ports. Note When OVN bridge mappings are removed, the OVN controller automatically cleans up any associated patch ports. Prerequisites The patch ports that you are cleaning up must be Open Virtual Switch (OVS) ports. Cleaning up patch ports automatically with the neutron-ovs-cleanup command causes a network connectivity outage, and should be performed only during a scheduled maintenance window. Use the flag --ovs_all_ports to remove all patch ports from br-int , cleaning up tunnel ends from br-tun , and patch ports from bridge to bridge. The neutron-ovs-cleanup command unplugs all patch ports (instances, qdhcp/qrouter, among others) from all OVS bridges. Procedure Run the neutron-ovs-cleanup command with the --ovs_all_ports flag. Important Performing this step results in a total networking outage. Restore connectivity by redeploying the overcloud. When you rerun the openstack overcloud deploy command, your bridge mapping values are reapplied. Note After a restart, the OVS agent does not interfere with any connections that are not present in bridge_mappings. So, if you have br-int connected to br-ex2 , and br-ex2 has some flows on it, removing br-int from the bridge_mappings configuration does not disconnect the two bridges when you restart the OVS agent or the node. Additional resources Network environment parameters in the Advanced Overcloud Customization guide Including Environment Files in Overcloud Creation in the Advanced Overcloud Customization guide | [
"bridge_mappings = datacentre:br-ex",
"parameter_defaults: NeutronBridgeMappings: \"datacentre:br-ex,tenant:br-tenant\"",
"(undercloud) USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/<custom-environment-file>.yaml",
"ovs-vsctl del-port br-ex2 datacentre ovs-vsctl del-port br-tenant tenant",
"service neutron-openvswitch-agent restart",
"/usr/bin/neutron-ovs-cleanup --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --log-file /var/log/neutron/ovs-cleanup.log --ovs_all_ports"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/networking_guide/bridge-mappings |
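As a sketch of the follow-on step described above, the physical network name from the bridge mapping is what you pass when you create the provider network; the VLAN ID, subnet range, and network names below are example values:

openstack network create --external --share --provider-network-type vlan --provider-physical-network datacentre --provider-segment 101 provider-net-101
openstack subnet create --network provider-net-101 --subnet-range 192.0.2.0/24 --gateway 192.0.2.1 provider-subnet-101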
Chapter 12. Using ContainerSource with Service Mesh | Chapter 12. Using ContainerSource with Service Mesh You can use a container source with Service Mesh. 12.1. Configuring ContainerSource with Service Mesh This procedure describes how to configure a container source with Service Mesh. Prerequisites You have set up integration of Service Mesh and Serverless. Procedure Create a Service in a namespace that is a member of the ServiceMeshMemberRoll : Example event-display-service.yaml configuration file apiVersion: serving.knative.dev/v1 kind: Service metadata: name: event-display namespace: <namespace> 1 spec: template: metadata: annotations: sidecar.istio.io/inject: "true" 2 sidecar.istio.io/rewriteAppHTTPProbers: "true" spec: containers: - image: quay.io/openshift-knative/knative-eventing-sources-event-display:latest 1 A namespace that is a member of the ServiceMeshMemberRoll . 2 This annotation injects Service Mesh sidecars into the Knative service pods. Apply the Service resource: USD oc apply -f event-display-service.yaml Create a ContainerSource object in a namespace that is a member of the ServiceMeshMemberRoll , with the sink set to the event-display Service: Example test-heartbeats-containersource.yaml configuration file apiVersion: sources.knative.dev/v1 kind: ContainerSource metadata: name: test-heartbeats namespace: <namespace> 1 spec: template: metadata: 2 annotations: sidecar.istio.io/inject: "true" sidecar.istio.io/rewriteAppHTTPProbers: "true" spec: containers: # This corresponds to a heartbeats image URI that you have built and published - image: quay.io/openshift-knative/heartbeats name: heartbeats args: - --period=1s env: - name: POD_NAME value: "example-pod" - name: POD_NAMESPACE value: "event-test" sink: ref: apiVersion: serving.knative.dev/v1 kind: Service name: event-display 1 A namespace that is part of the ServiceMeshMemberRoll . 2 These annotations enable Service Mesh integration with the ContainerSource object. Apply the ContainerSource resource: USD oc apply -f test-heartbeats-containersource.yaml Optional: Verify that the events were sent to the Knative event sink by looking at the message dumper function logs: Example command USD oc logs USD(oc get pod -o name | grep event-display) -c user-container Example output ☁️ cloudevents.Event Validation: valid Context Attributes, specversion: 1.0 type: dev.knative.eventing.samples.heartbeat source: https://knative.dev/eventing-contrib/cmd/heartbeats/#event-test/mypod id: 2b72d7bf-c38f-4a98-a433-608fbcdd2596 time: 2019-10-18T15:23:20.809775386Z contenttype: application/json Extensions, beats: true heart: yes the: 42 Data, { "id": 1, "label": "" } | [
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: event-display namespace: <namespace> 1 spec: template: metadata: annotations: sidecar.istio.io/inject: \"true\" 2 sidecar.istio.io/rewriteAppHTTPProbers: \"true\" spec: containers: - image: quay.io/openshift-knative/knative-eventing-sources-event-display:latest",
"oc apply -f event-display-service.yaml",
"apiVersion: sources.knative.dev/v1 kind: ContainerSource metadata: name: test-heartbeats namespace: <namespace> 1 spec: template: metadata: 2 annotations: sidecar.istio.io/inject\": \"true\" sidecar.istio.io/rewriteAppHTTPProbers: \"true\" spec: containers: # This corresponds to a heartbeats image URI that you have built and published - image: quay.io/openshift-knative/heartbeats name: heartbeats args: - --period=1s env: - name: POD_NAME value: \"example-pod\" - name: POD_NAMESPACE value: \"event-test\" sink: ref: apiVersion: serving.knative.dev/v1 kind: Service name: event-display-service",
"oc apply -f test-heartbeats-containersource.yaml",
"oc logs USD(oc get pod -o name | grep event-display) -c user-container",
"☁\\ufe0f cloudevents.Event Validation: valid Context Attributes, specversion: 1.0 type: dev.knative.eventing.samples.heartbeat source: https://knative.dev/eventing-contrib/cmd/heartbeats/#event-test/mypod id: 2b72d7bf-c38f-4a98-a433-608fbcdd2596 time: 2019-10-18T15:23:20.809775386Z contenttype: application/json Extensions, beats: true heart: yes the: 42 Data, { \"id\": 1, \"label\": \"\" }"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.33/html/eventing/containersource-with-ossm |
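Both of the resources above must live in a namespace that is listed in the ServiceMeshMemberRoll. As a reminder of what that membership looks like, the following is a minimal sketch; the control plane namespace istio-system and the member namespace are assumptions to adapt to your mesh:

apiVersion: maistra.io/v1
kind: ServiceMeshMemberRoll
metadata:
  name: default
  namespace: istio-system
spec:
  members:
    - <namespace>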
Chapter 1. OpenShift Dedicated CLI tools overview | Chapter 1. OpenShift Dedicated CLI tools overview A user performs a range of operations while working on OpenShift Dedicated such as the following: Managing clusters Building, deploying, and managing applications Managing deployment processes Developing Operators Creating and maintaining Operator catalogs OpenShift Dedicated offers a set of command-line interface (CLI) tools that simplify these tasks by enabling users to perform various administration and development operations from the terminal. These tools expose simple commands to manage the applications, as well as interact with each component of the system. 1.1. List of CLI tools The following set of CLI tools are available in OpenShift Dedicated: OpenShift CLI ( oc ) : This is the most commonly used CLI tool by OpenShift Dedicated users. It helps both cluster administrators and developers to perform end-to-end operations across OpenShift Dedicated using the terminal. Unlike the web console, it allows the user to work directly with the project source code using command scripts. Knative CLI (kn) : The Knative ( kn ) CLI tool provides simple and intuitive terminal commands that can be used to interact with OpenShift Serverless components, such as Knative Serving and Eventing. Pipelines CLI (tkn) : OpenShift Pipelines is a continuous integration and continuous delivery (CI/CD) solution in OpenShift Dedicated, which internally uses Tekton. The tkn CLI tool provides simple and intuitive commands to interact with OpenShift Pipelines using the terminal. opm CLI : The opm CLI tool helps the Operator developers and cluster administrators to create and maintain the catalogs of Operators from the terminal. Operator SDK : The Operator SDK, a component of the Operator Framework, provides a CLI tool that Operator developers can use to build, test, and deploy an Operator from the terminal. It simplifies the process of building Kubernetes-native applications, which can require deep, application-specific operational knowledge. | null | https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/cli_tools/cli-tools-overview |
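As a quick illustration of how these tools are invoked from a terminal (the cluster URL is a placeholder and the listings depend on what is installed in your cluster), the typical entry points look like this:

oc login https://api.<cluster_domain>:6443        # OpenShift CLI
kn service list                                   # Knative CLI
tkn pipelinerun list                              # Pipelines CLI
opm --help                                        # opm CLI
operator-sdk init --help                          # Operator SDK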
Chapter 8. Creating flavors for launching instances | Chapter 8. Creating flavors for launching instances An instance flavor is a resource template that specifies the virtual hardware profile for the instance. Cloud users must specify a flavor when they launch an instance. A flavor can specify the quantity of the following resources the Compute service must allocate to an instance: The number of vCPUs. The RAM, in MB. The root disk, in GB. The virtual storage, including secondary ephemeral storage and swap disk. You can specify who can use flavors by making the flavor public to all projects, or private to specific projects or domains. Flavors can use metadata, also referred to as "extra specs", to specify instance hardware support and quotas. The flavor metadata influences the instance placement, resource usage limits, and performance. For a complete list of available metadata properties, see Flavor metadata . You can also use the flavor metadata keys to find a suitable host aggregate to host the instance, by matching the extra_specs metadata set on the host aggregate. To schedule an instance on a host aggregate, you must scope the flavor metadata by prefixing the extra_specs key with the aggregate_instance_extra_specs: namespace. For more information, see Creating and managing host aggregates . A Red Hat OpenStack Platform (RHOSP) deployment includes the following set of default public flavors that your cloud users can use. Table 8.1. Default Flavors Name vCPUs RAM Root Disk Size m1.nano 1 128 MB 1 GB m1.micro 1 192 MB 1 GB Note Behavior set using flavor properties override behavior set using images. When a cloud user launches an instance, the properties of the flavor they specify override the properties of the image they specify. 8.1. Creating a flavor You can create and manage specialized flavors for specific functionality or behaviors, for example: Change default memory and capacity to suit the underlying hardware needs. Add metadata to force a specific I/O rate for the instance or to match a host aggregate. Procedure Create a flavor that specifies the basic resources to make available to an instance: Replace <size_mb> with the size of RAM to allocate to an instance created with this flavor. Replace <size_gb> with the size of root disk to allocate to an instance created with this flavor. Replace <no_vcpus> with the number of vCPUs to reserve for an instance created with this flavor. Optional: Specify the --private and --project options to make the flavor accessible only by a particular project or group of users. Replace <project_id> with the ID of the project that can use this flavor to create instances. If you do not specify the accessibility, the flavor defaults to public, which means that it is available to all projects. Note You cannot make a public flavor private after it has been created. Replace <flavor_name> with a unique name for your flavor. For more information about flavor arguments, see Flavor arguments . Optional: To specify flavor metadata, set the required properties by using key-value pairs: Replace <key> with the metadata key of the property you want to allocate to an instance that is created with this flavor. For a list of available metadata keys, see Flavor metadata . Replace <value> with the value of the metadata key you want to allocate to an instance that is created with this flavor. Replace <flavor_name> with the name of your flavor. For example, an instance that is launched by using the following flavor has two CPU sockets, each with two CPUs: 8.2. 
Flavor arguments The openstack flavor create command has one positional argument, <flavor_name> , to specify the name of your new flavor. The following table details the optional arguments that you can specify as required when you create a new flavor. Table 8.2. Optional flavor arguments Optional argument Description --id Unique ID for the flavor. The default value, auto , generates a UUID4 value. You can use this argument to manually specify an integer or UUID4 value. --ram (Mandatory) Size of memory to make available to the instance, in MB. Default: 256 MB --disk (Mandatory) Amount of disk space to use for the root (/) partition, in GB. The root disk is an ephemeral disk that the base image is copied into. When an instance boots from a persistent volume, the root disk is not used. Note Creation of an instance with a flavor that has --disk set to 0 requires that the instance boots from volume. Default: 0 GB --ephemeral Amount of disk space to use for the ephemeral disks, in GB. Defaults to 0 GB, which means that no secondary ephemeral disk is created. Ephemeral disks offer machine local disk storage linked to the lifecycle of the instance. Ephemeral disks are not included in any snapshots. This disk is destroyed and all data is lost when the instance is deleted. Default: 0 GB --swap Swap disk size in MB. Do not specify swap in a flavor if the Compute service back end storage is not local storage. Default: 0 GB --vcpus (Mandatory) Number of virtual CPUs for the instance. Default: 1 --public The flavor is available to all projects. By default, a flavor is public and available to all projects. --private The flavor is only available to the projects specified by using the --project option. If you create a private flavor but add no projects to it then the flavor is only available to the cloud administrator. --property Metadata, or "extra specs", specified by using key-value pairs in the following format: --property <key=value> Repeat this option to set multiple properties. --project Specifies the project that can use the private flavor. You must use this argument with the --private option. If you do not specify any projects, the flavor is visible only to the admin user. Repeat this option to allow access to multiple projects. --project-domain Specifies the project domain that can use the private flavor. You must use this argument with the --private option. Repeat this option to allow access to multiple project domains. --description Description of the flavor. Limited to 65535 characters in length. You can use only printable characters. 8.3. Flavor metadata Use the --property option to specify flavor metadata when you create a flavor. Flavor metadata is also referred to as extra specs . Flavor metadata determines instance hardware support and quotas, which influence instance placement, instance limits, and performance. Instance resource usage Use the property keys in the following table to configure limits on CPU, memory and disk I/O usage by instances. Table 8.3. Flavor metadata for resource usage Key Description quota:cpu_shares Specifies the proportional weighted share of CPU time for the domain. Defaults to the OS provided defaults. The Compute scheduler weighs this value relative to the setting of this property on other instances in the same domain. For example, an instance that is configured with quota:cpu_shares=2048 is allocated double the CPU time as an instance that is configured with quota:cpu_shares=1024 . 
quota:cpu_period Specifies the period of time within which to enforce the cpu_quota , in microseconds. Within the cpu_period , each vCPU cannot consume more than cpu_quota of runtime. Set to a value in the range 1000 - 1000000. Set to 0 to disable. quota:cpu_quota Specifies the maximum allowed bandwidth for the vCPU in each cpu_period , in microseconds: Set to a value in the range 1000 - 18446744073709551. Set to 0 to disable. Set to a negative value to allow infinite bandwidth. You can use cpu_quota and cpu_period to ensure that all vCPUs run at the same speed. For example, you can use the following flavor to launch an instance that can consume a maximum of only 50% CPU of a physical CPU computing capability: Instance disk tuning Use the property keys in the following table to tune the instance disk performance. Note The Compute service applies the following quality of service settings to storage that the Compute service has provisioned, such as ephemeral storage. To tune the performance of Block Storage (cinder) volumes, you must also configure Quality-of-Service (QOS) values for the volume type. For more information, see Use Quality-of-Service Specifications in the Storage Guide . Table 8.4. Flavor metadata for disk tuning Key Description quota:disk_read_bytes_sec Specifies the maximum disk reads available to an instance, in bytes per second. quota:disk_read_iops_sec Specifies the maximum disk reads available to an instance, in IOPS. quota:disk_write_bytes_sec Specifies the maximum disk writes available to an instance, in bytes per second. quota:disk_write_iops_sec Specifies the maximum disk writes available to an instance, in IOPS. quota:disk_total_bytes_sec Specifies the maximum I/O operations available to an instance, in bytes per second. quota:disk_total_iops_sec Specifies the maximum I/O operations available to an instance, in IOPS. Instance network traffic bandwidth Use the property keys in the following table to configure bandwidth limits on the instance network traffic by configuring the VIF I/O options. Note The quota :vif_* properties are deprecated. Instead, you should use the Networking (neutron) service Quality of Service (QoS) policies. For more information about QoS policies, see Configuring Quality of Service (QoS) policies in the Networking Guide . The quota:vif_* properties are only supported when you use the ML2/OVS mechanism driver with NeutronOVSFirewallDriver set to iptables_hybrid . Table 8.5. Flavor metadata for bandwidth limits Key Description quota:vif_inbound_average (Deprecated) Specifies the required average bit rate on the traffic incoming to the instance, in kbps. quota:vif_inbound_burst (Deprecated) Specifies the maximum amount of incoming traffic that can be burst at peak speed, in KB. quota:vif_inbound_peak (Deprecated) Specifies the maximum rate at which the instance can receive incoming traffic, in kbps. quota:vif_outbound_average (Deprecated) Specifies the required average bit rate on the traffic outgoing from the instance, in kbps. quota:vif_outbound_burst (Deprecated) Specifies the maximum amount of outgoing traffic that can be burst at peak speed, in KB. quota:vif_outbound_peak (Deprecated) Specifies the maximum rate at which the instance can send outgoing traffic, in kbps. Hardware video RAM Use the property key in the following table to configure limits on the instance RAM to use for video devices. Table 8.6. Flavor metadata for video devices Key Description hw_video:ram_max_mb Specifies the maximum RAM to use for video devices, in MB. 
Use with the hw_video_ram image property. hw_video_ram must be less than or equal to hw_video:ram_max_mb . Watchdog behavior Use the property key in the following table to enable the virtual hardware watchdog device on the instance. Table 8.7. Flavor metadata for watchdog behavior Key Description hw:watchdog_action Specify to enable the virtual hardware watchdog device and set its behavior. Watchdog devices perform the configured action if the instance hangs or fails. The watchdog uses the i6300esb device, which emulates a PCI Intel 6300ESB. If hw:watchdog_action is not specified, the watchdog is disabled. Set to one of the following valid values: disabled : (Default) The device is not attached. reset : Force instance reset. poweroff : Force instance shut down. pause : Pause the instance. none : Enable the watchdog, but do nothing if the instance hangs or fails. Note Watchdog behavior that you set by using the properties of a specific image override behavior that you set by using flavors. Random number generator (RNG) Use the property keys in the following table to enable the RNG device on the instance. Table 8.8. Flavor metadata for RNG Key Description hw_rng:allowed Set to False to disable the RNG device that is added to the instance through its image properties. Default: True hw_rng:rate_bytes Specifies the maximum number of bytes that the instance can read from the entropy of the host, per period. hw_rng:rate_period Specifies the duration of the read period in milliseconds. Virtual Performance Monitoring Unit (vPMU) Use the property key in the following table to enable the vPMU for the instance. Table 8.9. Flavor metadata for vPMU Key Description hw:pmu Set to True to enable a vPMU for the instance. Tools such as perf use the vPMU on the instance to provide more accurate information to profile and monitor instance performance. For realtime workloads, the emulation of a vPMU can introduce additional latency which might be undesirable. If the telemetry it provides is not required, set hw:pmu=False . Instance CPU topology Use the property keys in the following table to define the topology of the processors in the instance. Table 8.10. Flavor metadata for CPU topology Key Description hw:cpu_sockets Specifies the preferred number of sockets for the instance. Default: the number of vCPUs requested hw:cpu_cores Specifies the preferred number of cores per socket for the instance. Default: 1 hw:cpu_threads Specifies the preferred number of threads per core for the instance. Default: 1 hw:cpu_max_sockets Specifies the maximum number of sockets that users can select for their instances by using image properties. Example: hw:cpu_max_sockets=2 hw:cpu_max_cores Specifies the maximum number of cores per socket that users can select for their instances by using image properties. hw:cpu_max_threads Specifies the maximum number of threads per core that users can select for their instances by using image properties. Serial ports Use the property key in the following table to configure the number of serial ports per instance. Table 8.11. Flavor metadata for serial ports Key Description hw:serial_port_count Maximum serial ports per instance. CPU pinning policy By default, instance virtual CPUs (vCPUs) are sockets with one core and one thread. You can use properties to create flavors that pin the vCPUs of instances to the physical CPU cores (pCPUs) of the host. 
You can also configure the behavior of hardware CPU threads in a simultaneous multithreading (SMT) architecture where one or more cores have thread siblings. Use the property keys in the following table to define the CPU pinning policy of the instance. Table 8.12. Flavor metadata for CPU pinning Key Description hw:cpu_policy Specifies the CPU policy to use. Set to one of the following valid values: shared : (Default) The instance vCPUs float across host pCPUs. dedicated : Pin the instance vCPUs to a set of host pCPUs. This creates an instance CPU topology that matches the topology of the CPUs to which the instance is pinned. This option implies an overcommit ratio of 1.0. hw:cpu_thread_policy Specifies the CPU thread policy to use when hw:cpu_policy=dedicated . Set to one of the following valid values: prefer : (Default) The host might or might not have an SMT architecture. If an SMT architecture is present, the Compute scheduler gives preference to thread siblings. isolate : The host must not have an SMT architecture or must emulate a non-SMT architecture. This policy ensures that the Compute scheduler places the instance on a host without SMT by requesting hosts that do not report the HW_CPU_HYPERTHREADING trait. It is also possible to request this trait explicitly by using the following property: If the host does not have an SMT architecture, the Compute service places each vCPU on a different core as expected. If the host does have an SMT architecture, then the behaviour is determined by the configuration of the [workarounds]/disable_fallback_pcpu_query parameter: True : The host with an SMT architecture is not used and scheduling fails. False : The Compute service places each vCPU on a different physical core. The Compute service does not place vCPUs from other instances on the same core. All but one thread sibling on each used core is therefore guaranteed to be unusable. require : The host must have an SMT architecture. This policy ensures that the Compute scheduler places the instance on a host with SMT by requesting hosts that report the HW_CPU_HYPERTHREADING trait. It is also possible to request this trait explicitly by using the following property: The Compute service allocates each vCPU on thread siblings. If the host does not have an SMT architecture, then it is not used. If the host has an SMT architecture, but not enough cores with free thread siblings are available, then scheduling fails. Instance PCI NUMA affinity policy Use the property key in the following table to create flavors that specify the NUMA affinity policy for PCI passthrough devices and SR-IOV interfaces. Table 8.13. Flavor metadata for PCI NUMA affinity policy Key Description hw:pci_numa_affinity_policy Specifies the NUMA affinity policy for PCI passthrough devices and SR-IOV interfaces. Set to one of the following valid values: required : The Compute service creates an instance that requests a PCI device only when at least one of the NUMA nodes of the instance has affinity with the PCI device. This option provides the best performance. preferred : The Compute service attempts a best effort selection of PCI devices based on NUMA affinity. If this is not possible, then the Compute service schedules the instance on a NUMA node that has no affinity with the PCI device. legacy : (Default) The Compute service creates instances that request a PCI device in one of the following cases: The PCI device has affinity with at least one of the NUMA nodes. 
The PCI devices do not provide information about their NUMA affinities. Instance NUMA topology You can use properties to create flavors that define the host NUMA placement for the instance vCPU threads, and the allocation of instance vCPUs and memory from the host NUMA nodes. Defining a NUMA topology for the instance improves the performance of the instance OS for flavors whose memory and vCPU allocations are larger than the size of NUMA nodes in the Compute hosts. The Compute scheduler uses these properties to determine a suitable host for the instance. For example, a cloud user launches an instance by using the following flavor: The Compute scheduler searches for a host that has two NUMA nodes, one with 3GB of RAM and the ability to run six CPUs, and the other with 1GB of RAM and two CPUS. If a host has a single NUMA node with capability to run eight CPUs and 4GB of RAM, the Compute scheduler does not consider it a valid match. Note NUMA topologies defined by a flavor cannot be overridden by NUMA topologies defined by the image. The Compute service raises an ImageNUMATopologyForbidden error if the image NUMA topology conflicts with the flavor NUMA topology. Caution You cannot use this feature to constrain instances to specific host CPUs or NUMA nodes. Use this feature only after you complete extensive testing and performance measurements. You can use the hw:pci_numa_affinity_policy property instead. Use the property keys in the following table to define the instance NUMA topology. Table 8.14. Flavor metadata for NUMA topology Key Description hw:numa_nodes Specifies the number of host NUMA nodes to restrict execution of instance vCPU threads to. If not specified, the vCPU threads can run on any number of the available host NUMA nodes. hw:numa_cpus.N A comma-separated list of instance vCPUs to map to instance NUMA node N. If this key is not specified, vCPUs are evenly divided among available NUMA nodes. N starts from 0. Use *.N values with caution, and only if you have at least two NUMA nodes. This property is valid only if you have set hw:numa_nodes , and is required only if the NUMA nodes of the instance have an asymmetrical allocation of CPUs and RAM, which is important for some NFV workloads. hw:numa_mem.N The number of MB of instance memory to map to instance NUMA node N. If this key is not specified, memory is evenly divided among available NUMA nodes. N starts from 0. Use *.N values with caution, and only if you have at least two NUMA nodes. This property is valid only if you have set hw:numa_nodes , and is required only if the NUMA nodes of the instance have an asymmetrical allocation of CPUs and RAM, which is important for some NFV workloads. Warning If the combined values of hw:numa_cpus.N or hw:numa_mem.N are greater than the available number of CPUs or memory respectively, the Compute service raises an exception. Instance memory encryption Use the property key in the following table to enable encryption of instance memory. Table 8.15. Flavor metadata for memory encryption Key Description hw:mem_encryption Set to True to request memory encryption for the instance. For more information, see Configuring AMD SEV Compute nodes to provide memory encryption for instances . CPU real-time policy Use the property keys in the following table to define the real-time policy of the processors in the instance. 
Note Although most of your instance vCPUs can run with a real-time policy, you must mark at least one vCPU as non-real-time to use for both non-real-time guest processes and emulator overhead processes. To use this extra spec, you must enable pinned CPUs. Table 8.16. Flavor metadata for CPU real-time policy Key Description hw:cpu_realtime Set to yes to create a flavor that assigns a real-time policy to the instance vCPUs. Default: no hw:cpu_realtime_mask Specifies the vCPUs to not assign a real-time policy to. You must prepend the mask value with a caret symbol (^). The following example indicates that all vCPUs except vCPUs 0 and 1 have a real-time policy: Note If the hw_cpu_realtime_mask property is set on the image then it takes precedence over the hw:cpu_realtime_mask property set on the flavor. Emulator threads policy You can assign a pCPU to an instance to use for emulator threads. Emulator threads are emulator processes that are not directly related to the instance. A dedicated emulator thread pCPU is required for real-time workloads. To use the emulator threads policy, you must enable pinned CPUs by setting the following property: Use the property key in the following table to define the emulator threads policy of the instance. Table 8.17. Flavor metadata for the emulator threads policy Key Description hw:emulator_threads_policy Specifies the emulator threads policy to use for instances. Set to one of the following valid values: share : The emulator thread floats across the pCPUs defined in the NovaComputeCpuSharedSet heat parameter. If NovaComputeCpuSharedSet is not configured, then the emulator thread floats across the pinned CPUs that are associated with the instance. isolate : Reserves an additional dedicated pCPU per instance for the emulator thread. Use this policy with caution, as it is prohibitively resource intensive. unset: (Default) The emulator thread policy is not enabled, and the emulator thread floats across the pinned CPUs associated with the instance. Instance memory page size Use the property keys in the following table to create an instance with an explicit memory page size. Table 8.18. Flavor metadata for memory page size Key Description hw:mem_page_size Specifies the size of large pages to use to back the instances. Use of this option creates an implicit NUMA topology of 1 NUMA node unless otherwise specified by hw:numa_nodes . Set to one of the following valid values: large : Selects a page size larger than the smallest page size supported on the host, which can be 2 MB or 1 GB on x86_64 systems. small : Selects the smallest page size supported on the host. On x86_64 systems this is 4 kB (normal pages). any : Selects the largest available huge page size, as determined by the libvirt driver. <pagesize> : (String) Sets an explicit page size if the workload has specific requirements. Use an integer value for the page size in KB, or any standard suffix. For example: 4KB , 2MB , 2048 , 1GB . unset: (Default) Large pages are not used to back instances and no implicit NUMA topology is generated. PCI passthrough Use the property key in the following table to attach a physical PCI device, such as a graphics card or a network device, to an instance. For more information about using PCI passthrough, see Configuring PCI passthrough . Table 8.19. 
Flavor metadata for PCI passthrough Key Description pci_passthrough:alias Specifies the PCI device to assign to an instance by using the following format: Replace <alias> with the alias that corresponds to a particular PCI device class. Replace <count> with the number of PCI devices of type <alias> to assign to the instance. Hypervisor signature Use the property key in the following table to hide the hypervisor signature from the instance. Table 8.20. Flavor metadata for hiding hypervisor signature Key Description hide_hypervisor_id Set to True to hide the hypervisor signature from the instance, to allow all drivers to load and work on the instance. Instance resource traits Each resource provider has a set of traits. Traits are the qualitative aspects of a resource provider, for example, the type of storage disk, or the Intel CPU instruction set extension. An instance can specify which of these traits it requires. The traits that you can specify are defined in the os-traits library. Example traits include the following: COMPUTE_TRUSTED_CERTS COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG COMPUTE_IMAGE_TYPE_RAW HW_CPU_X86_AVX HW_CPU_X86_AVX512VL HW_CPU_X86_AVX512CD For details about how to use the os-traits library, see https://docs.openstack.org/os-traits/latest/user/index.html . Use the property key in the following table to define the resource traits of the instance. Table 8.21. Flavor metadata for resource traits Key Description trait:<trait_name> Specifies Compute node traits. Set the trait to one of the following valid values: required : The Compute node selected to host the instance must have the trait. forbidden : The Compute node selected to host the instance must not have the trait. Example: Instance bare-metal resource class Use the property key in the following table to request a bare-metal resource class for an instance. Table 8.22. Flavor metadata for bare-metal resource class Key Description resources:<resource_class_name> Use this property to override the values of standard bare-metal resource classes, or to specify custom bare-metal resource classes that the instance requires. The standard resource classes that you can override are VCPU , MEMORY_MB and DISK_GB . To prevent the Compute scheduler from using the bare-metal flavor properties for scheduling instances, set the value of the standard resource classes to 0 . The name of custom resource classes must start with CUSTOM_ . To determine the name of a custom resource class that corresponds to a resource class of a Bare Metal service node, convert the resource class to uppercase, replace all punctuation with an underscore, and prefix with CUSTOM_. For example, to schedule instances on a node that has --resource-class baremetal.SMALL , create the following flavor: | [
"(overcloud)USD openstack flavor create --ram <size_mb> --disk <size_gb> --vcpus <no_vcpus> [--private --project <project_id>] <flavor_name>",
"(overcloud)USD openstack flavor set --property <key=value> --property <key=value> ... <flavor_name>",
"(overcloud)USD openstack flavor set --property hw:cpu_sockets=2 --property hw:cpu_cores=2 processor_topology_flavor",
"openstack flavor set cpu_limits_flavor --property quota:cpu_quota=10000 --property quota:cpu_period=20000",
"--property trait:HW_CPU_HYPERTHREADING=forbidden",
"--property trait:HW_CPU_HYPERTHREADING=required",
"openstack flavor set numa_top_flavor --property hw:numa_nodes=2 --property hw:numa_cpus.0=0,1,2,3,4,5 --property hw:numa_cpus.1=6,7 --property hw:numa_mem.0=3072 --property hw:numa_mem.1=1024",
"openstack flavor set <flavor> --property hw:cpu_realtime=\"yes\" --property hw:cpu_realtime_mask=^0-1",
"--property hw:cpu_policy=dedicated",
"<alias>:<count>",
"openstack flavor set --property trait:HW_CPU_X86_AVX512BW=required avx512-flavor",
"openstack flavor set --property resources:CUSTOM_BAREMETAL_SMALL=1 --property resources:VCPU=0 --property resources:MEMORY_MB=0 --property resources:DISK_GB=0 compute-small"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/configuring_the_compute_service_for_instance_creation/assembly_creating-flavors-for-launching-instances_instance-flavors |
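The row above documents the flavor property keys individually, and the commands column only shows them one at a time. As a combined illustration, the following sketch builds a single NFV-style flavor that uses several of those keys together. The flavor name (nfv-realtime) and all sizes, CPU numbers, and NUMA splits are hypothetical values chosen for this example, not values from the source document; the property keys and the openstack flavor commands are the ones documented above.

    # Create the flavor shell: 8 vCPUs, 8 GB RAM, 20 GB disk (illustrative sizes)
    (overcloud)$ openstack flavor create --ram 8192 --disk 20 --vcpus 8 nfv-realtime
    # Pin the vCPUs and back the instance memory with 1 GB huge pages
    (overcloud)$ openstack flavor set nfv-realtime \
      --property hw:cpu_policy=dedicated \
      --property hw:mem_page_size=1GB
    # Split the guest across two NUMA nodes with an asymmetric CPU and memory layout
    (overcloud)$ openstack flavor set nfv-realtime \
      --property hw:numa_nodes=2 \
      --property hw:numa_cpus.0=0,1,2,3,4,5 --property hw:numa_mem.0=6144 \
      --property hw:numa_cpus.1=6,7 --property hw:numa_mem.1=2048
    # Mark every vCPU except vCPU 0 as real-time and isolate the emulator thread on its own pCPU
    (overcloud)$ openstack flavor set nfv-realtime \
      --property hw:cpu_realtime=yes \
      --property hw:cpu_realtime_mask=^0 \
      --property hw:emulator_threads_policy=isolate

The values are chosen to stay consistent with the warning in Table 8.14: hw:numa_mem.0 and hw:numa_mem.1 sum to the flavor RAM (8192 MB), and the two hw:numa_cpus lists cover all eight vCPUs.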
Chapter 12. Grouping LVM objects with tags | Chapter 12. Grouping LVM objects with tags You can assign tags to Logical Volume Manager (LVM) objects to group them. With this feature, you can automate the control of LVM behavior, such as activation, by a group. You can also use tags instead of LVM object arguments. 12.1. LVM object tags A Logical Volume Manager (LVM) tag groups LVM objects of the same type. You can attach tags to objects such as physical volumes, volume groups, and logical volumes . To avoid ambiguity, prefix each tag with @ . Each tag is expanded by replacing it with all the objects that possess that tag and that are of the type expected by its position on the command line. LVM tags are strings of up to 1024 characters. LVM tags cannot start with a hyphen. A valid tag consists of a limited range of characters only. The allowed characters are A-Z a-z 0-9 _ + . - / = ! : # & . Only objects in a volume group can be tagged. Physical volumes lose their tags if they are removed from a volume group; this is because tags are stored as part of the volume group metadata and that is deleted when a physical volume is removed. You can apply some commands to all volume groups (VG), logical volumes (LV), or physical volumes (PV) that have the same tag. The man page of the given command shows the syntax, such as VG|Tag , LV|Tag , or PV|Tag , where you can substitute a tag name for a VG, LV, or PV name. 12.2. Adding tags to LVM objects You can add tags to LVM objects to group them by using the --addtag option with various volume management commands. Prerequisites The lvm2 package is installed. Procedure To add a tag to an existing PV, use: To add a tag to an existing VG, use: To add a tag to a VG during creation, use: To add a tag to an existing LV, use: To add a tag to an LV during creation, use: 12.3. Removing tags from LVM objects If you no longer want to keep your LVM objects grouped, you can remove tags from the objects by using the --deltag option with various volume management commands. Prerequisites The lvm2 package is installed. You have created tags on physical volumes (PV), volume groups (VG), or logical volumes (LV). Procedure To remove a tag from an existing PV, use: To remove a tag from an existing VG, use: To remove a tag from an existing LV, use: 12.4. Displaying tags on LVM objects You can display tags on your LVM objects with the following commands. Prerequisites The lvm2 package is installed. You have created tags on physical volumes (PV), volume groups (VG), or logical volumes (LV). Procedure To display all tags on an existing PV, use: To display all tags on an existing VG, use: To display all tags on an existing LV, use: 12.5. Controlling logical volume activation with tags This procedure describes how to specify in the configuration file that only certain logical volumes should be activated on that host. Procedure For example, the following entry acts as a filter for activation requests (such as vgchange -ay ) and only activates vg1/lvol0 and any logical volumes or volume groups with the database tag in the metadata on that host: The special match @* causes a match only if any metadata tag matches any host tag on that machine. As another example, consider a situation where every machine in the cluster has the following entry in the configuration file: If you want to activate vg1/lvol2 only on host db2 , do the following: Run lvchange --addtag @db2 vg1/lvol2 from any host in the cluster. Run lvchange -ay vg1/lvol2 .
This solution involves storing host names inside the volume group metadata. | [
"pvchange --addtag <@tag> <PV>",
"vgchange --addtag <@tag> <VG>",
"vgcreate --addtag <@tag> <VG>",
"lvchange --addtag <@tag> <LV>",
"lvcreate --addtag <@tag>",
"pvchange --deltag @tag PV",
"vgchange --deltag @tag VG",
"lvchange --deltag @tag LV",
"pvs -o tags <PV>",
"vgs -o tags <VG>",
"lvs -o tags <LV>",
"activation { volume_list = [\"vg1/lvol0\", \"@database\" ] }",
"tags { hosttags = 1 }"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_logical_volumes/grouping-lvm-objects-with-tags_configuring-and-managing-logical-volumes |
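The commands column above lists each tagging command in isolation; the following is a short end-to-end sketch of how they fit together. The device path, the names vg_app and lv_data, and the tag @database are hypothetical examples, not values from the source document.

    # Tag the volume group at creation time, then create a tagged logical volume in it
    vgcreate --addtag @database vg_app /dev/sdb1
    lvcreate --addtag @database -n lv_data -L 10G vg_app
    # Display the tags on both objects
    vgs -o +tags vg_app
    lvs -o +tags vg_app/lv_data
    # Activate every logical volume that carries the @database tag
    lvchange -ay @database

Because a tag can stand in for an LV argument, the final command activates vg_app/lv_data together with any other logical volumes tagged @database, which is the same grouping behavior that the volume_list activation filter relies on.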
Chapter 3. Installing power monitoring for Red Hat OpenShift | Chapter 3. Installing power monitoring for Red Hat OpenShift Important Power monitoring is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can install power monitoring for Red Hat OpenShift by deploying the Power monitoring Operator in the OpenShift Container Platform web console. 3.1. Installing the Power monitoring Operator As a cluster administrator, you can install the Power monitoring Operator from OperatorHub by using the OpenShift Container Platform web console. Warning You must remove any previously installed versions of the Power monitoring Operator before installation. Prerequisites You have access to the OpenShift Container Platform web console. You are logged in as a user with the cluster-admin role. Procedure In the Administrator perspective of the web console, go to Operators OperatorHub . Search for power monitoring , click the Power monitoring for Red Hat OpenShift tile, and then click Install . Click Install again to install the Power monitoring Operator. Power monitoring for Red Hat OpenShift is now available in all namespaces of the OpenShift Container Platform cluster. Verification Verify that the Power monitoring Operator is listed in Operators Installed Operators . The Status should resolve to Succeeded . 3.2. Deploying Kepler You can deploy Kepler by creating an instance of the Kepler custom resource definition (CRD) by using the Power monitoring Operator. Prerequisites You have access to the OpenShift Container Platform web console. You are logged in as a user with the cluster-admin role. You have installed the Power monitoring Operator. Procedure In the Administrator perspective of the web console, go to Operators Installed Operators . Click Power monitoring for Red Hat OpenShift from the Installed Operators list and go to the Kepler tab. Click Create Kepler . On the Create Kepler page, ensure the Name is set to kepler . Important The name of your Kepler instance must be set to kepler . All other instances are ignored by the Power monitoring Operator. Click Create to deploy Kepler and power monitoring dashboards. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/power_monitoring/installing-power-monitoring |
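This row documents a console-only procedure and its commands field is null. As an illustrative alternative, the following oc sketch performs the same verification from the command line. The namespace openshift-power-monitoring and the CSV name pattern are assumptions based on a typical Power monitoring Operator deployment rather than values stated in the row; adjust them to match your cluster.

    # Confirm that the operator's ClusterServiceVersion reports Succeeded (name pattern is an assumption)
    oc get csv -A | grep -i power
    # Confirm that the Kepler instance named "kepler" exists (the Kepler CRD is assumed to be cluster-scoped)
    oc get keplers
    # Check the exporter pods (the namespace is an assumption and may differ in your cluster)
    oc get pods -n openshift-power-monitoring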