Chapter 55. Installation and Booting
Chapter 55. Installation and Booting Certain RPM packages are not available on binary DVDs The virt-p2v RPM, syslinux-tftpboot, LibreOffice, and KDE language packages are not available on Red Hat Enterprise Linux binary DVDs due to the limited size of a single-layered DVD. The packages are still consumable by Red Hat Subscription Management and Red Hat Network by enabling the relevant updates after Anaconda installation. The packages can also be downloaded from https://access.redhat.com/downloads . In addition, the syslinux-tftpboot package has been moved from the Optional channel to the Base channel (Server variant) and it is now also available for the IBM POWER, little endian architecture. (BZ#1611665, BZ#1592748, BZ#1616396) The content location detection code is not working on Red Hat Virtualization Hosts Red Hat Virtualization Hosts cannot select the hardening profile from locally-installed content. To work around this problem, use the oscap-anaconda-addon package to fetch the Red Hat Enterprise Linux datastream file from a URL. 1. Upload the ssg-rhel7-ds.xml datastream file from the Red Hat Enterprise Linux 7 scap-security-guide package to your network so it can be discovered by Anaconda. To do so, either: a) Use Python to set up a web server in a directory that contains the ssg-rhel7-ds.xml datastream file and listens on port 8000, for example: python2 -m SimpleHTTPServer or python3 -m http.server; or b) Upload the ssg-rhel7-ds.xml datastream file to an HTTPS or FTP server. 2. In the Security Policy window of Anaconda's Graphical User Interface, click Change Content and enter the URL that points to the ssg-rhel7-ds.xml datastream file, for example: http://gateway:8000/ssg-rhel7-ds.xml or ftp://my-ftp-server/ssg-rhel7-ds.xml. The ssg-rhel7-ds.xml datastream file is now available and Red Hat Virtualization Hosts can select the hardening profile. (BZ#1636847) Composer cannot create live ISO system images Because of a dependency resolution issue, tools for building live ISO images are not recognized by Composer at run time. As a consequence, Composer fails to build live ISO images and users cannot create this type of system image. Composer is available as a Technology Preview. (BZ#1642156) NVDIMM commands are not added to the kickstart script file anaconda-ks.cfg after installation The installer creates a kickstart script equivalent to the configuration used for installation of the system. This script is stored in the file /root/anaconda-ks.cfg. However, when the interactive graphical user interface is used for installation, the recently added nvdimm commands used for configuring Non-Volatile Dual In-line Memory Module (NVDIMM) devices are not added to this file. To create a kickstart script for reproduction of the installation, users must note the settings for NVDIMM devices and add the missing commands to the file manually. (BZ#1620109)
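The following shell sketch illustrates the first workaround step, serving the datastream file over HTTP so Anaconda can fetch it. The content directory and the source path for ssg-rhel7-ds.xml are assumptions; verify where the scap-security-guide package installed the file on your system.
# Sketch only: serve ssg-rhel7-ds.xml on port 8000 for Anaconda to fetch.
mkdir -p ~/scap-content && cd ~/scap-content
cp /usr/share/xml/scap/ssg/content/ssg-rhel7-ds.xml .   # assumed install path; adjust if different
python2 -m SimpleHTTPServer 8000                        # or: python3 -m http.server 8000
# In Anaconda's Security Policy window, click Change Content and enter, for example:
#   http://gateway:8000/ssg-rhel7-ds.xml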
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.6_release_notes/known_issues_installation_and_booting
Chapter 9. Uninstalling a cluster on OpenStack
Chapter 9. Uninstalling a cluster on OpenStack You can remove a cluster that you deployed to Red Hat OpenStack Platform (RHOSP). 9.1. Removing a cluster that uses installer-provisioned infrastructure You can remove a cluster that uses installer-provisioned infrastructure from your cloud. Note If you deployed your cluster to the AWS C2S Secret Region, the installation program does not support destroying the cluster; you must manually remove the cluster resources. Note After uninstallation, check your cloud provider for any resources not removed properly, especially with User Provisioned Infrastructure (UPI) clusters. There might be resources that the installer did not create or that the installer is unable to access. For example, some Google Cloud resources require IAM permissions in shared VPC host projects, or there might be unused health checks that must be deleted. Prerequisites You have a copy of the installation program that you used to deploy the cluster. You have the files that the installation program generated when you created your cluster. Procedure From the directory that contains the installation program on the computer that you used to install the cluster, run the following command: $ ./openshift-install destroy cluster \ --dir <installation_directory> --log-level info 1 2 1 For <installation_directory>, specify the path to the directory that you stored the installation files in. 2 To view different details, specify warn, debug, or error instead of info. Note You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program.
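The following is a minimal sketch of the removal flow described above. The directory name ocp-install is a hypothetical example, and the metadata.json check simply confirms that the directory holds the cluster definition the installer needs.
# Sketch only: confirm the cluster definition exists, destroy the cluster, then clean up.
ls ocp-install/metadata.json                         # the installer requires this file
./openshift-install destroy cluster --dir ocp-install --log-level info
rm -rf ocp-install                                   # optional: remove the installation directory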
[ "./openshift-install destroy cluster --dir <installation_directory> --log-level info" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_on_openstack/uninstalling-cluster-openstack
1.4. AdminShell
1.4. AdminShell 1.4.1. AdminShell Features AdminShell provides the following features: Administration AdminShell can be used to connect to a JBoss Data Virtualization instance in order to perform various administrative tasks. Data Access AdminShell can be used to connect to a virtual database (VDB) and run SQL commands to query VDB data and view results. Migration AdminShell can be used to develop scripts that will move VDBs and associated components from one development environment to another. (Users can test and automate migration scripts before executing them in production deployments.) Testing The built-in JUnit Test Framework allows users to write regression tests to check system health and data integrity. The written tests validate system functionality automatically, removing the need for manual verification by QA Personnel. 1.4.2. AdminShell Scripting Use the Groovy language to write AdminShell scripts to automate tasks. AdminShell provides a custom library of Groovy functions to help administer JBoss Data Virtualization. These custom functions can be used in isolation or alongside other Groovy code. Here are some basic rules. Memorise these to become familiar with the syntax: All commands and functions are case sensitive. Groovy commands are not required to end with a semicolon; this is optional. A function will take zero or more parameters as input. Function parameters are provided within parentheses. String parameters must be provided within single or double quotes. Other Java classes and methods can be accessed and invoked from within scripts as long as required libraries are already in the class path. (JAR files added to the lib directory will be included in the class path automatically.) Note You should disconnect from JBoss Data Virtualization before exiting AdminShell by entering the disconnect() command. References For more information about Groovy scripting, refer to http://groovy.codehaus.org/ . For more information about Groovy scripting and SQL, refer to http://groovy.codehaus.org/Database+features . For more information about testing using Groovy scripting, refer to http://groovy.codehaus.org/Unit+Testing and http://junit.org/ . 1.4.3. AdminShell Help All of the custom administrative methods available in AdminShell can be listed by using the adminHelp() method (note that only the custom AdminShell methods will be shown): If you require a specific method's definition and input parameters, use adminHelp(" METHOD ") : Use the sqlHelp() method to list all SQL extension methods: If you want to obtain a specific method's definition and input parameters, use sqlHelp(" METHOD ") . 1.4.4. Install the Interactive AdminShell The AdminShell can be installed and run from any machine that has access to the running Red Hat JDV server. Procedure 1.3. Install the AdminShell Anywhere Open a command line terminal. Go to the AdminShell directory: cd / EAP_HOME /dataVirtualization/teiid-adminshell/ . Copy the zipped teiid-VERSION-adminshell-dist.zip file to your desired location. Unzip the AdminShell: unzip teiid-VERSION-adminshell-dist.zip 1.4.5. Launch the Interactive AdminShell Procedure 1.4. Launch the Interactive AdminShell Run the ./adminshell.sh command. The log output is as follows: Result The interactive AdminShell is running. At this point you can execute individual commands line by line. Note You can exit the interactive AdminShell at any time by entering the exit command. 1.4.6. AdminShell Basic Commands Table 1.1. 
AdminShell Basic Commands Command Description adminHelp(); Displays all of the custom AdminShell methods available. adminHelp(" METHOD "); Displays the definition and input parameters for the AdminShell method supplied. sqlHelp(); Displays all of the custom SQL AdminShell methods available. sqlHelp(" METHOD "); Displays the definition and input parameters for the SQL AdminShell method supplied. println " STRING "; Prints the given string to the console. sql = connect(); Gets an extended Groovy SQL connection using the default connection specified in the connection.properties file. sql.execute(" SQL "); Runs any SQL command. connectAsAdmin(); Connects as an admin user using the default connection specified in the connection.properties file. This command does not require a VDB name. Note that this is an admin connection, not a JDBC connection, so you cannot issue SQL commands using this connection. Note that if SSL is being used, you would need to adjust the connection URL and the client SSL settings as necessary (typically only for 2-way SSL). println getConnectionName(); Returns the active connection name. useConnection(" cNAME "); Switches to using the given connection. disconnect(); Disconnects the active connection. disconnectAll(); Disconnects all connections (both active and suspended). 1.4.7. Execute a Script in Non-interactive AdminShell Procedure 1.5. Execute a Script in Non-interactive AdminShell Open a Command Line Terminal Run the Script Run the ./adminshell.sh load [-D param=value ] PATH/FILENAME command (see the shell sketch later in this section). Note Optional properties passed in using -D can be accessed from within the script file by using System.getProperty() . Important Any connections established within the script should be disconnected before the script finishes. 1.4.8. Helpful Tips for the Interactive AdminShell Table 1.2. Interactive AdminShell Commands Command Description ^ ('up arrow') Displays commands executed, starting with the most recent. help Displays additional commands provided specifically for use within the interactive shell. Shell Behavior The interactive shell uses a special interpreter and displays behavior different from what would be expected from running a Groovy script: def statements do not define a variable in the context of the shell; for example, do not use def x = 1 , use x = 1 . The shell cannot parse Groovy classes that use annotations. 1.4.9. Save a Script in Interactive AdminShell Procedure 1.6. Save a Script in Interactive AdminShell Open the Interactive AdminShell Open a command line terminal. Navigate to / EAP_HOME /dataVirtualization/teiid-adminshell- VERSION / . Run the ./adminshell.sh command. Start Recording to a File Enter the record start PATH/FILENAME command. Enter Desired Commands to Record Enter a series of commands for which you want to record the input/output. Stop Recording to the File Enter the record stop command. Result All input and output between the moments you issue the record start and record stop commands are captured in PATH/FILENAME . Note Since both input and output are recorded, the file will need some editing before it can be executed as a script file. 1.4.10. Execute a Script in Interactive AdminShell Procedure 1.7. Execute a Script in Interactive AdminShell Open the Interactive AdminShell Open a command line terminal. Navigate to / EAP_HOME /dataVirtualization/teiid-adminshell- VERSION / . Run the ./adminshell.sh command. Run the Script Run the Script using the Interactive Shell load Command Run the load PATH/FILENAME command.
Run the Script using the Groovy evaluate Command Run the evaluate(" PATH/FILENAME " as File) command. Note You can also execute a script by entering it line by line after opening the interactive shell. 1.4.11. Launch the AdminShell GUI Procedure 1.8. Launch the AdminShell GUI Navigate to the adminshell Directory Open a command line terminal. Navigate to / EAP_HOME /dataVirtualization/teiid-adminshell- VERSION / . Launch the adminshell-console Script Run the ./adminshell-console.sh command. Result The AdminShell GUI is displayed. Note You can exit the AdminShell GUI at any time by clicking on File Exit . 1.4.12. Save a Script in AdminShell GUI Procedure 1.9. Save a Script in AdminShell GUI Navigate to the adminshell Directory Open a command line terminal. Navigate to / EAP_HOME /dataVirtualization/teiid-adminshell- VERSION / . Type a Script Type your script in the upper script window of the AdminShell GUI. You may find it useful to test the script as you are preparing it by executing it via Script Run . Save the Script Select File Save As . Choose a location and filename for the script and click on Save . 1.4.13. Execute a Script in AdminShell GUI Procedure 1.10. Execute a Script in AdminShell GUI Open the AdminShell GUI Open a command line terminal. Navigate to / EAP_HOME /dataVirtualization/teiid-adminshell- VERSION / . Run the ./adminshell-console.sh command. Prepare the Script Type a New Script Type your script in the upper script window of the AdminShell GUI. Load a Saved Script Select File Open . Locate the desired script file and click on Open . The script will appear in the upper script window of the AdminShell GUI. Run the Script Select Script Run . 1.4.14. AdminShell Connection Properties The EAP_HOME /dataVirtualization/teiid-adminshell- VERSION /connection.properties file provides the default connection properties used by AdminShell to connect to a JBoss Data Virtualization instance: A call to connect() or connectAsAdmin() without any input parameters will connect to JBoss Data Virtualization using the settings defined in this properties file. connect() uses those properties prefixed with jdbc and connectAsAdmin() uses those properties prefixed with admin . Alternatively, you can include parameters in the connect() or connectAsAdmin() methods to connect using properties of your choice: Warning Do not leave passwords stored in clear text. The example above is for demonstration purposes only. If you want to store your password in the file, take the necessary measures to secure it. Otherwise, do not use this feature at all and, instead, input your password interactively (or some other secure way). 1.4.15. Multiple Connections in AdminShell Using AdminShell, users can manage more than one connection to one or more JBoss Data Virtualization instances. For example, a user might want to have one connection to a development server and another to an integration server, simultaneously. Every time a new connection is made, it is assigned a unique name and it becomes the active connection. If there is already a connection, it will be suspended (it will not be closed). The getConnectionName() method returns the name of the active connection. The connection name can be assigned to a variable cName by the following command: The name of the current connection is required in order to switch from one to another.
To change the active connection, use the useConnection() command, supplying the name (or a variable with the name assigned) of the connection you wish to use: Example The following example demonstrates how to use two connections and switch between them: 1.4.16. Example Scripts Example 1.1. Deploying a VDB Example 1.2. Create a Datasource (Oracle) Example 1.3. Execute SQL Query against Teiid 1.4.17. AdminShell FAQ Q: Why won't the adminhelp command work in the GUI tool? Q: Are there any pre-built scripts available? Q: What is the difference between connectAsAdmin() and connect()? Q: What does getAdmin() do? Why do I need it? Q: Is IDE support available for writing the scripts? Q: Can I use AdminShell methods in other environments? Q: Is debugging support available? Q: Why won't the adminhelp command work in the GUI tool? A: The AdminShell GUI environment does not understand shell commands such as load , help , and adminhelp , since they are not directly supported by Groovy. In the GUI you should use the equivalent functional form; for example, instead of using adminhelp use adminHelp() . Q: Are there any pre-built scripts available? A: Not currently. Q: What is the difference between connectAsAdmin() and connect() ? A: The connectAsAdmin() method creates a contextual connection to the AdminAPI of JBoss Data Virtualization. The connect() method returns an extension of the Groovy SQL object to be used for SQL calls to JBoss Data Virtualization. Q: What does getAdmin() do? Why do I need it? A: The getAdmin() method returns the current contextual connection object that was created when you connected with connectAsAdmin() . This object implements the interface org.teiid.adminapi.Admin . The AdminShell commands provided are wrappers around this API. Advanced users can use this API directly if the provided wrapper commands do not meet their needs. Q: Is IDE support available for writing the scripts? A: The AdminShell GUI tool is a light-weight IDE. Full IDE support is available for Groovy, but requires manual manipulation of the class path and script imports. Q: Can I use AdminShell methods in other environments? A: The AdminShell methods (including the named contextual connections, AdminAPI wrapper, and help system) have no direct dependencies on Groovy and can be used in other scripting languages. To use the AdminShell methods in another language, simply import the static methods and Admin classes into your script. You will also need to ensure that the EAP_HOME /dataVirtualization/teiid-adminshell- VERSION /lib/teiid- VERSION .jar and EAP_HOME /dataVirtualization/teiid-adminshell- VERSION /lib/teiid-adminshell- VERSION .jar are in your class path. The following snippet shows import statements that would work in Java, BeanShell, Groovy and others: import static org.teiid.adminshell.AdminShell.*; import org.teiid.adminapi.*; Q: Is debugging support available? A: The Interactive AdminShell and GUI have built-in support for inspection of the current state; however, line based debugging is beyond the scope of this document.
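The following shell sketch shows the non-interactive invocation described in Section 1.4.7. The script name migrate.groovy and the server property are hypothetical examples, not names shipped with the product.
# Sketch only: run a saved script without opening the interactive shell.
cd "$EAP_HOME/dataVirtualization/teiid-adminshell-VERSION/"
./adminshell.sh load -Dserver=devhost migrate.groovy
# Inside migrate.groovy, read the property with: System.getProperty("server")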
[ "connectAsAdmin(\"localhost\", \"9999\", \"user\", \"password\", \"conn1\")", "import my.package.*; myObject = new MyClass(); myObject.doSomething();", "adminHelp()", "adminHelp(\"deploy\") /* *Deploy a VDB from file */ void deploy( String /* file name */) throws AdminException throws FileNotFoundException", "sqlHelp()", "====================================================================== Teiid AdminShell Bootstrap Environment TEIID_HOME = / EAP_HOME /dataVirtualization/teiid-adminshell- VERSION CLASSPATH = / EAP_HOME /dataVirtualization/teiid-adminshell- VERSION /lib/patches/*:/ EAP_HOME /dataVirtualization/teiid-adminshell- VERSION /lib/teiid-adminshell- VERSION .jar:/ EAP_HOME /dataVirtualization/teiid-adminshell- VERSION /lib/* JAVA = /usr/lib/jvm/java-1.7.0-openjdk.x86_64/bin/java ====================================================================== ===> [import static org.teiid.adminshell.AdminShell.*; import static org.teiid.adminshell.GroovySqlExtensions.*; import org.teiid.adminapi.*;] Groovy Shell (1.7.2, JVM: 1.7.0_25) Type 'help' or '\\h' for help. ------------------------------------------------------------------------------- groovy:000>", "value = System.getProperty(\" param \")", "jdbc.user=user jdbc.password=user jdbc.url=jdbc:teiid:admin@mm://localhost:31000; admin.host=localhost admin.port=9999 admin.user=admin admin.password=admin", "connect(\" URL \", \" USER \", \" PASSWORD \")", "cName = getConnectionName();", "useConnection(cName);", "// Create a connection connectAsAdmin(); //Assign the connection name to conn1 conn1 = getConnectionName(); deploy(\"file.vdb\") // Create a second connection (which is now the active connection) connectAsAdmin(); //Assign the new connection name to conn2 conn2 = getConnectionName(); deploy(\"file.vdb\") // Switch the active connection back to conn1 useConnection(conn1); // Close the active connection (conn1) disconnect();", "connectAsAdmin(); deploy(\"/path/to/<name>.vdb\"); // check to validate the deployment VDB vdb = getVDB(\"<name>\", 1); if (vdb != null){ print (vdb.getName()+\".\"+vdb.getVersion()+\" is deployed\"); } else { print (\"<name>.vdb failed to deploy\"); }", "connectAsAdmin(); // first deploy the JDBC jar file for Oracle deploy(\"ojdbc6.jar\"); props = new Properties(); props.setProperty(\"connection-url\",\"jdbc:oracle:thin:@<host>:1521:<sid>\"); props.setProperty(\"user-name\", \"scott\"); props.setProperty(\"password\", \"tiger\"); createDataSource(\"oracleDS\", \"ojdbc6.jar\", props);", "sql = connect(\"jdbc:teiid:<vdb>@mm://<host>:31000\", \"user\", \"user\"); // select sql.eachRow(\"select * from sys.tables\") { println \"${it}\" } // update, insert, delete sql.execute(<sql command>);", "import static org.teiid.adminshell.AdminShell.*; import org.teiid.adminapi.*;" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/administration_and_configuration_guide/sect-AdminShell
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Make sure you are logged in to the Jira website. Provide feedback by clicking on this link . Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. If you want to be notified about future updates, please make sure you are assigned as Reporter . Click Create at the bottom of the dialogue.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/8/html/automating_sap_hana_scale-out_system_replication_using_the_rhel_ha_add-on/feedback_automating-sap-hana-scale-out
Chapter 1. Preparing to install on Nutanix
Chapter 1. Preparing to install on Nutanix Before you install an OpenShift Container Platform cluster, be sure that your Nutanix environment meets the following requirements. 1.1. Nutanix version requirements You must install the OpenShift Container Platform cluster to a Nutanix environment that meets the following requirements. Table 1.1. Version requirements for Nutanix virtual environments Component Required version Nutanix AOS 5.20.4+ or 6.5.1+ Prism Central 2022.4+ 1.2. Environment requirements Before you install an OpenShift Container Platform cluster, review the following Nutanix AOS environment requirements. 1.2.1. Required account privileges The installation program requires access to a Nutanix account with the necessary permissions to deploy the cluster and to maintain the daily operation of it. The following options are available to you: You can use a local Prism Central user account with administrative privileges. Using a local account is the quickest way to grant access to an account with the required permissions. If your organization's security policies require that you use a more restrictive set of permissions, use the permissions that are listed in the following table to create a custom Cloud Native role in Prism Central. You can then assign the role to a user account that is a member of a Prism Central authentication directory. When assigning entities to the role, ensure that the user can access only the Prism Element and subnet that are required to deploy the virtual machines. For more information, see the Nutanix documentation about creating a Custom Cloud Native role and assigning a role . Example 1.1. Required permissions for creating a Custom Cloud Native role Nutanix Object Required permissions in Nutanix API Description Categories Create_Category_Mapping Create_Or_Update_Name_Category Create_Or_Update_Value_Category Delete_Category_Mapping Delete_Name_Category Delete_Value_Category View_Category_Mapping View_Name_Category View_Value_Category Create, read, and delete categories that are assigned to the OpenShift Container Platform machines. Images Create_Image Delete_Image View_Image Create, read, and delete the operating system images used for the OpenShift Container Platform machines. Virtual Machines Create_Virtual_Machine Delete_Virtual_Machine View_Virtual_Machine Create, read, and delete the OpenShift Container Platform machines. Clusters View_Cluster View the Prism Element clusters that host the OpenShift Container Platform machines. Subnets View_Subnet View the subnets that host the OpenShift Container Platform machines. 1.2.2. Cluster limits Available resources vary between clusters. The number of possible clusters within a Nutanix environment is limited primarily by available storage space and any limitations associated with the resources that the cluster creates, and resources that you require to deploy the cluster, such as IP addresses and networks. 1.2.3. Cluster resources A minimum of 800 GB of storage is required to use a standard cluster. When you deploy an OpenShift Container Platform cluster that uses installer-provisioned infrastructure, the installation program must be able to create several resources in your Nutanix instance. Although these resources use 856 GB of storage, the bootstrap node is destroyed as part of the installation process. A standard OpenShift Container Platform installation creates the following resources: 1 label Virtual machines: 1 disk image 1 temporary bootstrap node 3 control plane nodes 3 compute machines 1.2.4.
Networking requirements You must use AHV IP Address Management (IPAM) for the network and ensure that it is configured to provide persistent IP addresses to the cluster machines. Additionally, create the following networking resources before you install the OpenShift Container Platform cluster: IP addresses DNS records Note It is recommended that each OpenShift Container Platform node in the cluster have access to a Network Time Protocol (NTP) server that is discoverable via DHCP. Installation is possible without an NTP server. However, an NTP server prevents errors typically associated with asynchronous server clocks. 1.2.4.1. Required IP Addresses An installer-provisioned installation requires two static virtual IP (VIP) addresses: A VIP address for the API is required. This address is used to access the cluster API. A VIP address for ingress is required. This address is used for cluster ingress traffic. You specify these IP addresses when you install the OpenShift Container Platform cluster. 1.2.4.2. DNS records You must create DNS records for two static IP addresses in the appropriate DNS server for the Nutanix instance that hosts your OpenShift Container Platform cluster. In each record, <cluster_name> is the cluster name and <base_domain> is the cluster base domain that you specify when you install the cluster. If you use your own DNS or DHCP server, you must also create records for each node, including the bootstrap, control plane, and compute nodes. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 1.2. Required DNS records Component Record Description API VIP api.<cluster_name>.<base_domain>. This DNS A/AAAA or CNAME record must point to the load balancer for the control plane machines. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. Ingress VIP *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that points to the load balancer that targets the machines that run the Ingress router pods, which are the worker nodes by default. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. 1.3. Configuring the Cloud Credential Operator utility The Cloud Credential Operator (CCO) manages cloud provider credentials as Kubernetes custom resource definitions (CRDs). To install a cluster on Nutanix, you must set the CCO to manual mode as part of the installation process. To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). Procedure Obtain the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. 
Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: $ oc image extract $CCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret Change the permissions to make ccoctl executable by running the following command: $ chmod 775 ccoctl Verification To verify that ccoctl is ready to use, display the help file by running the following command: $ ccoctl --help Output of ccoctl --help OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. Additional resources Preparing to update a cluster with manually maintained credentials
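Before running the installation program, it can also help to confirm that the DNS records described in Section 1.2.4.2 resolve to the intended VIPs. The following is a sketch using dig, with a hypothetical cluster name ocp and base domain example.com:
# Sketch only: verify the required DNS records before installing.
dig +short api.ocp.example.com           # should return the API VIP
dig +short test.apps.ocp.example.com     # any name under *.apps should return the Ingress VIP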
[ "RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')", "CCO_IMAGE=$(oc adm release info --image-for='cloud-credential-operator' $RELEASE_IMAGE -a ~/.pull-secret)", "oc image extract $CCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret", "chmod 775 ccoctl", "ccoctl --help", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command." ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installing_on_nutanix/preparing-to-install-on-nutanix
Chapter 21. Troubleshooting IdM replica installation
Chapter 21. Troubleshooting IdM replica installation The following sections describe the process for gathering information about a failing IdM replica installation, and how to resolve some common installation issues. 21.1. IdM replica installation error log files When you install an Identity Management (IdM) replica, debugging information is appended to the following log files on the replica : /var/log/ipareplica-install.log /var/log/ipareplica-conncheck.log /var/log/ipaclient-install.log /var/log/httpd/error_log /var/log/dirsrv/slapd- INSTANCE-NAME /access /var/log/dirsrv/slapd- INSTANCE-NAME /errors /var/log/ipaserver-install.log The replica installation process also appends debugging information to the following log files on the IdM server the replica is contacting: /var/log/httpd/error_log /var/log/dirsrv/slapd- INSTANCE-NAME /access /var/log/dirsrv/slapd- INSTANCE-NAME /errors The last line of each log file reports success or failure, and ERROR and DEBUG entries provide additional context. Additional resources Reviewing IdM replica installation errors 21.2. Reviewing IdM replica installation errors To troubleshoot a failing IdM replica installation, review the errors at the end of the installation error log files on the new replica and the server, and use this information to resolve any corresponding issues. Prerequisites You must have root privileges to display the contents of IdM log files. Procedure Use the tail command to display the latest errors from the primary log file /var/log/ipareplica-install.log . The following example displays the last 10 lines. To review the log file interactively, open the end of the log file using the less utility and use the ^ and v arrow keys to navigate. Optional: While /var/log/ipareplica-install.log is the primary log file for a replica installation, you can gather additional troubleshooting information by repeating this review process with additional files on the replica and the server. On the replica: On the server: Additional resources IdM replica installation error log files If you are unable to resolve a failing replica installation, and you have a Red Hat Technical Support subscription, open a Technical Support case at the Red Hat Customer Portal and provide an sosreport of the replica and an sosreport of the server. The sosreport utility collects configuration details, logs and system information from a RHEL system. For more information about the sosreport utility, see the Red Hat Knowledgebase solution What is an sosreport and how to create one in Red Hat Enterprise Linux? . 21.3. IdM CA installation error log files Installing the Certificate Authority (CA) service on an Identity Management (IdM) replica appends debugging information to several locations on the replica and the IdM server the replica communicates with. Table 21.1. On the replica (in order of recommended priority): Location Description /var/log/pki/pki-ca-spawn. USDTIME_OF_INSTALLATION .log High-level issues and Python traces for the pkispawn installation process journalctl -u pki-tomcatd@pki-tomcat output Errors from the pki-tomcatd@pki-tomcat service /var/log/pki/pki-tomcat/ca/debug. USDDATE .log Large JAVA stacktraces of activity in the core of the Public Key Infrastructure (PKI) product /var/log/pki/pki-tomcat/ca/signedAudit/ca_audit Audit log of the PKI product /var/log/pki/pki-tomcat/ca/system /var/log/pki/pki-tomcat/ca/transactions /var/log/pki/pki-tomcat/catalina. 
USDDATE .log Low-level debug data of certificate operations for service principals, hosts, and other entities that use certificates On the server contacted by the replica: /var/log/httpd/error_log log file Installing the CA service on an existing IdM replica also writes debugging information to the following log file: /var/log/ipareplica-ca-install.log log file Note If a full IdM replica installation fails while installing the optional CA component, no details about the CA are logged; a message is logged in the /var/log/ipareplica-install.log file indicating that the overall installation process failed. Review the log files listed above for details specific to the CA installation failure. The only exception to this behavior is when you are installing the CA service and the root CA is an external CA. If there is an issue with the certificate from the external CA, errors are logged in /var/log/ipareplica-install.log . Additional resources Reviewing IdM CA installation errors 21.4. Reviewing IdM CA installation errors To troubleshoot a failing IdM CA installation, review the errors at the end of the CA installation error log files and use this information to resolve any corresponding issues. Prerequisites You must have root privileges to display the contents of IdM log files. Procedure To review a log file interactively, open the end of the log file using the less utility and use the ^ and v arrow keys to navigate, while searching for ScriptError entries. The following example opens /var/log/pki/pki-ca-spawn. USDTIME_OF_INSTALLATION .log . Gather additional troubleshooting information by repeating this review process with all the CA installation error log files. Additional resources IdM CA installation error log files If you are unable to resolve a failing IdM server installation, and you have a Red Hat Technical Support subscription, open a Technical Support case at the Red Hat Customer Portal and provide an sosreport of the server. The sosreport utility collects configuration details, logs and system information from a RHEL system. For more information about the sosreport utility, see the Red Hat Knowledgebase solution What is an sosreport and how to create one in Red Hat Enterprise Linux? . 21.5. Removing a partial IdM replica installation If an IdM replica installation fails, some configuration files might be left behind. Additional attempts to install the IdM replica can fail and the installation script reports that IPA is already configured: Example system with existing partial IdM configuration To resolve this issue, uninstall IdM software from the replica, remove the replica from the IdM topology, and retry the installation process. Prerequisites You must have root privileges. Procedure Uninstall the IdM server software on the host you are trying to configure as an IdM replica. On all other servers in the topology, use the ipa server-del command to delete any references to the replica that did not install properly. Attempt installing the replica. If you continue to experience difficulty installing an IdM replica because of repeated failed installations, reinstall the operating system. One of the requirements for installing an IdM replica is a clean system without any customization. Failed installations may have compromised the integrity of the host by unexpectedly modifying system files. Additional resources For additional details on uninstalling an IdM replica, see Uninstalling an IdM replica . 
If installation attempts fail after repeated uninstallation attempts, and you have a Red Hat Technical Support subscription, open a Technical Support case at the Red Hat Customer Portal and provide an sosreport of the replica and an sosreport of the server. The sosreport utility collects configuration details, logs and system information from a RHEL system. For more information about the sosreport utility, see the Red Hat Knowledgebase solution What is an sosreport and how to create one in Red Hat Enterprise Linux? . 21.6. Resolving invalid credential errors If an IdM replica installation fails with an Invalid credentials error, the system clocks on the hosts might be out of sync with each other: If you use the --no-ntp or -N options to attempt the replica installation while clocks are out of sync, the installation fails because services are unable to authenticate with Kerberos. To resolve this issue, synchronize the clocks on both hosts and retry the installation process. Prerequisites You must have root privileges to change system time. Procedure Synchronize the system clocks manually or with chronyd . Synchronizing manually Display the system time on the server and set the replica's time to match. Synchronizing with chronyd : See Using the Chrony suite to configure NTP to configure and set system time with chrony tools. Attempt the IdM replica installation again. Additional resources If you are unable to resolve a failing replica installation, and you have a Red Hat Technical Support subscription, open a Technical Support case at the Red Hat Customer Portal and provide an sosreport of the replica and an sosreport of the server. The sosreport utility collects configuration details, logs and system information from a RHEL system. For more information about the sosreport utility, see the Red Hat Knowledgebase solution What is an sosreport and how to create one in Red Hat Enterprise Linux? . 21.7. Additional resources Troubleshooting the first IdM server installation Troubleshooting IdM client installation Backing up and restoring IdM
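For the clock synchronization described in section 21.6, the following is a minimal sketch using chronyd; it assumes the chrony package is installed and an NTP source is already configured in /etc/chrony.conf.
# Sketch only: step the replica clock into sync before retrying the installation.
sudo systemctl enable --now chronyd
sudo chronyc makestep        # correct the clock immediately instead of slewing gradually
sudo chronyc tracking        # confirm the offset from the time source is small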
[ "[user@replica ~]USD sudo tail -n 10 /var/log/ipareplica-install.log [sudo] password for user: func(installer) File \"/usr/lib/python3.6/site-packages/ipaserver/install/server/replicainstall.py\", line 424, in decorated func(installer) File \"/usr/lib/python3.6/site-packages/ipaserver/install/server/replicainstall.py\", line 785, in promote_check ensure_enrolled(installer) File \"/usr/lib/python3.6/site-packages/ipaserver/install/server/replicainstall.py\", line 740, in ensure_enrolled raise ScriptError(\"Configuration of client side components failed!\") 2020-05-28T18:24:51Z DEBUG The ipa-replica-install command failed, exception: ScriptError: Configuration of client side components failed! 2020-05-28T18:24:51Z ERROR Configuration of client side components failed! 2020-05-28T18:24:51Z ERROR The ipa-replica-install command failed. See /var/log/ipareplica-install.log for more information", "[user@replica ~]USD sudo less -N +G /var/log/ipareplica-install.log", "[user@replica ~]USD sudo less -N +G /var/log/ipareplica-conncheck.log [user@replica ~]USD sudo less -N +G /var/log/ipaclient-install.log [user@replica ~]USD sudo less -N +G /var/log/httpd/error_log [user@replica ~]USD sudo less -N +G /var/log/dirsrv/slapd- INSTANCE-NAME /access [user@replica ~]USD sudo less -N +G /var/log/dirsrv/slapd- INSTANCE-NAME /errors [user@replica ~]USD sudo less -N +G /var/log/ipaserver-install.log", "[user@server ~]USD sudo less -N +G /var/log/httpd/error_log [user@server ~]USD sudo less -N +G /var/log/dirsrv/slapd- INSTANCE-NAME /access [user@server ~]USD sudo less -N +G /var/log/dirsrv/slapd- INSTANCE-NAME /errors", "[user@server ~]USD sudo less -N +G /var/log/pki/pki-ca-spawn. 20200527185902 .log", "ipa-replica-install Your system may be partly configured. Run /usr/sbin/ipa-server-install --uninstall to clean up. IPA server is already configured on this system. If you want to reinstall the IPA server, please uninstall it first using 'ipa-server-install --uninstall'. The ipa-replica-install command failed. See /var/log/ipareplica-install.log for more information", "ipa-server-install --uninstall", "ipa server-del replica.idm.example.com", "[27/40]: setting up initial replication Starting replication, please wait until this has completed. Update in progress, 15 seconds elapsed [ldap://server.example.com:389] reports: Update failed! Status: [49 - LDAP error: Invalid credentials] [error] RuntimeError: Failed to start replication Your system may be partly configured. Run /usr/sbin/ipa-server-install --uninstall to clean up. ipa.ipapython.install.cli.install_tool(CompatServerReplicaInstall): ERROR Failed to start replication ipa.ipapython.install.cli.install_tool(CompatServerReplicaInstall): ERROR The ipa-replica-install command failed. See /var/log/ipareplica-install.log for more information", "[user@server ~]USD date Thu May 28 21:03:57 EDT 2020 [user@replica ~]USD sudo timedatectl set-time '2020-05-28 21:04:00'" ]
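In addition to paging through the logs with less, the failure lines described in sections 21.1 through 21.4 can be surfaced quickly with grep. This is a sketch only; the exact set of log files present varies by installation.
# Sketch only: list ERROR and ScriptError entries across the replica installation logs.
sudo grep -nE "ERROR|ScriptError" /var/log/ipareplica-install.log /var/log/ipaclient-install.log
sudo grep -n "ScriptError" /var/log/pki/pki-ca-spawn.*.log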
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/installing_identity_management/troubleshooting-idm-replica-installation_installing-identity-management
Appendix C. The X Window System
Appendix C. The X Window System While the heart of Red Hat Enterprise Linux is the kernel, for many users, the face of the operating system is the graphical environment provided by the X Window System , also called X . Other windowing environments have existed in the UNIX world, including some that predate the release of the X Window System in June 1984. Nonetheless, X has been the default graphical environment for most UNIX-like operating systems, including Red Hat Enterprise Linux, for many years. The graphical environment for Red Hat Enterprise Linux is supplied by the X.Org Foundation , an open source organization created to manage development and strategy for the X Window System and related technologies. X.Org is a large-scale, rapidly developing project with hundreds of developers around the world. It features a wide degree of support for a variety of hardware devices and architectures, and runs on myriad operating systems and platforms. The X Window System uses a client-server architecture. Its main purpose is to provide a network-transparent window system, which runs on a wide range of computing and graphics machines. The X server (the Xorg binary) listens for connections from X client applications via a network or local loopback interface. The server communicates with the hardware, such as the video card, monitor, keyboard, and mouse. X client applications exist in the user space, creating a graphical user interface ( GUI ) for the user and passing user requests to the X server. C.1. The X Server Red Hat Enterprise Linux 6 uses a recent version of the X server, which includes several video driver, EXA, and platform support enhancements over the previous release, among others. In addition, this release includes several automatic configuration features for the X server, as well as the generic input driver, evdev , that supports all input devices that the kernel knows about, including most mice and keyboards. X11R7.1 was the first release to take specific advantage of making the X Window System modular. This release split X into logically distinct modules, which make it easier for open source developers to contribute code to the system. In the current release, all libraries, headers, and binaries live under the /usr/ directory. The /etc/X11/ directory contains configuration files for X client and server applications. This includes configuration files for the X server itself, the X display managers, and many other base components. The configuration file for the newer Fontconfig-based font architecture is still /etc/fonts/fonts.conf . For more information on configuring and adding fonts, see Section C.4, "Fonts" . Because the X server performs advanced tasks on a wide array of hardware, it requires detailed information about the hardware it works on. The X server is able to automatically detect most of the hardware that it runs on and configure itself accordingly. Alternatively, hardware can be manually specified in configuration files. The Red Hat Enterprise Linux system installer, Anaconda, installs and configures X automatically, unless the X packages are not selected for installation. If there are any changes to the monitor, video card or other devices managed by the X server, most of the time, X detects and reconfigures these changes automatically. In rare cases, X must be reconfigured manually.
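For the rare cases where X must be reconfigured manually, one common approach (a sketch, not the only method) is to let the X server generate a skeleton configuration file and then install it:
# Sketch only: run as root from a text console, with the X server stopped.
Xorg :1 -configure                          # writes xorg.conf.new to root's home directory
cp /root/xorg.conf.new /etc/X11/xorg.conf   # review and edit the file before restarting X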
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/ch-the_x_window_system
8.158. pykickstart
8.158. pykickstart 8.158.1. RHBA-2013:1629 - pykickstart bug fix and enhancement update An updated pykickstart package that fixes several bugs and adds one enhancement is now available for Red Hat Enterprise Linux 6. The pykickstart package contains a python library for manipulating kickstart files. Bug Fixes BZ# 886010 When combining the autopart command with other kickstart partitioning commands in one kickstart file, installation proceeded unexpectedly with errors. The underlying source code has been modified so that the user is notified and installation is aborted with a parse error if the kickstart file contains invalid partitioning. BZ# 924579 When a kickstart file specified two logical volumes with the same name, installation failed with this inappropriate error message: AttributeError: 'LogVolData' object has no attribute 'device' With this update, duplicate names are detected correctly and the appropriate error message is displayed. BZ# 966183 When running a kickstart file that contained the "network" command with the "--ipv6" option specified, installation could terminate unexpectedly with the following message: TypeError: not all arguments converted during string formatting This update applies a patch to fix this bug and kickstart files work as expected in the described scenario. Enhancement BZ# 978252 This enhancement adds the ability to specify the "--ipv6gateway" option for the "network" command in kickstart files. As a result, both IPv4 and IPv6 default gateways can be specified for a network device configuration using the "network" command. Users of pykickstart are advised to upgrade to this updated package, which fixes these bugs and adds this enhancement.
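To illustrate the BZ#978252 enhancement, the following kickstart fragment sets both an IPv4 and an IPv6 default gateway on one device; the device name and all addresses are hypothetical and must be replaced with values for your network.
# Illustrative kickstart fragment only.
network --device=eth0 --bootproto=static --ip=192.0.2.10 --netmask=255.255.255.0 --gateway=192.0.2.1 --ipv6=2001:db8::10/64 --ipv6gateway=2001:db8::1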
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/pykickstart
Preface
Preface The Red Hat Enterprise Linux 6.7 Technical Notes list and document the changes made to the Red Hat Enterprise Linux 6 operating system and its accompanying applications between minor release Red Hat Enterprise Linux 6.6 and minor release Red Hat Enterprise Linux 6.7. For system administrators and others planning Red Hat Enterprise Linux 6.7 upgrades and deployments, the Technical Notes provide a single, organized record of the bugs fixed in, features added to, and Technology Previews included with this new release of Red Hat Enterprise Linux. For auditors and compliance officers, the Red Hat Enterprise Linux 6.7 Technical Notes provide a single, organized source for change tracking and compliance testing. For every user, the Red Hat Enterprise Linux 6.7 Technical Notes provide details of what has changed in this new release. Note The Package Manifest is available as a separate document.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/pref-test
10.5. Configure 802.1Q VLAN Tagging Using a GUI
10.5. Configure 802.1Q VLAN Tagging Using a GUI 10.5.1. Establishing a VLAN Connection You can use nm-connection-editor to create a VLAN using an existing interface as the parent interface. Note that VLAN devices are only created automatically if the parent interface is set to connect automatically. Procedure 10.1. Adding a New VLAN Connection Using nm-connection-editor Enter nm-connection-editor in a terminal: Click the Add button. The Choose a Connection Type window appears. Select VLAN and click Create . The Editing VLAN connection 1 window appears. On the VLAN tab, select the parent interface from the drop-down list you want to use for the VLAN connection. Enter the VLAN ID Enter a VLAN interface name. This is the name of the VLAN interface that will be created. For example, enp1s0.1 or vlan2 . (Normally this is either the parent interface name plus " . " and the VLAN ID, or " vlan " plus the VLAN ID.) Review and confirm the settings and then click the Save button. To edit the VLAN-specific settings see Section 10.5.1.1, "Configuring the VLAN Tab" . Figure 10.3. Adding a New VLAN Connection Using nm-connection-editor Procedure 10.2. Editing an Existing VLAN Connection Follow these steps to edit an existing VLAN connection. Enter nm-connection-editor in a terminal: Select the connection you want to edit and click the Edit button. Select the General tab. Configure the connection name, auto-connect behavior, and availability settings. These settings in the Editing dialog are common to all connection types: Connection name - Enter a descriptive name for your network connection. This name will be used to list this connection in the VLAN section of the Network window. Automatically connect to this network when it is available - Select this box if you want NetworkManager to auto-connect to this connection when it is available. Refer to the section called "Editing an Existing Connection with control-center" for more information. Available to all users - Select this box to create a connection available to all users on the system. Changing this setting may require root privileges. Refer to Section 3.4.5, "Managing System-wide and Private Connection Profiles with a GUI" for details. To edit the VLAN-specific settings see Section 10.5.1.1, "Configuring the VLAN Tab" . Saving Your New (or Modified) Connection and Making Further Configurations Once you have finished editing your VLAN connection, click the Save button to save your customized configuration. Then, to configure: IPv4 settings for the connection, click the IPv4 Settings tab and proceed to Section 5.4, "Configuring IPv4 Settings" . or IPv6 settings for the connection, click the IPv6 Settings tab and proceed to Section 5.5, "Configuring IPv6 Settings" . 10.5.1.1. Configuring the VLAN Tab If you have already added a new VLAN connection (see Procedure 10.1, "Adding a New VLAN Connection Using nm-connection-editor" for instructions), you can edit the VLAN tab to set the parent interface and the VLAN ID. Parent Interface A previously configured interface can be selected in the drop-down list. VLAN ID The identification number to be used to tag the VLAN network traffic. VLAN interface name The name of the VLAN interface that will be created. For example, enp1s0.1 or vlan2 . Cloned MAC address Optionally sets an alternate MAC address to use for identifying the VLAN interface. This can be used to change the source MAC address for packets sent on this VLAN. 
MTU Optionally sets a Maximum Transmission Unit (MTU) size to be used for packets to be sent over the VLAN connection.
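Although this section covers the GUI method, the same VLAN connection can be created from the command line with nmcli. The following is a sketch, assuming a hypothetical parent interface enp1s0 and VLAN ID 100:
# Sketch only: create a VLAN connection equivalent to the GUI procedure above.
nmcli con add type vlan con-name vlan100 ifname enp1s0.100 dev enp1s0 id 100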
[ "~]USD nm-connection-editor", "~]USD nm-connection-editor" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/sec-configure_802_1q_vlan_tagging_using_a_gui
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/migration_toolkit_for_runtimes/1.2/html/cli_guide/making-open-source-more-inclusive
Chapter 3. Ceph Monitor configuration
Chapter 3. Ceph Monitor configuration As a storage administrator, you can use the default configuration values for the Ceph Monitor or customize them according to the intended workload. Prerequisites Installation of the Red Hat Ceph Storage software. 3.1. Ceph Monitor configuration Understanding how to configure a Ceph Monitor is an important part of building a reliable Red Hat Ceph Storage cluster. All storage clusters have at least one monitor. A Ceph Monitor configuration usually remains fairly consistent, but you can add, remove or replace a Ceph Monitor in a storage cluster. Ceph monitors maintain a "master copy" of the cluster map. That means a Ceph client can determine the location of all Ceph monitors and Ceph OSDs just by connecting to one Ceph monitor and retrieving a current cluster map. Before Ceph clients can read from or write to Ceph OSDs, they must connect to a Ceph Monitor first. With a current copy of the cluster map and the CRUSH algorithm, a Ceph client can compute the location for any object. The ability to compute object locations allows a Ceph client to talk directly to Ceph OSDs, which is a very important aspect of Ceph's high scalability and performance. The primary role of the Ceph Monitor is to maintain a master copy of the cluster map. Ceph Monitors also provide authentication and logging services. Ceph Monitors write all changes in the monitor services to a single Paxos instance, and Paxos writes the changes to a key-value store for strong consistency. Ceph Monitors can query the most recent version of the cluster map during synchronization operations. Ceph Monitors leverage the key-value store's snapshots and iterators, using the rocksdb database, to perform store-wide synchronization. 3.2. Viewing the Ceph Monitor configuration database You can view Ceph Monitor configuration in the configuration database. Note Earlier releases of Red Hat Ceph Storage centralized the Ceph Monitor configuration in /etc/ceph/ceph.conf . This configuration file has been deprecated as of Red Hat Ceph Storage 5. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to a Ceph Monitor host. Procedure Log into the cephadm shell. Use the ceph config command to view the configuration database: Example Additional Resources For more information about the options available for the ceph config command, use ceph config -h . 3.3. Ceph cluster maps The cluster map is a composite of maps, including the monitor map, the OSD map, and the placement group map. The cluster map tracks a number of important events: Which processes are in the Red Hat Ceph Storage cluster. Which processes that are in the Red Hat Ceph Storage cluster are up and running or down . Whether the placement groups are active or inactive , and clean or in some other state. Other details that reflect the current state of the cluster, such as the total amount of storage space or the amount of storage used. When there is a significant change in the state of the cluster, for example, a Ceph OSD goes down, a placement group falls into a degraded state, and so on, the cluster map gets updated to reflect the current state of the cluster. Additionally, the Ceph monitor also maintains a history of the prior states of the cluster. The monitor map, OSD map, and placement group map each maintain a history of their map versions. Each version is called an epoch . When operating the Red Hat Ceph Storage cluster, keeping track of these states is an important part of the cluster administration. 3.4.
3.4. Ceph Monitor quorum A cluster will run sufficiently with a single monitor. However, a single monitor is a single point of failure. To ensure high availability in a production Ceph storage cluster, run Ceph with multiple monitors so that the failure of a single monitor will not cause a failure of the entire storage cluster. When a Ceph storage cluster runs multiple Ceph Monitors for high availability, Ceph Monitors use the Paxos algorithm to establish consensus about the master cluster map. A consensus requires a majority of monitors running to establish a quorum for consensus about the cluster map. For example, 1 out of 1; 2 out of 3; 3 out of 5; 4 out of 6; and so on. Red Hat recommends running a production Red Hat Ceph Storage cluster with at least three Ceph Monitors to ensure high availability. When you run multiple monitors, you can specify the initial monitors that must be members of the storage cluster to establish a quorum. This may reduce the time it takes for the storage cluster to come online. Note A majority of the monitors in the storage cluster must be able to reach each other in order to establish a quorum. You can decrease the initial number of monitors required to establish a quorum with the mon_initial_members option. 3.5. Ceph Monitor consistency When you add monitor settings to the Ceph configuration file, you need to be aware of some of the architectural aspects of Ceph Monitors. Ceph imposes strict consistency requirements for a Ceph Monitor when discovering another Ceph Monitor within the cluster. Whereas Ceph clients and other Ceph daemons use the Ceph configuration file to discover monitors, monitors discover each other using the monitor map ( monmap ), not the Ceph configuration file. A Ceph Monitor always refers to the local copy of the monitor map when discovering other Ceph Monitors in the Red Hat Ceph Storage cluster. Using the monitor map instead of the Ceph configuration file avoids errors that could break the cluster, for example, typos in the Ceph configuration file when specifying a monitor address or port. Since monitors use monitor maps for discovery and they share monitor maps with clients and other Ceph daemons, the monitor map provides monitors with a strict guarantee that their consensus is valid. Strict consistency when applying updates to the monitor maps As with any other updates on the Ceph Monitor, changes to the monitor map always run through a distributed consensus algorithm called Paxos. The Ceph Monitors must agree on each update to the monitor map, such as adding or removing a Ceph Monitor, to ensure that each monitor in the quorum has the same version of the monitor map. Updates to the monitor map are incremental so that Ceph Monitors have the latest agreed-upon version and a set of previous versions. Maintaining history Maintaining a history enables a Ceph Monitor that has an older version of the monitor map to catch up with the current state of the Red Hat Ceph Storage cluster. If Ceph Monitors discovered each other through the Ceph configuration file instead of through the monitor map, it would introduce additional risks because the Ceph configuration files are not updated and distributed automatically. Ceph Monitors might inadvertently use an older Ceph configuration file, fail to recognize a Ceph Monitor, fall out of a quorum, or develop a situation where Paxos is not able to determine the current state of the system accurately.
3.6. Bootstrap the Ceph Monitor In most configuration and deployment cases, tools that deploy Ceph, such as cephadm , might help bootstrap the Ceph monitors by generating a monitor map for you. A Ceph monitor requires a few explicit settings: File System ID : The fsid is the unique identifier for your object store. Since you can run multiple storage clusters on the same hardware, you must specify the unique ID of the object store when bootstrapping a monitor. Deployment tools, such as cephadm , generate a file system identifier for you, but you can also specify the fsid manually. Monitor ID : A monitor ID is a unique ID assigned to each monitor within the cluster. By convention, the ID is set to the monitor's hostname. This option can be set using a deployment tool, using the ceph command, or in the Ceph configuration file. In the Ceph configuration file, sections are formed as follows: Example Keys : The monitor must have secret keys. Additional Resources For more information about cephadm and the Ceph orchestrator, see the Red Hat Ceph Storage Operations Guide . 3.7. Minimum configuration for a Ceph Monitor The bare minimum monitor settings for a Ceph Monitor in the Ceph configuration file include a host name for each monitor, if it is not configured for DNS, and the monitor address. The Ceph Monitors run on ports 6789 and 3300 by default. Important Do not edit the Ceph configuration file. Note This minimum configuration for monitors assumes that a deployment tool generates the fsid and the mon. key for you. You can use the following commands to set or read the storage cluster configuration options. ceph config dump - Dumps the entire configuration database for the whole storage cluster. ceph config generate-minimal-conf - Generates a minimal ceph.conf file. ceph config get WHO - Dumps the configuration for a specific daemon or a client, as stored in the Ceph Monitor's configuration database. ceph config set WHO OPTION VALUE - Sets the configuration option in the Ceph Monitor's configuration database. ceph config show WHO - Shows the reported running configuration for a running daemon. ceph config assimilate-conf -i INPUT_FILE -o OUTPUT_FILE - Ingests a configuration file from the input file and moves any valid options into the Ceph Monitors' configuration database. Here, the WHO parameter is the name of a configuration section or a Ceph daemon, OPTION is the name of a configuration option, and VALUE is the value to assign to that option. Important When a Ceph daemon needs a configuration option before it can retrieve the option from the configuration store, you can set the configuration by running the following command: This command adds text to all the daemon's ceph.conf files. It is a workaround and is NOT a recommended operation. 3.8. Unique identifier for Ceph Each Red Hat Ceph Storage cluster has a unique identifier ( fsid ). If specified, it usually appears under the [global] section of the configuration file. Deployment tools usually generate the fsid and store it in the monitor map, so the value may not appear in a configuration file. The fsid makes it possible to run daemons for multiple clusters on the same hardware. Note Do not set this value if you use a deployment tool that does it for you. 3.9. Ceph Monitor data store Ceph provides a default path where Ceph monitors store data. Important Red Hat recommends running Ceph monitors on separate drives from Ceph OSDs for optimal performance in a production Red Hat Ceph Storage cluster.
Note A dedicated /var/lib/ceph partition should be used for the MON database with a size between 50 and 100 GB. Ceph monitors call the fsync() function often, which can interfere with Ceph OSD workloads. Ceph monitors store their data as key-value pairs. Using a data store prevents recovering Ceph monitors from running corrupted versions through Paxos, and it enables multiple modification operations in one single atomic batch, among other advantages. Important Red Hat does not recommend changing the default data location. If you modify the default location, make it uniform across Ceph monitors by setting it in the [mon] section of the configuration file. 3.10. Ceph storage capacity When a Red Hat Ceph Storage cluster gets close to its maximum capacity (specified by the mon_osd_full_ratio parameter), Ceph prevents you from writing to or reading from Ceph OSDs as a safety measure to prevent data loss. Therefore, letting a production Red Hat Ceph Storage cluster approach its full ratio is not a good practice, because it sacrifices high availability. The default full ratio is .95 , or 95% of capacity. This is a very aggressive setting for a test cluster with a small number of OSDs. Tip When monitoring a cluster, be alert to warnings related to the nearfull ratio. Such warnings mean that the failure of one or more OSDs could result in a temporary service disruption. Consider adding more OSDs to increase storage capacity. A common scenario for test clusters involves a system administrator removing a Ceph OSD from the Red Hat Ceph Storage cluster to watch the cluster re-balance, then removing another Ceph OSD, and so on, until the Red Hat Ceph Storage cluster eventually reaches the full ratio and locks up. Important Red Hat recommends a bit of capacity planning even with a test cluster. Planning enables you to gauge how much spare capacity you will need in order to maintain high availability. Ideally, you want to plan for a series of Ceph OSD failures where the cluster can recover to an active + clean state without replacing those Ceph OSDs immediately. You can run a cluster in an active + degraded state, but this is not ideal for normal operating conditions. The following diagram depicts a simplistic Red Hat Ceph Storage cluster containing 33 Ceph Nodes with one Ceph OSD per host, each Ceph OSD Daemon reading from and writing to a 3 TB drive. So this example Red Hat Ceph Storage cluster has a maximum actual capacity of 99 TB. With a mon_osd_full_ratio of 0.95 , if the Red Hat Ceph Storage cluster falls to 5 TB of remaining capacity, the cluster will not allow Ceph clients to read or write data. So the Red Hat Ceph Storage cluster's operating capacity is 95 TB, not 99 TB. It is normal in such a cluster for one or two OSDs to fail. A less frequent but reasonable scenario involves a rack's router or power supply failing, which brings down multiple OSDs simultaneously, for example, OSDs 7-12. In such a scenario, you should still strive for a cluster that can remain operational and achieve an active + clean state, even if that means adding a few hosts with additional OSDs in short order. If your capacity utilization is too high, you might not lose data, but you could still sacrifice data availability while resolving an outage within a failure domain if capacity utilization of the cluster exceeds the full ratio. For this reason, Red Hat recommends at least some rough capacity planning.
Identify two numbers for your cluster: the number of OSDs and the total capacity of the cluster. To determine the average capacity of an OSD within a cluster, divide the total capacity of the cluster by the number of OSDs in the cluster. Consider multiplying that number by the number of OSDs you expect to fail simultaneously during normal operations (a relatively small number). Finally, multiply the capacity of the cluster by the full ratio to arrive at a maximum operating capacity. Then, subtract the amount of data on the OSDs you expect to fail to arrive at a reasonable full ratio. Repeat the foregoing process with a higher number of OSD failures (for example, a rack of OSDs) to arrive at a reasonable number for a near full ratio. 3.11. Ceph heartbeat Ceph monitors know about the cluster by requiring reports from each OSD, and by receiving reports from OSDs about the status of their neighboring OSDs. Ceph provides reasonable default settings for interaction between the monitor and OSD; however, you can modify them as needed. 3.12. Ceph Monitor synchronization role When you run a production cluster with multiple monitors, which is recommended, each monitor checks to see if a neighboring monitor has a more recent version of the cluster map. For example, a map in a neighboring monitor with one or more epoch numbers higher than the most current epoch in the map of the instant monitor. Periodically, one monitor in the cluster might fall behind the other monitors to the point where it must leave the quorum, synchronize to retrieve the most current information about the cluster, and then rejoin the quorum. Synchronization roles For the purposes of synchronization, monitors can assume one of three roles: Leader : The Leader is the first monitor to achieve the most recent Paxos version of the cluster map. Provider : The Provider is a monitor that has the most recent version of the cluster map, but was not the first to achieve the most recent version. Requester : The Requester is a monitor that has fallen behind the leader and must synchronize to retrieve the most recent information about the cluster before it can rejoin the quorum. These roles enable a leader to delegate synchronization duties to a provider, which prevents synchronization requests from overloading the leader and improves performance. In the following diagram, the requester has learned that it has fallen behind the other monitors. The requester asks the leader to synchronize, and the leader tells the requester to synchronize with a provider. Monitor synchronization Synchronization always occurs when a new monitor joins the cluster. During runtime operations, monitors can receive updates to the cluster map at different times. This means the leader and provider roles may migrate from one monitor to another. If this happens while synchronizing, for example, a provider falls behind the leader, the provider can terminate synchronization with a requester. Once synchronization is complete, Ceph requires trimming across the cluster. Trimming requires that the placement groups are active + clean . 3.13. Ceph time synchronization Ceph daemons pass critical messages to each other, which must be processed before daemons reach a timeout threshold. If the clocks in Ceph monitors are not synchronized, it can lead to a number of anomalies. For example: Daemons ignoring received messages, such as messages with outdated timestamps. Timeouts triggered too soon or too late when a message is not received in time.
Tip Install NTP on the Ceph monitor hosts to ensure that the monitor cluster operates with synchronized clocks. Clock drift may still be noticeable with NTP even though the discrepancy is not yet harmful. Ceph clock drift and clock skew warnings can get triggered even though NTP maintains a reasonable level of synchronization. Increasing the allowed clock drift may be tolerable under such circumstances. However, a number of factors, such as workload, network latency, configured overrides to default timeouts, and other synchronization options, can influence the level of acceptable clock drift without compromising Paxos guarantees. Additional Resources See the section on Ceph time synchronization for more details. See all the Red Hat Ceph Storage Monitor configuration options in Ceph Monitor configuration options for specific option descriptions and usage.
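The capacity arithmetic described earlier in this chapter can be sketched from the command line. The following is a minimal, hedged example rather than an official procedure: the 99 TB and 33 OSD figures simply reuse the example cluster above, the assumption of two OSDs failing simultaneously is illustrative, and the ceph osd dump filter assumes the ratio fields appear in its output as they do in recent releases.
ceph osd dump | grep -E 'full_ratio|nearfull_ratio'
echo "99 / 33" | bc -l
echo "99 * 0.95" | bc -l
echo "99 * 0.95 - 2 * 3" | bc -l
The first command shows the ratios currently in effect. The remaining lines compute the mean OSD capacity (3 TB), the maximum operating capacity at the default full ratio (94.05 TB), and the headroom left if two 3 TB OSDs fail (88.05 TB), which is a more realistic planning target than the raw 95% figure.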
[ "cephadm shell", "ceph config get mon", "[mon] mon_initial_members = a,b,c", "[mon.host1] [mon.host2]", "ceph cephadm set-extra-ceph-conf" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/configuration_guide/ceph-monitor-configuration
Chapter 26. OVN-Kubernetes network plugin
Chapter 26. OVN-Kubernetes network plugin 26.1. About the OVN-Kubernetes network plugin The OpenShift Container Platform cluster uses a virtualized network for pod and service networks. Part of Red Hat OpenShift Networking, the OVN-Kubernetes network plugin is the default network provider for OpenShift Container Platform. OVN-Kubernetes is based on Open Virtual Network (OVN) and provides an overlay-based networking implementation. A cluster that uses the OVN-Kubernetes plugin also runs Open vSwitch (OVS) on each node. OVN configures OVS on each node to implement the declared network configuration. Note OVN-Kubernetes is the default networking solution for OpenShift Container Platform and single-node OpenShift deployments. OVN-Kubernetes, which arose from the OVS project, uses many of the same constructs, such as OpenFlow rules, to determine how packets travel through the network. For more information, see the Open Virtual Network website . OVN-Kubernetes is a series of daemons for OVS that translate virtual network configurations into OpenFlow rules. OpenFlow is a protocol for communicating with network switches and routers. It provides a means for remotely controlling the flow of network traffic on a network device, allowing network administrators to configure, manage, and monitor the flow of network traffic. OVN-Kubernetes provides advanced functionality that is not available with OpenFlow alone. OVN supports distributed virtual routing, distributed logical switches, access control, DHCP, and DNS. OVN implements distributed virtual routing within logic flows, which equate to OpenFlow flows. For example, if a pod sends out a DHCP broadcast looking for an address, a logic flow rule matches that packet and responds by giving the pod a gateway, a DNS server, an IP address, and so on. OVN-Kubernetes runs a daemon on each node. There are daemon sets for the databases and for the OVN controller that run on every node. The OVN controller programs the Open vSwitch daemon on the nodes to support the network provider features: egress IPs, firewalls, routers, hybrid networking, IPsec encryption, IPv6, network policy, network policy logs, hardware offloading, and multicast. 26.1.1. OVN-Kubernetes purpose The OVN-Kubernetes network plugin is an open-source, fully-featured Kubernetes CNI plugin that uses Open Virtual Network (OVN) to manage network traffic flows. OVN is a community-developed, vendor-agnostic network virtualization solution. The OVN-Kubernetes network plugin: Uses OVN (Open Virtual Network) to manage network traffic flows. Implements Kubernetes network policy support, including ingress and egress rules. Uses the Geneve (Generic Network Virtualization Encapsulation) protocol rather than VXLAN to create an overlay network between nodes. The OVN-Kubernetes network plugin provides the following advantages over OpenShift SDN: Full support for IPv6 single-stack and IPv4/IPv6 dual-stack networking on supported platforms Support for hybrid clusters with both Linux and Microsoft Windows workloads Optional IPsec encryption of intra-cluster communications Offload of network data processing from host CPU to compatible network cards and data processing units (DPUs) 26.1.2. Supported network plugin feature matrix Red Hat OpenShift Networking offers two options for the network plugin: OpenShift SDN and OVN-Kubernetes.
The following table summarizes the current feature support for both network plugins: Table 26.1. Default CNI network plugin feature comparison Feature OpenShift SDN OVN-Kubernetes Egress IPs Supported Supported Egress firewall Supported Supported [1] Egress router Supported Supported [2] Hybrid networking Not supported Supported IPsec encryption for intra-cluster communication Not supported Supported IPv4 single-stack Supported Supported IPv6 single-stack Not supported Supported [3] IPv4/IPv6 dual-stack Not Supported Supported [4] IPv6/IPv4 dual-stack Not supported Supported [5] Kubernetes network policy Supported Supported Kubernetes network policy logs Not supported Supported Hardware offloading Not supported Supported Multicast Supported Supported Egress firewall is also known as egress network policy in OpenShift SDN. This is not the same as network policy egress. Egress router for OVN-Kubernetes supports only redirect mode. IPv6 single-stack networking on a bare-metal platform. IPv4/IPv6 dual-stack networking on bare-metal, VMware vSphere (installer-provisioned infrastructure installations only), IBM Power(R), IBM Z(R), and RHOSP platforms. IPv6/IPv4 dual-stack networking on bare-metal, VMware vSphere (installer-provisioned infrastructure installations only), and IBM Power(R) platforms. 26.1.3. OVN-Kubernetes IPv6 and dual-stack limitations The OVN-Kubernetes network plugin has the following limitations: For clusters configured for dual-stack networking, both IPv4 and IPv6 traffic must use the same network interface as the default gateway. If this requirement is not met, pods on the host in the ovnkube-node daemon set enter the CrashLoopBackOff state. If you display a pod with a command such as oc get pod -n openshift-ovn-kubernetes -l app=ovnkube-node -o yaml , the status field contains more than one message about the default gateway, as shown in the following output: I1006 16:09:50.985852 60651 helper_linux.go:73] Found default gateway interface br-ex 192.168.127.1 I1006 16:09:50.985923 60651 helper_linux.go:73] Found default gateway interface ens4 fe80::5054:ff:febe:bcd4 F1006 16:09:50.985939 60651 ovnkube.go:130] multiple gateway interfaces detected: br-ex ens4 The only resolution is to reconfigure the host networking so that both IP families use the same network interface for the default gateway. For clusters configured for dual-stack networking, both the IPv4 and IPv6 routing tables must contain the default gateway. If this requirement is not met, pods on the host in the ovnkube-node daemon set enter the CrashLoopBackOff state. If you display a pod with a command such as oc get pod -n openshift-ovn-kubernetes -l app=ovnkube-node -o yaml , the status field contains more than one message about the default gateway, as shown in the following output: I0512 19:07:17.589083 108432 helper_linux.go:74] Found default gateway interface br-ex 192.168.123.1 F0512 19:07:17.589141 108432 ovnkube.go:133] failed to get default gateway interface The only resolution is to reconfigure the host networking so that both IP families contain the default gateway. 26.1.4. Session affinity Session affinity is a feature that applies to Kubernetes Service objects. You can use session affinity if you want to ensure that each time you connect to a <service_VIP>:<Port>, the traffic is always load balanced to the same back end. For more information, including how to set session affinity based on a client's IP address, see Session affinity . 
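As a quick, hedged illustration of the session affinity behavior described above, you can set the standard Kubernetes sessionAffinity and sessionAffinityConfig fields on an existing Service with a patch such as the following; the service name is a placeholder and the 10800 second timeout is the Kubernetes default:
USD oc patch service <service_name> -p '{"spec":{"sessionAffinity":"ClientIP","sessionAffinityConfig":{"clientIP":{"timeoutSeconds":10800}}}}'
Lower the timeoutSeconds value if you want idle clients to be rebalanced to a different back end sooner.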
Stickiness timeout for session affinity The OVN-Kubernetes network plugin for OpenShift Container Platform calculates the stickiness timeout for a session from a client based on the last packet. For example, if you run a curl command 10 times, the sticky session timer starts from the tenth packet, not the first. As a result, if the client is continuously contacting the service, then the session never times out. The timeout starts when the service has not received a packet for the amount of time set by the timeoutSeconds parameter. Additional resources Configuring an egress firewall for a project About network policy Logging network policy events Enabling multicast for a project Configuring IPsec encryption Network [operator.openshift.io/v1] 26.2. OVN-Kubernetes architecture 26.2.1. Introduction to OVN-Kubernetes architecture The following diagram shows the OVN-Kubernetes architecture. Figure 26.1. OVN-Kubernetes architecture The key components are: Cloud Management System (CMS) - A platform-specific client for OVN that provides a CMS-specific plugin for OVN integration. The plugin translates the cloud management system's concept of the logical network configuration, stored in the CMS configuration database in a CMS-specific format, into an intermediate representation understood by OVN. OVN Northbound database ( nbdb ) container - Stores the logical network configuration passed by the CMS plugin. OVN Southbound database ( sbdb ) container - Stores the physical and logical network configuration state for the Open vSwitch (OVS) system on each node, including tables that bind them. OVN north daemon ( ovn-northd ) - This is the intermediary client between the nbdb container and the sbdb container. It translates the logical network configuration in terms of conventional network concepts, taken from the nbdb container, into logical data path flows in the sbdb container. The container name for the ovn-northd daemon is northd and it runs in the ovnkube-node pods. ovn-controller - This is the OVN agent that interacts with OVS and hypervisors for any information or update that is needed for the sbdb container. The ovn-controller reads logical flows from the sbdb container, translates them into OpenFlow flows, and sends them to the node's OVS daemon. The container name is ovn-controller and it runs in the ovnkube-node pods. The OVN northd, northbound database, and southbound database run on each node in the cluster and mostly contain and process information that is local to that node. The OVN northbound database has the logical network configuration passed down to it by the cloud management system (CMS). The OVN northbound database contains the current desired state of the network, presented as a collection of logical ports, logical switches, logical routers, and more. The ovn-northd ( northd container) connects to the OVN northbound database and the OVN southbound database. It translates the logical network configuration in terms of conventional network concepts, taken from the OVN northbound database, into logical data path flows in the OVN southbound database. The OVN southbound database has physical and logical representations of the network and binding tables that link them together. It contains the chassis information of the node and other constructs like remote transit switch ports that are required to connect to the other nodes in the cluster. The OVN southbound database also contains all the logic flows.
The logic flows are shared with the ovn-controller process that runs on each node and the ovn-controller turns those into OpenFlow rules to program Open vSwitch (OVS). The Kubernetes control plane nodes contain two ovnkube-control-plane pods on separate nodes, which perform the central IP address management (IPAM) allocation for each node in the cluster. At any given time, a single ovnkube-control-plane pod is the leader. 26.2.2. Listing all resources in the OVN-Kubernetes project Finding the resources and containers that run in the OVN-Kubernetes project is important to help you understand the OVN-Kubernetes networking implementation. Prerequisites Access to the cluster as a user with the cluster-admin role. The OpenShift CLI ( oc ) installed. Procedure Run the following command to get all resources, endpoints, and ConfigMaps in the OVN-Kubernetes project: USD oc get all,ep,cm -n openshift-ovn-kubernetes Example output Warning: apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+ NAME READY STATUS RESTARTS AGE pod/ovnkube-control-plane-65c6f55656-6d55h 2/2 Running 0 114m pod/ovnkube-control-plane-65c6f55656-fd7vw 2/2 Running 2 (104m ago) 114m pod/ovnkube-node-bcvts 8/8 Running 0 113m pod/ovnkube-node-drgvv 8/8 Running 0 113m pod/ovnkube-node-f2pxt 8/8 Running 0 113m pod/ovnkube-node-frqsb 8/8 Running 0 105m pod/ovnkube-node-lbxkk 8/8 Running 0 105m pod/ovnkube-node-tt7bx 8/8 Running 1 (102m ago) 105m NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/ovn-kubernetes-control-plane ClusterIP None <none> 9108/TCP 114m service/ovn-kubernetes-node ClusterIP None <none> 9103/TCP,9105/TCP 114m NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/ovnkube-node 6 6 6 6 6 beta.kubernetes.io/os=linux 114m NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/ovnkube-control-plane 3/3 3 3 114m NAME DESIRED CURRENT READY AGE replicaset.apps/ovnkube-control-plane-65c6f55656 3 3 3 114m NAME ENDPOINTS AGE endpoints/ovn-kubernetes-control-plane 10.0.0.3:9108,10.0.0.4:9108,10.0.0.5:9108 114m endpoints/ovn-kubernetes-node 10.0.0.3:9105,10.0.0.4:9105,10.0.0.5:9105 + 9 more... 114m NAME DATA AGE configmap/control-plane-status 1 113m configmap/kube-root-ca.crt 1 114m configmap/openshift-service-ca.crt 1 114m configmap/ovn-ca 1 114m configmap/ovnkube-config 1 114m configmap/signer-ca 1 114m There is one ovnkube-node pod for each node in the cluster. The ovnkube-config config map has the OpenShift Container Platform OVN-Kubernetes configurations. List all of the containers in the ovnkube-node pods by running the following command: USD oc get pods ovnkube-node-bcvts -o jsonpath='{.spec.containers[*].name}' -n openshift-ovn-kubernetes Expected output ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller The ovnkube-node pod is made up of several containers. It is responsible for hosting the northbound database ( nbdb container), the southbound database ( sbdb container), the north daemon ( northd container), ovn-controller and the ovnkube-controller container. The ovnkube-controller container watches for API objects like pods, egress IPs, namespaces, services, endpoints, egress firewall, and network policies. It is also responsible for allocating pod IP from the available subnet pool for that node. 
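To see the per-node IP address management described above reflected on a node, you can read the node annotation that OVN-Kubernetes maintains. This is a hedged sketch: the node name is a placeholder, and the k8s.ovn.org/node-subnets annotation key is taken from the OVN-Kubernetes project and might differ in older versions, in which case inspect the node annotations with oc describe node instead.
USD oc get node <node_name> -o jsonpath='{.metadata.annotations.k8s\.ovn\.org/node-subnets}'
The value lists the pod subnet assigned to that node, for example {"default":["10.128.2.0/23"]}, from which the ovnkube-controller allocates individual pod IP addresses.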
List all the containers in the ovnkube-control-plane pods by running the following command: USD oc get pods ovnkube-control-plane-65c6f55656-6d55h -o jsonpath='{.spec.containers[*].name}' -n openshift-ovn-kubernetes Expected output kube-rbac-proxy ovnkube-cluster-manager The ovnkube-control-plane pod has a container ( ovnkube-cluster-manager ) that resides on each OpenShift Container Platform node. The ovnkube-cluster-manager container allocates the pod subnet, transit switch subnet IP, and join switch subnet IP to each node in the cluster. The kube-rbac-proxy container monitors metrics for the ovnkube-cluster-manager container. 26.2.3. Listing the OVN-Kubernetes northbound database contents Each node is controlled by the ovnkube-controller container running in the ovnkube-node pod on that node. To understand the OVN logical networking entities, you need to examine the northbound database, which runs as a container inside the ovnkube-node pod on that node, to see what objects exist on the node you are interested in. Prerequisites Access to the cluster as a user with the cluster-admin role. The OpenShift CLI ( oc ) installed. Procedure To run ovn-nbctl or ovn-sbctl commands in a cluster, you must open a remote shell into the nbdb or sbdb containers on the relevant node. List pods by running the following command: USD oc get po -n openshift-ovn-kubernetes Example output NAME READY STATUS RESTARTS AGE ovnkube-control-plane-8444dff7f9-4lh9k 2/2 Running 0 27m ovnkube-control-plane-8444dff7f9-5rjh9 2/2 Running 0 27m ovnkube-node-55xs2 8/8 Running 0 26m ovnkube-node-7r84r 8/8 Running 0 16m ovnkube-node-bqq8p 8/8 Running 0 17m ovnkube-node-mkj4f 8/8 Running 0 26m ovnkube-node-mlr8k 8/8 Running 0 26m ovnkube-node-wqn2m 8/8 Running 0 16m Optional: To list the pods with node information, run the following command: USD oc get pods -n openshift-ovn-kubernetes -owide Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES ovnkube-control-plane-8444dff7f9-4lh9k 2/2 Running 0 27m 10.0.0.3 ci-ln-t487nnb-72292-mdcnq-master-1 <none> <none> ovnkube-control-plane-8444dff7f9-5rjh9 2/2 Running 0 27m 10.0.0.4 ci-ln-t487nnb-72292-mdcnq-master-2 <none> <none> ovnkube-node-55xs2 8/8 Running 0 26m 10.0.0.4 ci-ln-t487nnb-72292-mdcnq-master-2 <none> <none> ovnkube-node-7r84r 8/8 Running 0 17m 10.0.128.3 ci-ln-t487nnb-72292-mdcnq-worker-b-wbz7z <none> <none> ovnkube-node-bqq8p 8/8 Running 0 17m 10.0.128.2 ci-ln-t487nnb-72292-mdcnq-worker-a-lh7ms <none> <none> ovnkube-node-mkj4f 8/8 Running 0 27m 10.0.0.5 ci-ln-t487nnb-72292-mdcnq-master-0 <none> <none> ovnkube-node-mlr8k 8/8 Running 0 27m 10.0.0.3 ci-ln-t487nnb-72292-mdcnq-master-1 <none> <none> ovnkube-node-wqn2m 8/8 Running 0 17m 10.0.128.4 ci-ln-t487nnb-72292-mdcnq-worker-c-przlm <none> <none> Navigate into a pod to look at the northbound database by running the following command: USD oc rsh -c nbdb -n openshift-ovn-kubernetes ovnkube-node-55xs2 Run the following command to show all the objects in the northbound database: USD ovn-nbctl show The output is too long to list here. The list includes the NAT rules, logical switches, load balancers, and so on.
You can narrow down and focus on specific components by using some of the following optional commands: Run the following command to show the list of logical routers: USD oc exec -n openshift-ovn-kubernetes -it ovnkube-node-55xs2 \ -c northd -- ovn-nbctl lr-list Example output 45339f4f-7d0b-41d0-b5f9-9fca9ce40ce6 (GR_ci-ln-t487nnb-72292-mdcnq-master-2) 96a0a0f0-e7ed-4fec-8393-3195563de1b8 (ovn_cluster_router) Note From this output you can see there is router on each node plus an ovn_cluster_router . Run the following command to show the list of logical switches: USD oc exec -n openshift-ovn-kubernetes -it ovnkube-node-55xs2 \ -c nbdb -- ovn-nbctl ls-list Example output bdd7dc3d-d848-4a74-b293-cc15128ea614 (ci-ln-t487nnb-72292-mdcnq-master-2) b349292d-ee03-4914-935f-1940b6cb91e5 (ext_ci-ln-t487nnb-72292-mdcnq-master-2) 0aac0754-ea32-4e33-b086-35eeabf0a140 (join) 992509d7-2c3f-4432-88db-c179e43592e5 (transit_switch) Note From this output you can see there is an ext switch for each node plus switches with the node name itself and a join switch. Run the following command to show the list of load balancers: USD oc exec -n openshift-ovn-kubernetes -it ovnkube-node-55xs2 \ -c nbdb -- ovn-nbctl lb-list Example output UUID LB PROTO VIP IPs 7c84c673-ed2a-4436-9a1f-9bc5dd181eea Service_default/ tcp 172.30.0.1:443 10.0.0.3:6443,169.254.169.2:6443,10.0.0.5:6443 4d663fd9-ddc8-4271-b333-4c0e279e20bb Service_default/ tcp 172.30.0.1:443 10.0.0.3:6443,10.0.0.4:6443,10.0.0.5:6443 292eb07f-b82f-4962-868a-4f541d250bca Service_openshif tcp 172.30.105.247:443 10.129.0.12:8443 034b5a7f-bb6a-45e9-8e6d-573a82dc5ee3 Service_openshif tcp 172.30.192.38:443 10.0.0.3:10259,10.0.0.4:10259,10.0.0.5:10259 a68bb53e-be84-48df-bd38-bdd82fcd4026 Service_openshif tcp 172.30.161.125:8443 10.129.0.32:8443 6cc21b3d-2c54-4c94-8ff5-d8e017269c2e Service_openshif tcp 172.30.3.144:443 10.129.0.22:8443 37996ffd-7268-4862-a27f-61cd62e09c32 Service_openshif tcp 172.30.181.107:443 10.129.0.18:8443 81d4da3c-f811-411f-ae0c-bc6713d0861d Service_openshif tcp 172.30.228.23:443 10.129.0.29:8443 ac5a4f3b-b6ba-4ceb-82d0-d84f2c41306e Service_openshif tcp 172.30.14.240:9443 10.129.0.36:9443 c88979fb-1ef5-414b-90ac-43b579351ac9 Service_openshif tcp 172.30.231.192:9001 10.128.0.5:9001,10.128.2.5:9001,10.129.0.5:9001,10.129.2.4:9001,10.130.0.3:9001,10.131.0.3:9001 fcb0a3fb-4a77-4230-a84a-be45dce757e8 Service_openshif tcp 172.30.189.92:443 10.130.0.17:8440 67ef3e7b-ceb9-4bf0-8d96-b43bde4c9151 Service_openshif tcp 172.30.67.218:443 10.129.0.9:8443 d0032fba-7d5e-424a-af25-4ab9b5d46e81 Service_openshif tcp 172.30.102.137:2379 10.0.0.3:2379,10.0.0.4:2379,10.0.0.5:2379 tcp 172.30.102.137:9979 10.0.0.3:9979,10.0.0.4:9979,10.0.0.5:9979 7361c537-3eec-4e6c-bc0c-0522d182abd4 Service_openshif tcp 172.30.198.215:9001 10.0.0.3:9001,10.0.0.4:9001,10.0.0.5:9001,10.0.128.2:9001,10.0.128.3:9001,10.0.128.4:9001 0296c437-1259-410b-a6fd-81c310ad0af5 Service_openshif tcp 172.30.198.215:9001 10.0.0.3:9001,169.254.169.2:9001,10.0.0.5:9001,10.0.128.2:9001,10.0.128.3:9001,10.0.128.4:9001 5d5679f5-45b8-479d-9f7c-08b123c688b8 Service_openshif tcp 172.30.38.253:17698 10.128.0.52:17698,10.129.0.84:17698,10.130.0.60:17698 2adcbab4-d1c9-447d-9573-b5dc9f2efbfa Service_openshif tcp 172.30.148.52:443 10.0.0.4:9202,10.0.0.5:9202 tcp 172.30.148.52:444 10.0.0.4:9203,10.0.0.5:9203 tcp 172.30.148.52:445 10.0.0.4:9204,10.0.0.5:9204 tcp 172.30.148.52:446 10.0.0.4:9205,10.0.0.5:9205 2a33a6d7-af1b-4892-87cc-326a380b809b Service_openshif tcp 172.30.67.219:9091 10.129.2.16:9091,10.131.0.16:9091 tcp 
172.30.67.219:9092 10.129.2.16:9092,10.131.0.16:9092 tcp 172.30.67.219:9093 10.129.2.16:9093,10.131.0.16:9093 tcp 172.30.67.219:9094 10.129.2.16:9094,10.131.0.16:9094 f56f59d7-231a-4974-99b3-792e2741ec8d Service_openshif tcp 172.30.89.212:443 10.128.0.41:8443,10.129.0.68:8443,10.130.0.44:8443 08c2c6d7-d217-4b96-b5d8-c80c4e258116 Service_openshif tcp 172.30.102.137:2379 10.0.0.3:2379,169.254.169.2:2379,10.0.0.5:2379 tcp 172.30.102.137:9979 10.0.0.3:9979,169.254.169.2:9979,10.0.0.5:9979 60a69c56-fc6a-4de6-bd88-3f2af5ba5665 Service_openshif tcp 172.30.10.193:443 10.129.0.25:8443 ab1ef694-0826-4671-a22c-565fc2d282ec Service_openshif tcp 172.30.196.123:443 10.128.0.33:8443,10.129.0.64:8443,10.130.0.37:8443 b1fb34d3-0944-4770-9ee3-2683e7a630e2 Service_openshif tcp 172.30.158.93:8443 10.129.0.13:8443 95811c11-56e2-4877-be1e-c78ccb3a82a9 Service_openshif tcp 172.30.46.85:9001 10.130.0.16:9001 4baba1d1-b873-4535-884c-3f6fc07a50fd Service_openshif tcp 172.30.28.87:443 10.129.0.26:8443 6c2e1c90-f0ca-484e-8a8e-40e71442110a Service_openshif udp 172.30.0.10:53 10.128.0.13:5353,10.128.2.6:5353,10.129.0.39:5353,10.129.2.6:5353,10.130.0.11:5353,10.131.0.9:5353 Note From this truncated output you can see there are many OVN-Kubernetes load balancers. Load balancers in OVN-Kubernetes are representations of services. Run the following command to display the options available with the command ovn-nbctl : USD oc exec -n openshift-ovn-kubernetes -it ovnkube-node-55xs2 \ -c nbdb ovn-nbctl --help 26.2.4. Command line arguments for ovn-nbctl to examine northbound database contents The following table describes the command line arguments that can be used with ovn-nbctl to examine the contents of the northbound database. Note Open a remote shell in the pod you want to view the contents of and then run the ovn-nbctl commands. Table 26.2. Command line arguments to examine northbound database contents Argument Description ovn-nbctl show An overview of the northbound database contents as seen from a specific node. ovn-nbctl show <switch_or_router> Show the details associated with the specified switch or router. ovn-nbctl lr-list Show the logical routers. ovn-nbctl lrp-list <router> Use the router information from ovn-nbctl lr-list to show the router ports. ovn-nbctl lr-nat-list <router> Show network address translation details for the specified router. ovn-nbctl ls-list Show the logical switches. ovn-nbctl lsp-list <switch> Use the switch information from ovn-nbctl ls-list to show the switch ports. ovn-nbctl lsp-get-type <port> Get the type for the logical port. ovn-nbctl lb-list Show the load balancers. 26.2.5. Listing the OVN-Kubernetes southbound database contents Each node is controlled by the ovnkube-controller container running in the ovnkube-node pod on that node. To understand the OVN logical networking entities, you need to examine the southbound database, which runs as a container inside the ovnkube-node pod on that node, to see what objects exist on the node you are interested in. Prerequisites Access to the cluster as a user with the cluster-admin role. The OpenShift CLI ( oc ) installed.
Procedure To run ovn nbctl or sbctl commands in a cluster you must open a remote shell into the nbdb or sbdb containers on the relevant node List the pods by running the following command: USD oc get po -n openshift-ovn-kubernetes Example output NAME READY STATUS RESTARTS AGE ovnkube-control-plane-8444dff7f9-4lh9k 2/2 Running 0 27m ovnkube-control-plane-8444dff7f9-5rjh9 2/2 Running 0 27m ovnkube-node-55xs2 8/8 Running 0 26m ovnkube-node-7r84r 8/8 Running 0 16m ovnkube-node-bqq8p 8/8 Running 0 17m ovnkube-node-mkj4f 8/8 Running 0 26m ovnkube-node-mlr8k 8/8 Running 0 26m ovnkube-node-wqn2m 8/8 Running 0 16m Optional: To list the pods with node information, run the following command: USD oc get pods -n openshift-ovn-kubernetes -owide Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES ovnkube-control-plane-8444dff7f9-4lh9k 2/2 Running 0 27m 10.0.0.3 ci-ln-t487nnb-72292-mdcnq-master-1 <none> <none> ovnkube-control-plane-8444dff7f9-5rjh9 2/2 Running 0 27m 10.0.0.4 ci-ln-t487nnb-72292-mdcnq-master-2 <none> <none> ovnkube-node-55xs2 8/8 Running 0 26m 10.0.0.4 ci-ln-t487nnb-72292-mdcnq-master-2 <none> <none> ovnkube-node-7r84r 8/8 Running 0 17m 10.0.128.3 ci-ln-t487nnb-72292-mdcnq-worker-b-wbz7z <none> <none> ovnkube-node-bqq8p 8/8 Running 0 17m 10.0.128.2 ci-ln-t487nnb-72292-mdcnq-worker-a-lh7ms <none> <none> ovnkube-node-mkj4f 8/8 Running 0 27m 10.0.0.5 ci-ln-t487nnb-72292-mdcnq-master-0 <none> <none> ovnkube-node-mlr8k 8/8 Running 0 27m 10.0.0.3 ci-ln-t487nnb-72292-mdcnq-master-1 <none> <none> ovnkube-node-wqn2m 8/8 Running 0 17m 10.0.128.4 ci-ln-t487nnb-72292-mdcnq-worker-c-przlm <none> <none> Navigate into a pod to look at the southbound database: USD oc rsh -c sbdb -n openshift-ovn-kubernetes ovnkube-node-55xs2 Run the following command to show all the objects in the southbound database: USD ovn-sbctl show Example output Chassis "5db31703-35e9-413b-8cdf-69e7eecb41f7" hostname: ci-ln-9gp362t-72292-v2p94-worker-a-8bmwz Encap geneve ip: "10.0.128.4" options: {csum="true"} Port_Binding tstor-ci-ln-9gp362t-72292-v2p94-worker-a-8bmwz Chassis "070debed-99b7-4bce-b17d-17e720b7f8bc" hostname: ci-ln-9gp362t-72292-v2p94-worker-b-svmp6 Encap geneve ip: "10.0.128.2" options: {csum="true"} Port_Binding k8s-ci-ln-9gp362t-72292-v2p94-worker-b-svmp6 Port_Binding rtoe-GR_ci-ln-9gp362t-72292-v2p94-worker-b-svmp6 Port_Binding openshift-monitoring_alertmanager-main-1 Port_Binding rtoj-GR_ci-ln-9gp362t-72292-v2p94-worker-b-svmp6 Port_Binding etor-GR_ci-ln-9gp362t-72292-v2p94-worker-b-svmp6 Port_Binding cr-rtos-ci-ln-9gp362t-72292-v2p94-worker-b-svmp6 Port_Binding openshift-e2e-loki_loki-promtail-qcrcz Port_Binding jtor-GR_ci-ln-9gp362t-72292-v2p94-worker-b-svmp6 Port_Binding openshift-multus_network-metrics-daemon-mkd4t Port_Binding openshift-ingress-canary_ingress-canary-xtvj4 Port_Binding openshift-ingress_router-default-6c76cbc498-pvlqk Port_Binding openshift-dns_dns-default-zz582 Port_Binding openshift-monitoring_thanos-querier-57585899f5-lbf4f Port_Binding openshift-network-diagnostics_network-check-target-tn228 Port_Binding openshift-monitoring_prometheus-k8s-0 Port_Binding openshift-image-registry_image-registry-68899bd877-xqxjj Chassis "179ba069-0af1-401c-b044-e5ba90f60fea" hostname: ci-ln-9gp362t-72292-v2p94-master-0 Encap geneve ip: "10.0.0.5" options: {csum="true"} Port_Binding tstor-ci-ln-9gp362t-72292-v2p94-master-0 Chassis "68c954f2-5a76-47be-9e84-1cb13bd9dab9" hostname: ci-ln-9gp362t-72292-v2p94-worker-c-mjf9w Encap geneve ip: "10.0.128.3" options: {csum="true"} 
Port_Binding tstor-ci-ln-9gp362t-72292-v2p94-worker-c-mjf9w Chassis "2de65d9e-9abf-4b6e-a51d-a1e038b4d8af" hostname: ci-ln-9gp362t-72292-v2p94-master-2 Encap geneve ip: "10.0.0.4" options: {csum="true"} Port_Binding tstor-ci-ln-9gp362t-72292-v2p94-master-2 Chassis "1d371cb8-5e21-44fd-9025-c4b162cc4247" hostname: ci-ln-9gp362t-72292-v2p94-master-1 Encap geneve ip: "10.0.0.3" options: {csum="true"} Port_Binding tstor-ci-ln-9gp362t-72292-v2p94-master-1 This detailed output shows the chassis and the ports that are attached to the chassis, which in this case are all of the router ports and anything that runs like host networking. Pods communicate out to the wider network using source network address translation (SNAT). Their IP address is translated into the IP address of the node that the pod is running on and then sent out into the network. In addition to the chassis information, the southbound database has all the logic flows, and those logic flows are then sent to the ovn-controller running on each of the nodes. The ovn-controller translates the logic flows into OpenFlow rules and ultimately programs Open vSwitch so that your pods can then follow OpenFlow rules and make it out onto the network. Run the following command to display the options available with the command ovn-sbctl : USD oc exec -n openshift-ovn-kubernetes -it ovnkube-node-55xs2 \ -c sbdb ovn-sbctl --help 26.2.6. Command line arguments for ovn-sbctl to examine southbound database contents The following table describes the command line arguments that can be used with ovn-sbctl to examine the contents of the southbound database. Note Open a remote shell in the pod you wish to view the contents of and then run the ovn-sbctl commands. Table 26.3. Command line arguments to examine southbound database contents Argument Description ovn-sbctl show An overview of the southbound database contents as seen from a specific node. ovn-sbctl list Port_Binding <port> List the contents of the southbound database for the specified port . ovn-sbctl dump-flows List the logical flows. 26.2.7. OVN-Kubernetes logical architecture OVN is a network virtualization solution. It creates logical switches and routers. These switches and routers are interconnected to create any network topology. When you run ovnkube-trace with the log level set to 2 or 5, the OVN-Kubernetes logical components are exposed. The following diagram shows how the routers and switches are connected in OpenShift Container Platform. Figure 26.2. OVN-Kubernetes router and switch components The key components involved in packet processing are: Gateway routers Gateway routers, sometimes called L3 gateway routers, are typically used between the distributed routers and the physical network. Gateway routers, including their logical patch ports, are bound to a physical location (not distributed), or chassis. The patch ports on this router are known as l3gateway ports in the ovn-southbound database ( ovn-sbdb ). Distributed logical routers Distributed logical routers and the logical switches behind them, to which virtual machines and containers attach, effectively reside on each hypervisor. Join local switch Join local switches are used to connect the distributed router and gateway routers. This reduces the number of IP addresses needed on the distributed router. Logical switches with patch ports Logical switches with patch ports are used to virtualize the network stack. They connect remote logical ports through tunnels.
Logical switches with localnet ports Logical switches with localnet ports are used to connect OVN to the physical network. They connect remote logical ports by bridging the packets to directly connected physical L2 segments using localnet ports. Patch ports Patch ports represent connectivity between logical switches and logical routers and between peer logical routers. A single connection has a pair of patch ports at each such point of connectivity, one on each side. l3gateway ports l3gateway ports are the port binding entries in the ovn-sbdb for logical patch ports used in the gateway routers. They are called l3gateway ports rather than patch ports just to portray the fact that these ports are bound to a chassis just like the gateway router itself. localnet ports localnet ports are present on the bridged logical switches that allows a connection to a locally accessible network from each ovn-controller instance. This helps model the direct connectivity to the physical network from the logical switches. A logical switch can only have a single localnet port attached to it. 26.2.7.1. Installing network-tools on local host Install network-tools on your local host to make a collection of tools available for debugging OpenShift Container Platform cluster network issues. Procedure Clone the network-tools repository onto your workstation with the following command: USD git clone [email protected]:openshift/network-tools.git Change into the directory for the repository you just cloned: USD cd network-tools Optional: List all available commands: USD ./debug-scripts/network-tools -h 26.2.7.2. Running network-tools Get information about the logical switches and routers by running network-tools . Prerequisites You installed the OpenShift CLI ( oc ). You are logged in to the cluster as a user with cluster-admin privileges. You have installed network-tools on local host. Procedure List the routers by running the following command: USD ./debug-scripts/network-tools ovn-db-run-command ovn-nbctl lr-list Example output 944a7b53-7948-4ad2-a494-82b55eeccf87 (GR_ci-ln-54932yb-72292-kd676-worker-c-rzj99) 84bd4a4c-4b0b-4a47-b0cf-a2c32709fc53 (ovn_cluster_router) List the localnet ports by running the following command: USD ./debug-scripts/network-tools ovn-db-run-command \ ovn-sbctl find Port_Binding type=localnet Example output _uuid : d05298f5-805b-4838-9224-1211afc2f199 additional_chassis : [] additional_encap : [] chassis : [] datapath : f3c2c959-743b-4037-854d-26627902597c encap : [] external_ids : {} gateway_chassis : [] ha_chassis_group : [] logical_port : br-ex_ci-ln-54932yb-72292-kd676-worker-c-rzj99 mac : [unknown] mirror_rules : [] nat_addresses : [] options : {network_name=physnet} parent_port : [] port_security : [] requested_additional_chassis: [] requested_chassis : [] tag : [] tunnel_key : 2 type : localnet up : false virtual_parent : [] [...] 
List the l3gateway ports by running the following command: USD ./debug-scripts/network-tools ovn-db-run-command \ ovn-sbctl find Port_Binding type=l3gateway Example output _uuid : 5207a1f3-1cf3-42f1-83e9-387bbb06b03c additional_chassis : [] additional_encap : [] chassis : ca6eb600-3a10-4372-a83e-e0d957c4cd92 datapath : f3c2c959-743b-4037-854d-26627902597c encap : [] external_ids : {} gateway_chassis : [] ha_chassis_group : [] logical_port : etor-GR_ci-ln-54932yb-72292-kd676-worker-c-rzj99 mac : ["42:01:0a:00:80:04"] mirror_rules : [] nat_addresses : ["42:01:0a:00:80:04 10.0.128.4"] options : {l3gateway-chassis="84737c36-b383-4c83-92c5-2bd5b3c7e772", peer=rtoe-GR_ci-ln-54932yb-72292-kd676-worker-c-rzj99} parent_port : [] port_security : [] requested_additional_chassis: [] requested_chassis : [] tag : [] tunnel_key : 1 type : l3gateway up : true virtual_parent : [] _uuid : 6088d647-84f2-43f2-b53f-c9d379042679 additional_chassis : [] additional_encap : [] chassis : ca6eb600-3a10-4372-a83e-e0d957c4cd92 datapath : dc9cea00-d94a-41b8-bdb0-89d42d13aa2e encap : [] external_ids : {} gateway_chassis : [] ha_chassis_group : [] logical_port : jtor-GR_ci-ln-54932yb-72292-kd676-worker-c-rzj99 mac : [router] mirror_rules : [] nat_addresses : [] options : {l3gateway-chassis="84737c36-b383-4c83-92c5-2bd5b3c7e772", peer=rtoj-GR_ci-ln-54932yb-72292-kd676-worker-c-rzj99} parent_port : [] port_security : [] requested_additional_chassis: [] requested_chassis : [] tag : [] tunnel_key : 2 type : l3gateway up : true virtual_parent : [] [...] List the patch ports by running the following command: USD ./debug-scripts/network-tools ovn-db-run-command \ ovn-sbctl find Port_Binding type=patch Example output _uuid : 785fb8b6-ee5a-4792-a415-5b1cb855dac2 additional_chassis : [] additional_encap : [] chassis : [] datapath : f1ddd1cc-dc0d-43b4-90ca-12651305acec encap : [] external_ids : {} gateway_chassis : [] ha_chassis_group : [] logical_port : stor-ci-ln-54932yb-72292-kd676-worker-c-rzj99 mac : [router] mirror_rules : [] nat_addresses : ["0a:58:0a:80:02:01 10.128.2.1 is_chassis_resident(\"cr-rtos-ci-ln-54932yb-72292-kd676-worker-c-rzj99\")"] options : {peer=rtos-ci-ln-54932yb-72292-kd676-worker-c-rzj99} parent_port : [] port_security : [] requested_additional_chassis: [] requested_chassis : [] tag : [] tunnel_key : 1 type : patch up : false virtual_parent : [] _uuid : c01ff587-21a5-40b4-8244-4cd0425e5d9a additional_chassis : [] additional_encap : [] chassis : [] datapath : f6795586-bf92-4f84-9222-efe4ac6a7734 encap : [] external_ids : {} gateway_chassis : [] ha_chassis_group : [] logical_port : rtoj-ovn_cluster_router mac : ["0a:58:64:40:00:01 100.64.0.1/16"] mirror_rules : [] nat_addresses : [] options : {peer=jtor-ovn_cluster_router} parent_port : [] port_security : [] requested_additional_chassis: [] requested_chassis : [] tag : [] tunnel_key : 1 type : patch up : false virtual_parent : [] [...] 26.2.8. Additional resources Tracing Openflow with ovnkube-trace OVN architecture ovn-nbctl linux manual page ovn-sbctl linux manual page 26.3. Troubleshooting OVN-Kubernetes OVN-Kubernetes has many sources of built-in health checks and logs. Follow the instructions in these sections to examine your cluster. If a support case is necessary, follow the support guide to collect additional information through a must-gather . Only use the -- gather_network_logs when instructed by support. 26.3.1. 
Monitoring OVN-Kubernetes health by using readiness probes The ovnkube-control-plane and ovnkube-node pods have containers configured with readiness probes. Prerequisites Access to the OpenShift CLI ( oc ). You have access to the cluster with cluster-admin privileges. You have installed jq . Procedure Review the details of the ovnkube-node readiness probe by running the following command: USD oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-node \ -o json | jq '.items[0].spec.containers[] | .name,.readinessProbe' The readiness probe for the northbound and southbound database containers in the ovnkube-node pod checks for the health of the databases and the ovnkube-controller container. The ovnkube-controller container in the ovnkube-node pod has a readiness probe to verify the presence of the OVN-Kubernetes CNI configuration file, the absence of which would indicate that the pod is not running or is not ready to accept requests to configure pods. Show all events including the probe failures, for the namespace by using the following command: USD oc get events -n openshift-ovn-kubernetes Show the events for just a specific pod: USD oc describe pod ovnkube-node-9lqfk -n openshift-ovn-kubernetes Show the messages and statuses from the cluster network operator: USD oc get co/network -o json | jq '.status.conditions[]' Show the ready status of each container in ovnkube-node pods by running the following script: USD for p in USD(oc get pods --selector app=ovnkube-node -n openshift-ovn-kubernetes \ -o jsonpath='{range.items[*]}{" "}{.metadata.name}'); do echo === USDp ===; \ oc get pods -n openshift-ovn-kubernetes USDp -o json | jq '.status.containerStatuses[] | .name, .ready'; \ done Note The expectation is all container statuses are reporting as true . Failure of a readiness probe sets the status to false . Additional resources Monitoring application health by using health checks 26.3.2. Viewing OVN-Kubernetes alerts in the console The Alerting UI provides detailed information about alerts and their governing alerting rules and silences. Prerequisites You have access to the cluster as a developer or as a user with view permissions for the project that you are viewing metrics for. Procedure (UI) In the Administrator perspective, select Observe Alerting . The three main pages in the Alerting UI in this perspective are the Alerts , Silences , and Alerting Rules pages. View the rules for OVN-Kubernetes alerts by selecting Observe Alerting Alerting Rules . 26.3.3. Viewing OVN-Kubernetes alerts in the CLI You can get information about alerts and their governing alerting rules and silences from the command line. Prerequisites Access to the cluster as a user with the cluster-admin role. The OpenShift CLI ( oc ) installed. You have installed jq . Procedure View active or firing alerts by running the following commands. 
Set the alert manager route environment variable by running the following command: USD ALERT_MANAGER=USD(oc get route alertmanager-main -n openshift-monitoring \ -o jsonpath='{@.spec.host}') Issue a curl request to the alert manager route API by running the following command, replacing USDALERT_MANAGER with the URL of your Alertmanager instance: USD curl -s -k -H "Authorization: Bearer USD(oc create token prometheus-k8s -n openshift-monitoring)" https://USDALERT_MANAGER/api/v1/alerts | jq '.data[] | "\(.labels.severity) \(.labels.alertname) \(.labels.pod) \(.labels.container) \(.labels.endpoint) \(.labels.instance)"' View alerting rules by running the following command: USD oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -s 'http://localhost:9090/api/v1/rules' | jq '.data.groups[].rules[] | select(((.name|contains("ovn")) or (.name|contains("OVN")) or (.name|contains("Ovn")) or (.name|contains("North")) or (.name|contains("South"))) and .type=="alerting")' 26.3.4. Viewing the OVN-Kubernetes logs using the CLI You can view the logs for each of the pods in the ovnkube-master and ovnkube-node pods using the OpenShift CLI ( oc ). Prerequisites Access to the cluster as a user with the cluster-admin role. Access to the OpenShift CLI ( oc ). You have installed jq . Procedure View the log for a specific pod: USD oc logs -f <pod_name> -c <container_name> -n <namespace> where: -f Optional: Specifies that the output follows what is being written into the logs. <pod_name> Specifies the name of the pod. <container_name> Optional: Specifies the name of a container. When a pod has more than one container, you must specify the container name. <namespace> Specifies the namespace the pod is running in. For example: USD oc logs ovnkube-node-5dx44 -n openshift-ovn-kubernetes USD oc logs -f ovnkube-node-5dx44 -c ovnkube-controller -n openshift-ovn-kubernetes The contents of log files are printed out. Examine the most recent entries in all the containers in the ovnkube-node pods: USD for p in USD(oc get pods --selector app=ovnkube-node -n openshift-ovn-kubernetes \ -o jsonpath='{range.items[*]}{" "}{.metadata.name}'); \ do echo === USDp ===; for container in USD(oc get pods -n openshift-ovn-kubernetes USDp \ -o json | jq -r '.status.containerStatuses[] | .name');do echo ---USDcontainer---; \ oc logs -c USDcontainer USDp -n openshift-ovn-kubernetes --tail=5; done; done View the last 5 lines of every log in every container in an ovnkube-node pod using the following command: USD oc logs -l app=ovnkube-node -n openshift-ovn-kubernetes --all-containers --tail 5 26.3.5. Viewing the OVN-Kubernetes logs using the web console You can view the logs for each of the pods in the ovnkube-master and ovnkube-node pods in the web console. Prerequisites Access to the OpenShift CLI ( oc ). Procedure In the OpenShift Container Platform console, navigate to Workloads Pods or navigate to the pod through the resource you want to investigate. Select the openshift-ovn-kubernetes project from the drop-down menu. Click the name of the pod you want to investigate. Click Logs . By default, for the ovnkube-master, the logs associated with the northd container are displayed. Use the drop-down menu to select logs for each container in turn. 26.3.5.1. Changing the OVN-Kubernetes log levels The default log level for OVN-Kubernetes is 4. To debug OVN-Kubernetes, set the log level to 5. Follow this procedure to increase the log level of OVN-Kubernetes to help you debug an issue.
Prerequisites You have access to the cluster with cluster-admin privileges. You have access to the OpenShift Container Platform web console. Procedure Run the following command to get detailed information for all pods in the OVN-Kubernetes project: USD oc get po -o wide -n openshift-ovn-kubernetes Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES ovnkube-control-plane-65497d4548-9ptdr 2/2 Running 2 (128m ago) 147m 10.0.0.3 ci-ln-3njdr9b-72292-5nwkp-master-0 <none> <none> ovnkube-control-plane-65497d4548-j6zfk 2/2 Running 0 147m 10.0.0.5 ci-ln-3njdr9b-72292-5nwkp-master-2 <none> <none> ovnkube-node-5dx44 8/8 Running 0 146m 10.0.0.3 ci-ln-3njdr9b-72292-5nwkp-master-0 <none> <none> ovnkube-node-dpfn4 8/8 Running 0 146m 10.0.0.4 ci-ln-3njdr9b-72292-5nwkp-master-1 <none> <none> ovnkube-node-kwc9l 8/8 Running 0 134m 10.0.128.2 ci-ln-3njdr9b-72292-5nwkp-worker-a-2fjcj <none> <none> ovnkube-node-mcrhl 8/8 Running 0 134m 10.0.128.4 ci-ln-3njdr9b-72292-5nwkp-worker-c-v9x5v <none> <none> ovnkube-node-nsct4 8/8 Running 0 146m 10.0.0.5 ci-ln-3njdr9b-72292-5nwkp-master-2 <none> <none> ovnkube-node-zrj9f 8/8 Running 0 134m 10.0.128.3 ci-ln-3njdr9b-72292-5nwkp-worker-b-v78h7 <none> <none> Create a ConfigMap file similar to the following example and use a filename such as env-overrides.yaml : Example ConfigMap file kind: ConfigMap apiVersion: v1 metadata: name: env-overrides namespace: openshift-ovn-kubernetes data: ci-ln-3njdr9b-72292-5nwkp-master-0: | 1 # This sets the log level for the ovn-kubernetes node process: OVN_KUBE_LOG_LEVEL=5 # You might also/instead want to enable debug logging for ovn-controller: OVN_LOG_LEVEL=dbg ci-ln-3njdr9b-72292-5nwkp-master-2: | # This sets the log level for the ovn-kubernetes node process: OVN_KUBE_LOG_LEVEL=5 # You might also/instead want to enable debug logging for ovn-controller: OVN_LOG_LEVEL=dbg _master: | 2 # This sets the log level for the ovn-kubernetes master process as well as the ovn-dbchecker: OVN_KUBE_LOG_LEVEL=5 # You might also/instead want to enable debug logging for northd, nbdb and sbdb on all masters: OVN_LOG_LEVEL=dbg 1 Specify the name of the node you want to set the debug log level on. 2 Specify _master to set the log levels of ovnkube-master components. Apply the ConfigMap file by using the following command: USD oc apply -n openshift-ovn-kubernetes -f env-overrides.yaml Example output configmap/env-overrides.yaml created Restart the ovnkube pods to apply the new log level by using the following commands: USD oc delete pod -n openshift-ovn-kubernetes \ --field-selector spec.nodeName=ci-ln-3njdr9b-72292-5nwkp-master-0 -l app=ovnkube-node USD oc delete pod -n openshift-ovn-kubernetes \ --field-selector spec.nodeName=ci-ln-3njdr9b-72292-5nwkp-master-2 -l app=ovnkube-node USD oc delete pod -n openshift-ovn-kubernetes -l app=ovnkube-node To verify that the `ConfigMap`file has been applied to all nodes for a specific pod, run the following command: USD oc logs -n openshift-ovn-kubernetes --all-containers --prefix ovnkube-node-<xxxx> | grep -E -m 10 '(Logging config:|vconsole|DBG)' where: <XXXX> Specifies the random sequence of letters for a pod from the step. 
Example output [pod/ovnkube-node-2cpjc/sbdb] + exec /usr/share/ovn/scripts/ovn-ctl --no-monitor '--ovn-sb-log=-vconsole:info -vfile:off -vPATTERN:console:%D{%Y-%m-%dT%H:%M:%S.###Z}|%05N|%c%T|%p|%m' run_sb_ovsdb [pod/ovnkube-node-2cpjc/ovnkube-controller] I1012 14:39:59.984506 35767 config.go:2247] Logging config: {File: CNIFile:/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log LibovsdbFile:/var/log/ovnkube/libovsdb.log Level:5 LogFileMaxSize:100 LogFileMaxBackups:5 LogFileMaxAge:0 ACLLoggingRateLimit:20} [pod/ovnkube-node-2cpjc/northd] + exec ovn-northd --no-chdir -vconsole:info -vfile:off '-vPATTERN:console:%D{%Y-%m-%dT%H:%M:%S.###Z}|%05N|%c%T|%p|%m' --pidfile /var/run/ovn/ovn-northd.pid --n-threads=1 [pod/ovnkube-node-2cpjc/nbdb] + exec /usr/share/ovn/scripts/ovn-ctl --no-monitor '--ovn-nb-log=-vconsole:info -vfile:off -vPATTERN:console:%D{%Y-%m-%dT%H:%M:%S.###Z}|%05N|%c%T|%p|%m' run_nb_ovsdb [pod/ovnkube-node-2cpjc/ovn-controller] 2023-10-12T14:39:54.552Z|00002|hmap|DBG|lib/shash.c:114: 1 bucket with 6+ nodes, including 1 bucket with 6 nodes (32 nodes total across 32 buckets) [pod/ovnkube-node-2cpjc/ovn-controller] 2023-10-12T14:39:54.553Z|00003|hmap|DBG|lib/shash.c:114: 1 bucket with 6+ nodes, including 1 bucket with 6 nodes (64 nodes total across 64 buckets) [pod/ovnkube-node-2cpjc/ovn-controller] 2023-10-12T14:39:54.553Z|00004|hmap|DBG|lib/shash.c:114: 1 bucket with 6+ nodes, including 1 bucket with 7 nodes (32 nodes total across 32 buckets) [pod/ovnkube-node-2cpjc/ovn-controller] 2023-10-12T14:39:54.553Z|00005|reconnect|DBG|unix:/var/run/openvswitch/db.sock: entering BACKOFF [pod/ovnkube-node-2cpjc/ovn-controller] 2023-10-12T14:39:54.553Z|00007|reconnect|DBG|unix:/var/run/openvswitch/db.sock: entering CONNECTING [pod/ovnkube-node-2cpjc/ovn-controller] 2023-10-12T14:39:54.553Z|00008|ovsdb_cs|DBG|unix:/var/run/openvswitch/db.sock: SERVER_SCHEMA_REQUESTED -> SERVER_SCHEMA_REQUESTED at lib/ovsdb-cs.c:423 Optional: Check the ConfigMap file has been applied by running the following command: for f in USD(oc -n openshift-ovn-kubernetes get po -l 'app=ovnkube-node' --no-headers -o custom-columns=N:.metadata.name) ; do echo "---- USDf ----" ; oc -n openshift-ovn-kubernetes exec -c ovnkube-controller USDf -- pgrep -a -f init-ovnkube-controller | grep -P -o '^.*loglevel\s+\d' ; done Example output ---- ovnkube-node-2dt57 ---- 60981 /usr/bin/ovnkube --init-ovnkube-controller xpst8-worker-c-vmh5n.c.openshift-qe.internal --init-node xpst8-worker-c-vmh5n.c.openshift-qe.internal --config-file=/run/ovnkube-config/ovnkube.conf --ovn-empty-lb-events --loglevel 4 ---- ovnkube-node-4zznh ---- 178034 /usr/bin/ovnkube --init-ovnkube-controller xpst8-master-2.c.openshift-qe.internal --init-node xpst8-master-2.c.openshift-qe.internal --config-file=/run/ovnkube-config/ovnkube.conf --ovn-empty-lb-events --loglevel 4 ---- ovnkube-node-548sx ---- 77499 /usr/bin/ovnkube --init-ovnkube-controller xpst8-worker-a-fjtnb.c.openshift-qe.internal --init-node xpst8-worker-a-fjtnb.c.openshift-qe.internal --config-file=/run/ovnkube-config/ovnkube.conf --ovn-empty-lb-events --loglevel 4 ---- ovnkube-node-6btrf ---- 73781 /usr/bin/ovnkube --init-ovnkube-controller xpst8-worker-b-p8rww.c.openshift-qe.internal --init-node xpst8-worker-b-p8rww.c.openshift-qe.internal --config-file=/run/ovnkube-config/ovnkube.conf --ovn-empty-lb-events --loglevel 4 ---- ovnkube-node-fkc9r ---- 130707 /usr/bin/ovnkube --init-ovnkube-controller xpst8-master-0.c.openshift-qe.internal --init-node xpst8-master-0.c.openshift-qe.internal 
--config-file=/run/ovnkube-config/ovnkube.conf --ovn-empty-lb-events --loglevel 5 ---- ovnkube-node-tk9l4 ---- 181328 /usr/bin/ovnkube --init-ovnkube-controller xpst8-master-1.c.openshift-qe.internal --init-node xpst8-master-1.c.openshift-qe.internal --config-file=/run/ovnkube-config/ovnkube.conf --ovn-empty-lb-events --loglevel 4 26.3.6. Checking the OVN-Kubernetes pod network connectivity The connectivity check controller, in OpenShift Container Platform 4.10 and later, orchestrates connection verification checks in your cluster. These include Kubernetes API, OpenShift API and individual nodes. The results for the connection tests are stored in PodNetworkConnectivity objects in the openshift-network-diagnostics namespace. Connection tests are performed every minute in parallel. Prerequisites Access to the OpenShift CLI ( oc ). Access to the cluster as a user with the cluster-admin role. You have installed jq . Procedure To list the current PodNetworkConnectivityCheck objects, enter the following command: USD oc get podnetworkconnectivitychecks -n openshift-network-diagnostics View the most recent success for each connection object by using the following command: USD oc get podnetworkconnectivitychecks -n openshift-network-diagnostics \ -o json | jq '.items[]| .spec.targetEndpoint,.status.successes[0]' View the most recent failures for each connection object by using the following command: USD oc get podnetworkconnectivitychecks -n openshift-network-diagnostics \ -o json | jq '.items[]| .spec.targetEndpoint,.status.failures[0]' View the most recent outages for each connection object by using the following command: USD oc get podnetworkconnectivitychecks -n openshift-network-diagnostics \ -o json | jq '.items[]| .spec.targetEndpoint,.status.outages[0]' The connectivity check controller also logs metrics from these checks into Prometheus. View all the metrics by running the following command: USD oc exec prometheus-k8s-0 -n openshift-monitoring -- \ promtool query instant http://localhost:9090 \ '{component="openshift-network-diagnostics"}' View the latency between the source pod and the openshift api service for the last 5 minutes: USD oc exec prometheus-k8s-0 -n openshift-monitoring -- \ promtool query instant http://localhost:9090 \ '{component="openshift-network-diagnostics"}' 26.3.7. Additional resources Gathering data about your cluster for Red Hat Support Implementation of connection health checks Verifying network connectivity for an endpoint 26.4. OVN-Kubernetes network policy Important The AdminNetworkPolicy resource is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Kubernetes offers two features that users can use to enforce network security. One feature that allows users to enforce network policy is the NetworkPolicy API that is designed mainly for application developers and namespace tenants to protect their namespaces by creating namespace-scoped policies. For more information, see About network policy . 
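For reference, a namespace-scoped NetworkPolicy can be as small as the following sketch, which allows ingress to all pods in a namespace only from other pods in the same namespace. The allow-same-namespace name and the example-tenant namespace are illustrative placeholders, not values taken from this documentation:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace   # illustrative name
  namespace: example-tenant    # illustrative namespace
spec:
  podSelector: {}              # selects all pods in the namespace
  ingress:
  - from:
    - podSelector: {}          # allows traffic only from pods in the same namespace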
The second feature is AdminNetworkPolicy, which consists of two APIs: the AdminNetworkPolicy (ANP) API and the BaselineAdminNetworkPolicy (BANP) API. ANP and BANP are designed for cluster and network administrators to protect their entire cluster by creating cluster-scoped policies. Cluster administrators can use ANPs to enforce non-overridable policies that take precedence over NetworkPolicy objects. Administrators can use BANP to set up and enforce optional cluster-scoped network policy rules that are overridable by users using NetworkPolicy objects if need be. When used together, ANP and BANP can create a multi-tenancy policy that administrators can use to secure their cluster. OVN-Kubernetes CNI in OpenShift Container Platform implements these network policies using Access Control List (ACL) tiers to evaluate and apply them. ACLs are evaluated in descending order from Tier 1 to Tier 3. Tier 1 evaluates AdminNetworkPolicy (ANP) objects. Tier 2 evaluates NetworkPolicy objects. Tier 3 evaluates BaselineAdminNetworkPolicy (BANP) objects. Figure 26.3. OVN-Kubernetes Access Control List (ACL) If traffic matches an ANP rule, the rules in that ANP will be evaluated first. If the match is an ANP allow or deny rule, any existing NetworkPolicies and BaselineAdminNetworkPolicy (BANP) in the cluster will be intentionally skipped from evaluation. If the match is an ANP pass rule, then evaluation moves from tier 1 of the ACLs to tier 2, where the NetworkPolicy objects are evaluated. 26.4.1. AdminNetworkPolicy An AdminNetworkPolicy (ANP) is a cluster-scoped custom resource definition (CRD). As an OpenShift Container Platform administrator, you can use ANP to secure your network by creating network policies before creating namespaces. Additionally, you can create network policies on a cluster-scoped level that is non-overridable by NetworkPolicy objects. The key difference between AdminNetworkPolicy and NetworkPolicy objects is that the former is for administrators and is cluster scoped while the latter is for tenant owners and is namespace scoped. An ANP allows administrators to specify the following: A priority value that determines the order of its evaluation. The lower the value, the higher the precedence. A subject that consists of a set of namespaces or a namespace. A list of ingress rules to be applied for all ingress traffic towards the subject . A list of egress rules to be applied for all egress traffic from the subject . Note The AdminNetworkPolicy resource is a TechnologyPreviewNoUpgrade feature that can be enabled on test clusters that are not in production. For more information on feature gates and TechnologyPreviewNoUpgrade features, see "Enabling features using feature gates" in the "Additional resources" of this section. AdminNetworkPolicy example Example 26.1. Example YAML file for an ANP apiVersion: policy.networking.k8s.io/v1alpha1 kind: AdminNetworkPolicy metadata: name: sample-anp-deny-pass-rules 1 spec: priority: 50 2 subject: namespaces: matchLabels: kubernetes.io/metadata.name: example.name 3 ingress: 4 - name: "deny-all-ingress-tenant-1" 5 action: "Deny" from: - pods: namespaces: 6 namespaceSelector: matchLabels: custom-anp: tenant-1 podSelector: matchLabels: custom-anp: tenant-1 7 egress: 8 - name: "pass-all-egress-to-tenant-1" action: "Pass" to: - pods: namespaces: namespaceSelector: matchLabels: custom-anp: tenant-1 podSelector: matchLabels: custom-anp: tenant-1 1 Specify a name for your ANP. 
2 The spec.priority field supports a maximum of 100 ANPs, with values of 0-99, in a cluster. The lower the value, the higher the precedence. Creating AdminNetworkPolicy objects with the same priority creates a nondeterministic outcome. 3 Specify the namespace to apply the ANP resource to. 4 An ANP has both ingress and egress rules. ANP rules for the spec.ingress field accept values of Pass , Deny , and Allow for the action field. 5 Specify a name for the ingress.name . 6 Specify the namespaces to select the pods from to apply the ANP resource. 7 Specify the podSelector.matchLabels labels of the pods to apply the ANP resource to. 8 An ANP has both ingress and egress rules. ANP rules for the spec.egress field accept values of Pass , Deny , and Allow for the action field. Additional resources Enabling features using feature gates Network Policy API Working Group 26.4.1.1. AdminNetworkPolicy actions for rules As an administrator, you can set Allow , Deny , or Pass as the action field for your AdminNetworkPolicy rules. Because OVN-Kubernetes uses tiered ACLs to evaluate network traffic rules, ANPs allow you to set very strong policy rules that can only be changed by an administrator modifying them, deleting the rule, or overriding them by setting a higher priority rule. AdminNetworkPolicy Allow example The following ANP, which is defined at priority 9, ensures that all ingress traffic is allowed from the monitoring namespace towards any tenant (all other namespaces) in the cluster. Example 26.2. Example YAML file for a strong Allow ANP apiVersion: policy.networking.k8s.io/v1alpha1 kind: AdminNetworkPolicy metadata: name: allow-monitoring spec: priority: 9 subject: namespaces: {} ingress: - name: "allow-ingress-from-monitoring" action: "Allow" from: - namespaces: namespaceSelector: matchLabels: kubernetes.io/metadata.name: monitoring # ... This is an example of a strong Allow ANP because it is non-overridable by all the parties involved. No tenants can block themselves from being monitored using NetworkPolicy objects, and the monitoring tenant also has no say in what it can or cannot monitor. AdminNetworkPolicy Deny example The following ANP, which is defined at priority 5, ensures that all ingress traffic from the monitoring namespace is blocked towards restricted tenants (namespaces that have the label security: restricted ). Example 26.3. Example YAML file for a strong Deny ANP apiVersion: policy.networking.k8s.io/v1alpha1 kind: AdminNetworkPolicy metadata: name: block-monitoring spec: priority: 5 subject: namespaces: matchLabels: security: restricted ingress: - name: "deny-ingress-from-monitoring" action: "Deny" from: - namespaces: namespaceSelector: matchLabels: kubernetes.io/metadata.name: monitoring # ... This is a strong Deny ANP that is non-overridable by all the parties involved. The restricted tenant owners cannot authorize themselves to allow monitoring traffic, and the infrastructure's monitoring service cannot scrape anything from these sensitive namespaces. When combined with the strong Allow example, the block-monitoring ANP has a lower priority value, giving it higher precedence, which ensures restricted tenants are never monitored. AdminNetworkPolicy Pass example The following ANP, which is defined at priority 7, ensures that all ingress traffic from the monitoring namespace towards internal infrastructure tenants (namespaces that have the label security: internal ) is passed on to tier 2 of the ACLs and evaluated by the namespaces' NetworkPolicy objects. Example 26.4. 
Example YAML file for a strong Pass ANP apiVersion: policy.networking.k8s.io/v1alpha1 kind: AdminNetworkPolicy metadata: name: pass-monitoring spec: priority: 7 subject: namespaces: matchLabels: security: internal ingress: - name: "pass-ingress-from-monitoring" action: "Pass" from: - namespaces: namespaceSelector: matchLabels: kubernetes.io/metadata.name: monitoring # ... This example is a strong Pass action ANP because it delegates the decision to NetworkPolicy objects defined by tenant owners. This pass-monitoring ANP allows all tenant owners grouped at security level internal to choose whether their metrics should be scraped by the infrastructure's monitoring service by using namespace-scoped NetworkPolicy objects. 26.4.2. BaselineAdminNetworkPolicy BaselineAdminNetworkPolicy (BANP) is a cluster-scoped custom resource definition (CRD). As an OpenShift Container Platform administrator, you can use BANP to set up and enforce optional baseline network policy rules that are overridable by users using NetworkPolicy objects if need be. Rule actions for BANP are allow or deny . The BaselineAdminNetworkPolicy resource is a cluster singleton object that can be used as a guardrail policy in case a passed traffic policy does not match any NetworkPolicy objects in the cluster. A BANP can also be used as a default security model that provides guardrails so that intra-cluster traffic is blocked by default and a user needs to use NetworkPolicy objects to allow known traffic. You must use default as the name when creating a BANP resource. A BANP allows administrators to specify: A subject that consists of a set of namespaces or a namespace. A list of ingress rules to be applied for all ingress traffic towards the subject . A list of egress rules to be applied for all egress traffic from the subject . Note BaselineAdminNetworkPolicy is a TechnologyPreviewNoUpgrade feature that can be enabled on test clusters that are not in production. BaselineAdminNetworkPolicy example Example 26.5. Example YAML file for BANP apiVersion: policy.networking.k8s.io/v1alpha1 kind: BaselineAdminNetworkPolicy metadata: name: default 1 spec: subject: namespaces: matchLabels: kubernetes.io/metadata.name: example.name 2 ingress: 3 - name: "deny-all-ingress-from-tenant-1" 4 action: "Deny" from: - pods: namespaces: namespaceSelector: matchLabels: custom-banp: tenant-1 5 podSelector: matchLabels: custom-banp: tenant-1 6 egress: - name: "allow-all-egress-to-tenant-1" action: "Allow" to: - pods: namespaces: namespaceSelector: matchLabels: custom-banp: tenant-1 podSelector: matchLabels: custom-banp: tenant-1 1 The policy name must be default because BANP is a singleton object. 2 Specify the namespace to apply the BANP to. 3 A BANP has both ingress and egress rules. BANP rules for the spec.ingress and spec.egress fields accept values of Deny and Allow for the action field. 4 Specify a name for the ingress.name . 5 Specify the namespaces to select the pods from to apply the BANP resource. 6 Specify the podSelector.matchLabels labels of the pods to apply the BANP resource to. BaselineAdminNetworkPolicy Deny example The following BANP singleton ensures that the administrator has set up a default deny policy for all ingress monitoring traffic coming into the tenants at the internal security level. When combined with the "AdminNetworkPolicy Pass example", this deny policy acts as a guardrail policy for all ingress traffic that is passed by the ANP pass-monitoring policy. Example 26.6. 
Example YAML file for a guardrail Deny rule apiVersion: policy.networking.k8s.io/v1alpha1 kind: BaselineAdminNetworkPolicy metadata: name: default spec: subject: namespaces: matchLabels: security: internal ingress: - name: "deny-ingress-from-monitoring" action: "Deny" from: - namespaces: namespaceSelector: matchLabels: kubernetes.io/metadata.name: monitoring # ... You can use an AdminNetworkPolicy resource with a Pass value for the action field in conjunction with the BaselineAdminNetworkPolicy resource to create a multi-tenant policy. This multi-tenant policy allows one tenant to collect monitoring data on their application while simultaneously not collecting data from a second tenant. As an administrator, if you apply both the "AdminNetworkPolicy Pass action example" and the "BaselineAdminNetwork Policy Deny example", tenants are then left with the ability to choose to create a NetworkPolicy resource that will be evaluated before the BANP. For example, Tenant 1 can set up the following NetworkPolicy resource to monitor ingress traffic: Example 26.7. Example NetworkPolicy apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-monitoring namespace: tenant 1 spec: podSelector: policyTypes: - Ingress ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: monitoring # ... In this scenario, Tenant 1's policy would be evaluated after the "AdminNetworkPolicy Pass action example" and before the "BaselineAdminNetwork Policy Deny example", which denies all ingress monitoring traffic coming into tenants with security level internal . With Tenant 1's NetworkPolicy object in place, they will be able to collect data on their application. Tenant 2, however, who does not have any NetworkPolicy objects in place, will not be able to collect data. As an administrator, you have not by default monitored internal tenants, but instead, you created a BANP that allows tenants to use NetworkPolicy objects to override the default behavior of your BANP. 26.5. Tracing Openflow with ovnkube-trace OVN and OVS traffic flows can be simulated in a single utility called ovnkube-trace . The ovnkube-trace utility runs ovn-trace , ovs-appctl ofproto/trace and ovn-detrace and correlates that information in a single output. You can execute the ovnkube-trace binary from a dedicated container. For releases after OpenShift Container Platform 4.7, you can also copy the binary to a local host and execute it from that host. 26.5.1. Installing the ovnkube-trace on local host The ovnkube-trace tool traces packet simulations for arbitrary UDP or TCP traffic between points in an OVN-Kubernetes driven OpenShift Container Platform cluster. Copy the ovnkube-trace binary to your local host making it available to run against the cluster. Prerequisites You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with cluster-admin privileges. Procedure Create a pod variable by using the following command: USD POD=USD(oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-control-plane -o name | head -1 | awk -F '/' '{print USDNF}') Run the following command on your local host to copy the binary from the ovnkube-control-plane pods: USD oc cp -n openshift-ovn-kubernetes USDPOD:/usr/bin/ovnkube-trace -c ovnkube-cluster-manager ovnkube-trace Note If you are using Red Hat Enterprise Linux (RHEL) 8 to run the ovnkube-trace tool, you must copy the file /usr/lib/rhel8/ovnkube-trace to your local host. 
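For example, on a RHEL 8 host the copy command from the preceding step would look similar to the following sketch, which reuses the same POD variable and container name and only substitutes the RHEL 8 path from the note:
USD oc cp -n openshift-ovn-kubernetes USDPOD:/usr/lib/rhel8/ovnkube-trace -c ovnkube-cluster-manager ovnkube-trace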
Make ovnkube-trace executable by running the following command: USD chmod +x ovnkube-trace Display the options available with ovnkube-trace by running the following command: USD ./ovnkube-trace -help Expected output Usage of ./ovnkube-trace: -addr-family string Address family (ip4 or ip6) to be used for tracing (default "ip4") -dst string dest: destination pod name -dst-ip string destination IP address (meant for tests to external targets) -dst-namespace string k8s namespace of dest pod (default "default") -dst-port string dst-port: destination port (default "80") -kubeconfig string absolute path to the kubeconfig file -loglevel string loglevel: klog level (default "0") -ovn-config-namespace string namespace used by ovn-config itself -service string service: destination service name -skip-detrace skip ovn-detrace command -src string src: source pod name -src-namespace string k8s namespace of source pod (default "default") -tcp use tcp transport protocol -udp use udp transport protocol The command-line arguments supported are familiar Kubernetes constructs, such as namespaces, pods, services so you do not need to find the MAC address, the IP address of the destination nodes, or the ICMP type. The log levels are: 0 (minimal output) 2 (more verbose output showing results of trace commands) 5 (debug output) 26.5.2. Running ovnkube-trace Run ovn-trace to simulate packet forwarding within an OVN logical network. Prerequisites You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with cluster-admin privileges. You have installed ovnkube-trace on local host Example: Testing that DNS resolution works from a deployed pod This example illustrates how to test the DNS resolution from a deployed pod to the core DNS pod that runs in the cluster. Procedure Start a web service in the default namespace by entering the following command: USD oc run web --namespace=default --image=quay.io/openshifttest/nginx --labels="app=web" --expose --port=80 List the pods running in the openshift-dns namespace: oc get pods -n openshift-dns Example output NAME READY STATUS RESTARTS AGE dns-default-8s42x 2/2 Running 0 5h8m dns-default-mdw6r 2/2 Running 0 4h58m dns-default-p8t5h 2/2 Running 0 4h58m dns-default-rl6nk 2/2 Running 0 5h8m dns-default-xbgqx 2/2 Running 0 5h8m dns-default-zv8f6 2/2 Running 0 4h58m node-resolver-62jjb 1/1 Running 0 5h8m node-resolver-8z4cj 1/1 Running 0 4h59m node-resolver-bq244 1/1 Running 0 5h8m node-resolver-hc58n 1/1 Running 0 4h59m node-resolver-lm6z4 1/1 Running 0 5h8m node-resolver-zfx5k 1/1 Running 0 5h Run the following ovnkube-trace command to verify DNS resolution is working: USD ./ovnkube-trace \ -src-namespace default \ 1 -src web \ 2 -dst-namespace openshift-dns \ 3 -dst dns-default-p8t5h \ 4 -udp -dst-port 53 \ 5 -loglevel 0 6 1 Namespace of the source pod 2 Source pod name 3 Namespace of destination pod 4 Destination pod name 5 Use the udp transport protocol. Port 53 is the port the DNS service uses. 
6 Set the log level to 0 (0 is minimal and 5 is debug). Example output if the source and destination pods land on the same node: ovn-trace source pod to destination pod indicates success from web to dns-default-p8t5h ovn-trace destination pod to source pod indicates success from dns-default-p8t5h to web ovs-appctl ofproto/trace source pod to destination pod indicates success from web to dns-default-p8t5h ovs-appctl ofproto/trace destination pod to source pod indicates success from dns-default-p8t5h to web ovn-detrace source pod to destination pod indicates success from web to dns-default-p8t5h ovn-detrace destination pod to source pod indicates success from dns-default-p8t5h to web Example output if the source and destination pods land on different nodes: ovn-trace source pod to destination pod indicates success from web to dns-default-8s42x ovn-trace (remote) source pod to destination pod indicates success from web to dns-default-8s42x ovn-trace destination pod to source pod indicates success from dns-default-8s42x to web ovn-trace (remote) destination pod to source pod indicates success from dns-default-8s42x to web ovs-appctl ofproto/trace source pod to destination pod indicates success from web to dns-default-8s42x ovs-appctl ofproto/trace destination pod to source pod indicates success from dns-default-8s42x to web ovn-detrace source pod to destination pod indicates success from web to dns-default-8s42x ovn-detrace destination pod to source pod indicates success from dns-default-8s42x to web The output indicates success from the deployed pod to the DNS port and also indicates that it is successful going back in the other direction. This confirms that bidirectional traffic is supported on UDP port 53, so the web pod can perform DNS resolution against core DNS. If, for example, that did not work and you wanted to get the ovn-trace , ovs-appctl ofproto/trace , and ovn-detrace output, and more debug-type information, increase the log level to 2 and run the command again as follows: USD ./ovnkube-trace \ -src-namespace default \ -src web \ -dst-namespace openshift-dns \ -dst dns-default-467qw \ -udp -dst-port 53 \ -loglevel 2 The output from this increased log level is too much to list here. In a failure situation, the output of this command shows which flow is dropping that traffic. For example, an egress or ingress network policy that does not allow that traffic might be configured on the cluster. Example: Verifying by using debug output a configured default deny This example illustrates how to identify by using the debug output that an ingress default deny policy blocks traffic. Procedure Create the following YAML that defines a deny-by-default policy to deny ingress from all pods in all namespaces. 
Save the YAML in the deny-by-default.yaml file: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default namespace: default spec: podSelector: {} ingress: [] Apply the policy by entering the following command: USD oc apply -f deny-by-default.yaml Example output networkpolicy.networking.k8s.io/deny-by-default created Start a web service in the default namespace by entering the following command: USD oc run web --namespace=default --image=quay.io/openshifttest/nginx --labels="app=web" --expose --port=80 Run the following command to create the prod namespace: USD oc create namespace prod Run the following command to label the prod namespace: USD oc label namespace/prod purpose=production Run the following command to deploy an alpine image in the prod namespace and start a shell: USD oc run test-6459 --namespace=prod --rm -i -t --image=alpine -- sh Open another terminal session. In this new terminal session, run ovnkube-trace to verify the failure in communication between the source pod test-6459 running in the prod namespace and the destination pod running in the default namespace: USD ./ovnkube-trace \ -src-namespace prod \ -src test-6459 \ -dst-namespace default \ -dst web \ -tcp -dst-port 80 \ -loglevel 0 Example output ovn-trace source pod to destination pod indicates failure from test-6459 to web Increase the log level to 2 to expose the reason for the failure by running the following command: USD ./ovnkube-trace \ -src-namespace prod \ -src test-6459 \ -dst-namespace default \ -dst web \ -tcp -dst-port 80 \ -loglevel 2 Example output ... ------------------------------------------------ 3. ls_out_acl_hint (northd.c:7454): !ct.new && ct.est && !ct.rpl && ct_mark.blocked == 0, priority 4, uuid 12efc456 reg0[8] = 1; reg0[10] = 1; ; 5. ls_out_acl_action (northd.c:7835): reg8[30..31] == 0, priority 500, uuid 69372c5d reg8[30..31] = 1; (4); 5. ls_out_acl_action (northd.c:7835): reg8[30..31] == 1, priority 500, uuid 2fa0af89 reg8[30..31] = 2; (4); 4. ls_out_acl_eval (northd.c:7691): reg8[30..31] == 2 && reg0[10] == 1 && (outport == @a16982411286042166782_ingressDefaultDeny), priority 2000, uuid 447d0dab reg8[17] = 1; ct_commit { ct_mark.blocked = 1; }; 1 ; ... 1 Ingress traffic is blocked due to the default deny policy being in place. Create a policy that allows traffic from all pods in a particular namespace with the label purpose=production . 
Save the YAML in the web-allow-prod.yaml file: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: web-allow-prod namespace: default spec: podSelector: matchLabels: app: web policyTypes: - Ingress ingress: - from: - namespaceSelector: matchLabels: purpose: production Apply the policy by entering the following command: USD oc apply -f web-allow-prod.yaml Run ovnkube-trace to verify that traffic is now allowed by entering the following command: USD ./ovnkube-trace \ -src-namespace prod \ -src test-6459 \ -dst-namespace default \ -dst web \ -tcp -dst-port 80 \ -loglevel 0 Expected output ovn-trace source pod to destination pod indicates success from test-6459 to web ovn-trace destination pod to source pod indicates success from web to test-6459 ovs-appctl ofproto/trace source pod to destination pod indicates success from test-6459 to web ovs-appctl ofproto/trace destination pod to source pod indicates success from web to test-6459 ovn-detrace source pod to destination pod indicates success from test-6459 to web ovn-detrace destination pod to source pod indicates success from web to test-6459 Run the following command in the shell that was opened in step six to connect to the nginx web server: USD wget -qO- --timeout=2 http://web.default Expected output <!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href="http://nginx.org/">nginx.org</a>.<br/> Commercial support is available at <a href="http://nginx.com/">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html> 26.5.3. Additional resources Tracing Openflow with ovnkube-trace utility ovnkube-trace 26.6. Migrating from the OpenShift SDN network plugin As a cluster administrator, you can migrate to the OVN-Kubernetes network plugin from the OpenShift software-defined networking (SDN) plugin. The following methods exist for migrating from the OpenShift SDN network plugin to the OVN-Kubernetes plugin: Limited live migration (preferred method) This is an automated process that migrates your cluster from OpenShift SDN to OVN-Kubernetes. Offline migration This is a manual process that includes some downtime. This method is primarily used for self-managed OpenShift Container Platform deployments. Consider using this method when you cannot perform a limited live migration to the OVN-Kubernetes network plugin. Warning For the limited live migration method only, do not automate the migration from OpenShift SDN to OVN-Kubernetes with a script or another tool such as Red Hat Ansible Automation Platform. This might cause outages or crash your OpenShift Container Platform cluster. Additional resources About the OVN-Kubernetes network plugin 26.6.1. Offline migration to the OVN-Kubernetes network plugin overview The offline migration method is a manual process that includes some downtime, during which your cluster is unreachable. This method is primarily used for self-managed OpenShift Container Platform deployments, and is an alternative to the limited live migration procedure. It should be used in the event that you cannot perform a limited live migration to the OVN-Kubernetes network plugin. 
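Before choosing a migration method, you can confirm that the cluster currently uses the OpenShift SDN network plugin by checking the cluster network configuration, for example:
USD oc get network.config/cluster -o jsonpath='{.status.networkType}{"\n"}'
A value of OpenShiftSDN indicates that the cluster has not yet been migrated.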
Although a rollback procedure is provided, the offline migration is intended to be a one-way process. Note OpenShift SDN CNI is deprecated as of OpenShift Container Platform 4.14. As of OpenShift Container Platform 4.15, the network plugin is not an option for new installations. In a subsequent future release, the OpenShift SDN network plugin is planned to be removed and no longer supported. Red Hat will provide bug fixes and support for this feature until it is removed, but this feature will no longer receive enhancements. As an alternative to OpenShift SDN CNI, you can use OVN Kubernetes CNI instead. The following sections provide more information about the offline migration method. 26.6.1.1. Supported platforms when using the offline migration method The following table provides information about the supported platforms for the offline migration type. Table 26.4. Supported platforms for the offline migration method Platform Offline Migration Bare metal hardware (IPI and UPI) [✓] Amazon Web Services (AWS) (IPI and UPI) [✓] Google Cloud Platform (GCP) (IPI and UPI) [✓] IBM Cloud(R) (IPI and UPI) [✓] Microsoft Azure (IPI and UPI) [✓] Red Hat OpenStack Platform (RHOSP) (IPI and UPI) [✓] VMware vSphere (IPI and UPI) [✓] AliCloud (IPI and UPI) [✓] Nutanix (IPI and UPI) [✓] 26.6.1.2. Considerations for offline migration to the OVN-Kubernetes network plugin If you have more than 150 nodes in your OpenShift Container Platform cluster, then open a support case for consultation on your migration to the OVN-Kubernetes network plugin. The subnets assigned to nodes and the IP addresses assigned to individual pods are not preserved during the migration. While the OVN-Kubernetes network plugin implements many of the capabilities present in the OpenShift SDN network plugin, the configuration is not the same. If your cluster uses any of the following OpenShift SDN network plugin capabilities, you must manually configure the same capability in the OVN-Kubernetes network plugin: Namespace isolation Egress router pods Before migrating to OVN-Kubernetes, ensure that the following IP address ranges are not in use: 100.64.0.0/16 , 169.254.169.0/29 , 100.88.0.0/16 , fd98::/64 , fd69::/125 , and fd97::/64 . OVN-Kubernetes uses these ranges internally. Do not include any of these ranges in any other CIDR definitions in your cluster or infrastructure. The following sections highlight the differences in configuration between the aforementioned capabilities in OVN-Kubernetes and OpenShift SDN network plugins. Primary network interface The OpenShift SDN plugin allows application of the NodeNetworkConfigurationPolicy (NNCP) custom resource (CR) to the primary interface on a node. The OVN-Kubernetes network plugin does not have this capability. If you have an NNCP applied to the primary interface, you must delete the NNCP before migrating to the OVN-Kubernetes network plugin. Deleting the NNCP does not remove the configuration from the primary interface, but with OVN-Kubernetes, the Kubernetes NMState cannot manage this configuration. Instead, the configure-ovs.sh shell script manages the primary interface and the configuration attached to this interface. Namespace isolation OVN-Kubernetes supports only the network policy isolation mode. Important For a cluster using OpenShift SDN that is configured in either the multitenant or subnet isolation mode, you can still migrate to the OVN-Kubernetes network plugin. 
Note that after the migration operation, multitenant isolation mode is dropped, so you must manually configure network policies to achieve the same level of project-level isolation for pods and services. Egress IP addresses OpenShift SDN supports two different Egress IP modes: In the automatically assigned approach, an egress IP address range is assigned to a node. In the manually assigned approach, a list of one or more egress IP addresses is assigned to a node. The migration process supports migrating Egress IP configurations that use the automatically assigned mode. The differences in configuring an egress IP address between OVN-Kubernetes and OpenShift SDN is described in the following table: Table 26.5. Differences in egress IP address configuration OVN-Kubernetes OpenShift SDN Create an EgressIPs object Add an annotation on a Node object Patch a NetNamespace object Patch a HostSubnet object For more information on using egress IP addresses in OVN-Kubernetes, see "Configuring an egress IP address". Egress network policies The difference in configuring an egress network policy, also known as an egress firewall, between OVN-Kubernetes and OpenShift SDN is described in the following table: Table 26.6. Differences in egress network policy configuration OVN-Kubernetes OpenShift SDN Create an EgressFirewall object in a namespace Create an EgressNetworkPolicy object in a namespace Note Because the name of an EgressFirewall object can only be set to default , after the migration all migrated EgressNetworkPolicy objects are named default , regardless of what the name was under OpenShift SDN. If you subsequently rollback to OpenShift SDN, all EgressNetworkPolicy objects are named default as the prior name is lost. For more information on using an egress firewall in OVN-Kubernetes, see "Configuring an egress firewall for a project". Egress router pods OVN-Kubernetes supports egress router pods in redirect mode. OVN-Kubernetes does not support egress router pods in HTTP proxy mode or DNS proxy mode. When you deploy an egress router with the Cluster Network Operator, you cannot specify a node selector to control which node is used to host the egress router pod. Multicast The difference between enabling multicast traffic on OVN-Kubernetes and OpenShift SDN is described in the following table: Table 26.7. Differences in multicast configuration OVN-Kubernetes OpenShift SDN Add an annotation on a Namespace object Add an annotation on a NetNamespace object For more information on using multicast in OVN-Kubernetes, see "Enabling multicast for a project". Network policies OVN-Kubernetes fully supports the Kubernetes NetworkPolicy API in the networking.k8s.io/v1 API group. No changes are necessary in your network policies when migrating from OpenShift SDN. Additional resources Understanding update channels and releases Asynchronous errata updates 26.6.1.3. How the offline migration process works The following table summarizes the migration process by segmenting between the user-initiated steps in the process and the actions that the migration performs in response. Table 26.8. Offline migration to OVN-Kubernetes from OpenShift SDN User-initiated steps Migration activity Set the migration field of the Network.operator.openshift.io custom resource (CR) named cluster to OVNKubernetes . Make sure the migration field is null before setting it to a value. Cluster Network Operator (CNO) Updates the status of the Network.config.openshift.io CR named cluster accordingly. 
Machine Config Operator (MCO) Rolls out an update to the systemd configuration necessary for OVN-Kubernetes; the MCO updates a single machine per pool at a time by default, causing the total time the migration takes to increase with the size of the cluster. Update the networkType field of the Network.config.openshift.io CR. CNO Performs the following actions: Destroys the OpenShift SDN control plane pods. Deploys the OVN-Kubernetes control plane pods. Updates the Multus daemon sets and config map objects to reflect the new network plugin. Reboot each node in the cluster. Cluster As nodes reboot, the cluster assigns IP addresses to pods on the OVN-Kubernetes cluster network. 26.6.1.4. Migrating to the OVN-Kubernetes network plugin by using the offline migration method As a cluster administrator, you can change the network plugin for your cluster to OVN-Kubernetes. During the migration, you must reboot every node in your cluster. Important While performing the migration, your cluster is unavailable and workloads might be interrupted. Perform the migration only when an interruption in service is acceptable. Prerequisites You have a cluster configured with the OpenShift SDN CNI network plugin in the network policy isolation mode. You installed the OpenShift CLI ( oc ). You have access to the cluster as a user with the cluster-admin role. You have a recent backup of the etcd database. You can manually reboot each node. You checked that your cluster is in a known good state without any errors. You created a security group rule that allows User Datagram Protocol (UDP) packets on port 6081 for all nodes on all cloud platforms. You set all timeouts for webhooks to 3 seconds or removed the webhooks. Procedure To backup the configuration for the cluster network, enter the following command: USD oc get Network.config.openshift.io cluster -o yaml > cluster-openshift-sdn.yaml Verify that the OVN_SDN_MIGRATION_TIMEOUT environment variable is set and is equal to 0s by running the following command: #!/bin/bash if [ -n "USDOVN_SDN_MIGRATION_TIMEOUT" ] && [ "USDOVN_SDN_MIGRATION_TIMEOUT" = "0s" ]; then unset OVN_SDN_MIGRATION_TIMEOUT fi #loops the timeout command of the script to repeatedly check the cluster Operators until all are available. co_timeout=USD{OVN_SDN_MIGRATION_TIMEOUT:-1200s} timeout "USDco_timeout" bash <<EOT until oc wait co --all --for='condition=AVAILABLE=True' --timeout=10s && \ oc wait co --all --for='condition=PROGRESSING=False' --timeout=10s && \ oc wait co --all --for='condition=DEGRADED=False' --timeout=10s; do sleep 10 echo "Some ClusterOperators Degraded=False,Progressing=True,or Available=False"; done EOT Remove the configuration from the Cluster Network Operator (CNO) configuration object by running the following command: USD oc patch Network.operator.openshift.io cluster --type='merge' \ --patch '{"spec":{"migration":null}}' . Delete the NodeNetworkConfigurationPolicy (NNCP) custom resource (CR) that defines the primary network interface for the OpenShift SDN network plugin by completing the following steps: Check that the existing NNCP CR bonded the primary interface to your cluster by entering the following command: USD oc get nncp Example output NAME STATUS REASON bondmaster0 Available SuccessfullyConfigured Network Manager stores the connection profile for the bonded primary interface in the /etc/NetworkManager/system-connections system path. 
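If you want to inspect the stored profiles, you can list them from a debug shell on a node, for example (a sketch; replace <node_name> with the name of one of your nodes):
USD oc debug node/<node_name> -- chroot /host ls /etc/NetworkManager/system-connections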
Remove the NNCP from your cluster: USD oc delete nncp <nncp_manifest_filename> To prepare all the nodes for the migration, set the migration field on the CNO configuration object by running the following command: USD oc patch Network.operator.openshift.io cluster --type='merge' \ --patch '{ "spec": { "migration": { "networkType": "OVNKubernetes" } } }' Note This step does not deploy OVN-Kubernetes immediately. Instead, specifying the migration field triggers the Machine Config Operator (MCO) to apply new machine configs to all the nodes in the cluster in preparation for the OVN-Kubernetes deployment. Check that the reboot is finished by running the following command: USD oc get mcp Check that all cluster Operators are available by running the following command: USD oc get co Alternatively: You can disable automatic migration of several OpenShift SDN capabilities to the OVN-Kubernetes equivalents: Egress IPs Egress firewall Multicast To disable automatic migration of the configuration for any of the previously noted OpenShift SDN features, specify the following keys: USD oc patch Network.operator.openshift.io cluster --type='merge' \ --patch '{ "spec": { "migration": { "networkType": "OVNKubernetes", "features": { "egressIP": <bool>, "egressFirewall": <bool>, "multicast": <bool> } } } }' where: bool : Specifies whether to enable migration of the feature. The default is true . Optional: You can customize the following settings for OVN-Kubernetes to meet your network infrastructure requirements: Maximum transmission unit (MTU). Consider the following before customizing the MTU for this optional step: If you use the default MTU, and you want to keep the default MTU during migration, this step can be ignored. If you used a custom MTU, and you want to keep the custom MTU during migration, you must declare the custom MTU value in this step. This step does not work if you want to change the MTU value during migration. Instead, you must first follow the instructions for "Changing the cluster MTU". You can then keep the custom MTU value by performing this procedure and declaring the custom MTU value in this step. Note OpenShift-SDN and OVN-Kubernetes have different overlay overhead. MTU values should be selected by following the guidelines found on the "MTU value selection" page. Geneve (Generic Network Virtualization Encapsulation) overlay network port OVN-Kubernetes IPv4 internal subnet To customize either of the previously noted settings, enter and customize the following command. If you do not need to change the default value, omit the key from the patch. USD oc patch Network.operator.openshift.io cluster --type=merge \ --patch '{ "spec":{ "defaultNetwork":{ "ovnKubernetesConfig":{ "mtu":<mtu>, "genevePort":<port>, "v4InternalSubnet":"<ipv4_subnet>" }}}}' where: mtu The MTU for the Geneve overlay network. This value is normally configured automatically, but if the nodes in your cluster do not all use the same MTU, then you must set this explicitly to 100 less than the smallest node MTU value. port The UDP port for the Geneve overlay network. If a value is not specified, the default is 6081 . The port cannot be the same as the VXLAN port that is used by OpenShift SDN. The default value for the VXLAN port is 4789 . ipv4_subnet An IPv4 address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. 
The IP address range must be larger than the maximum number of nodes that can be added to the cluster. The default value is 100.64.0.0/16 . Example patch command to update mtu field USD oc patch Network.operator.openshift.io cluster --type=merge \ --patch '{ "spec":{ "defaultNetwork":{ "ovnKubernetesConfig":{ "mtu":1200 }}}}' As the MCO updates machines in each machine config pool, it reboots each node one by one. You must wait until all the nodes are updated. Check the machine config pool status by entering the following command: USD oc get mcp A successfully updated node has the following status: UPDATED=true , UPDATING=false , DEGRADED=false . Note By default, the MCO updates one machine per pool at a time, causing the total time the migration takes to increase with the size of the cluster. Confirm the status of the new machine configuration on the hosts: To list the machine configuration state and the name of the applied machine configuration, enter the following command: USD oc describe node | egrep "hostname|machineconfig" Example output kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done Verify that the following statements are true: The value of machineconfiguration.openshift.io/state field is Done . The value of the machineconfiguration.openshift.io/currentConfig field is equal to the value of the machineconfiguration.openshift.io/desiredConfig field. To confirm that the machine config is correct, enter the following command: USD oc get machineconfig <config_name> -o yaml | grep ExecStart where <config_name> is the name of the machine config from the machineconfiguration.openshift.io/currentConfig field. The machine config must include the following update to the systemd configuration: ExecStart=/usr/local/bin/configure-ovs.sh OVNKubernetes If a node is stuck in the NotReady state, investigate the machine config daemon pod logs and resolve any errors. To list the pods, enter the following command: USD oc get pod -n openshift-machine-config-operator Example output NAME READY STATUS RESTARTS AGE machine-config-controller-75f756f89d-sjp8b 1/1 Running 0 37m machine-config-daemon-5cf4b 2/2 Running 0 43h machine-config-daemon-7wzcd 2/2 Running 0 43h machine-config-daemon-fc946 2/2 Running 0 43h machine-config-daemon-g2v28 2/2 Running 0 43h machine-config-daemon-gcl4f 2/2 Running 0 43h machine-config-daemon-l5tnv 2/2 Running 0 43h machine-config-operator-79d9c55d5-hth92 1/1 Running 0 37m machine-config-server-bsc8h 1/1 Running 0 43h machine-config-server-hklrm 1/1 Running 0 43h machine-config-server-k9rtx 1/1 Running 0 43h The names for the config daemon pods are in the following format: machine-config-daemon-<seq> . The <seq> value is a random five character alphanumeric sequence. Display the pod log for the first machine config daemon pod shown in the output by enter the following command: USD oc logs <pod> -n openshift-machine-config-operator where pod is the name of a machine config daemon pod. Resolve any errors in the logs shown by the output from the command. 
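For example, to display the log for the first machine config daemon pod in the preceding sample output, the command would look like the following:
USD oc logs machine-config-daemon-5cf4b -n openshift-machine-config-operator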
To start the migration, configure the OVN-Kubernetes network plugin by using one of the following commands: To specify the network provider without changing the cluster network IP address block, enter the following command: USD oc patch Network.config.openshift.io cluster \ --type='merge' --patch '{ "spec": { "networkType": "OVNKubernetes" } }' To specify a different cluster network IP address block, enter the following command: USD oc patch Network.config.openshift.io cluster \ --type='merge' --patch '{ "spec": { "clusterNetwork": [ { "cidr": "<cidr>", "hostPrefix": <prefix> } ], "networkType": "OVNKubernetes" } }' where cidr is a CIDR block and prefix is the slice of the CIDR block apportioned to each node in your cluster. You cannot use any CIDR block that overlaps with the 100.64.0.0/16 CIDR block because the OVN-Kubernetes network provider uses this block internally. Important You cannot change the service network address block during the migration. Verify that the Multus daemon set rollout is complete before continuing with subsequent steps: USD oc -n openshift-multus rollout status daemonset/multus The name of the Multus pods is in the form of multus-<xxxxx> where <xxxxx> is a random sequence of letters. It might take several moments for the pods to restart. Example output Waiting for daemon set "multus" rollout to finish: 1 out of 6 new pods have been updated... ... Waiting for daemon set "multus" rollout to finish: 5 of 6 updated pods are available... daemon set "multus" successfully rolled out To complete changing the network plugin, reboot each node in your cluster. You can reboot the nodes in your cluster with either of the following approaches: Important The following scripts reboot all of the nodes in the cluster at the same time. This can cause your cluster to be unstable. Another option is to reboot your nodes manually one at a time. Rebooting nodes one-by-one causes considerable downtime in a cluster with many nodes. Cluster Operators will not work correctly before you reboot the nodes. With the oc rsh command, you can use a bash script similar to the following: #!/bin/bash readarray -t POD_NODES <<< "USD(oc get pod -n openshift-machine-config-operator -o wide| grep daemon|awk '{print USD1" "USD7}')" for i in "USD{POD_NODES[@]}" do read -r POD NODE <<< "USDi" until oc rsh -n openshift-machine-config-operator "USDPOD" chroot /rootfs shutdown -r +1 do echo "cannot reboot node USDNODE, retry" && sleep 3 done done With the ssh command, you can use a bash script similar to the following. The script assumes that you have configured sudo to not prompt for a password. #!/bin/bash for ip in USD(oc get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}') do echo "reboot node USDip" ssh -o StrictHostKeyChecking=no core@USDip sudo shutdown -r -t 3 done Confirm that the migration succeeded: To confirm that the network plugin is OVN-Kubernetes, enter the following command. The value of status.networkType must be OVNKubernetes . USD oc get network.config/cluster -o jsonpath='{.status.networkType}{"\n"}' To confirm that the cluster nodes are in the Ready state, enter the following command: USD oc get nodes To confirm that your pods are not in an error state, enter the following command: USD oc get pods --all-namespaces -o wide --sort-by='{.spec.nodeName}' If pods on a node are in an error state, reboot that node. 
To confirm that all of the cluster Operators are not in an abnormal state, enter the following command: USD oc get co The status of every cluster Operator must be the following: AVAILABLE="True" , PROGRESSING="False" , DEGRADED="False" . If a cluster Operator is not available or degraded, check the logs for the cluster Operator for more information. Complete the following steps only if the migration succeeds and your cluster is in a good state: To remove the migration configuration from the CNO configuration object, enter the following command: USD oc patch Network.operator.openshift.io cluster --type='merge' \ --patch '{ "spec": { "migration": null } }' To remove custom configuration for the OpenShift SDN network provider, enter the following command: USD oc patch Network.operator.openshift.io cluster --type='merge' \ --patch '{ "spec": { "defaultNetwork": { "openshiftSDNConfig": null } } }' To remove the OpenShift SDN network provider namespace, enter the following command: USD oc delete namespace openshift-sdn steps Optional: After cluster migration, you can convert your IPv4 single-stack cluster to a dual-network cluster network that supports IPv4 and IPv6 address families. For more information, see "Converting to IPv4/IPv6 dual-stack networking". 26.6.2. Limited live migration to the OVN-Kubernetes network plugin overview The limited live migration method is the process in which the OpenShift SDN network plugin and its network configurations, connections, and associated resources, are migrated to the OVN-Kubernetes network plugin without service interruption. For OpenShift Container Platform 4.15, it is available for versions 4.15.31 and later. It is the preferred method for migrating from OpenShift SDN to OVN-Kubernetes. In the event that you cannot perform a limited live migration, you can use the offline migration method. Important Before you migrate your OpenShift Container Platform cluster to use the OVN-Kubernetes network plugin, update your cluster to the latest z-stream release so that all the latest bug fixes apply to your cluster. It is not available for hosted control plane deployment types. This migration method is valuable for deployment types that require constant service availability and offers the following benefits: Continuous service availability Minimized downtime Automatic node rebooting Seamless transition from the OpenShift SDN network plugin to the OVN-Kubernetes network plugin Although a rollback procedure is provided, the limited live migration is intended to be a one-way process. Note OpenShift SDN CNI is deprecated as of OpenShift Container Platform 4.14. As of OpenShift Container Platform 4.15, the network plugin is not an option for new installations. In a subsequent future release, the OpenShift SDN network plugin is planned to be removed and no longer supported. Red Hat will provide bug fixes and support for this feature until it is removed, but this feature will no longer receive enhancements. As an alternative to OpenShift SDN CNI, you can use OVN Kubernetes CNI instead. The following sections provide more information about the limited live migration method. 26.6.2.1. Supported platforms when using the limited live migration method The following table provides information about the supported platforms for the limited live migration type. Table 26.9. 
Supported platforms for the limited live migration method Platform Limited Live Migration Bare-metal hardware [✓] Amazon Web Services (AWS) [✓] Google Cloud Platform (GCP) [✓] IBM Cloud(R) [✓] Microsoft Azure [✓] Red Hat OpenStack Platform (RHOSP) [✓] VMware vSphere [✓] Nutanix [✓] Note Each listed platform supports installing an OpenShift Container Platform cluster on installer-provisioned infrastructure and user-provisioned infrastructure. 26.6.2.2. Best practices for limited live migration to the OVN-Kubernetes network plugin For a list of best practices when migrating to the OVN-Kubernetes network plugin with the limited live migration method, see Limited Live Migration from OpenShift SDN to OVN-Kubernetes . 26.6.2.3. Considerations for limited live migration to the OVN-Kubernetes network plugin Before using the limited live migration method to migrate to the OVN-Kubernetes network plugin, cluster administrators should consider the following information: The limited live migration procedure is unsupported for clusters with OpenShift SDN multitenant mode enabled. Egress router pods block the limited live migration process. They must be removed before beginning the limited live migration process. During the migration, when the cluster is running with both OVN-Kubernetes and OpenShift SDN, multicast and egress IP addresses are temporarily disabled for both CNIs. Egress firewalls remain functional. The migration is intended to be a one-way process. However, users who want to roll back to OpenShift SDN can do so only after the migration from OpenShift SDN to OVN-Kubernetes has succeeded. Those users can follow the same procedure below to migrate to the OpenShift SDN network plugin from the OVN-Kubernetes network plugin. The limited live migration is not supported on HyperShift clusters. OpenShift SDN does not support IPsec. After the migration, cluster administrators can enable IPsec. OpenShift SDN does not support IPv6. After the migration, cluster administrators can enable dual-stack. The OpenShift SDN plugin allows application of the NodeNetworkConfigurationPolicy (NNCP) custom resource (CR) to the primary interface on a node. The OVN-Kubernetes network plugin does not have this capability. The cluster MTU is the MTU value for pod interfaces. It is always less than your hardware MTU to account for the cluster network overlay overhead. The overhead is 100 bytes for OVN-Kubernetes and 50 bytes for OpenShift SDN. During the limited live migration, both OVN-Kubernetes and OpenShift SDN run in parallel. OVN-Kubernetes manages the cluster network of some nodes, while OpenShift SDN manages the cluster network of others. To ensure that cross-CNI traffic remains functional, the Cluster Network Operator updates the routable MTU to ensure that both CNIs share the same overlay MTU. As a result, after the migration has completed, the cluster MTU is 50 bytes less. Some parameters of OVN-Kubernetes cannot be changed after installation. The following parameters can be set only before starting the limited live migration: internalTransitSwitchSubnet internalJoinSubnet OVN-Kubernetes reserves the 100.64.0.0/16 and 100.88.0.0/16 IP address ranges. These subnets cannot overlap with any other internal or external network. If these IP addresses have been used by OpenShift SDN or any external networks that might communicate with this cluster, you must patch them to use a different IP address range before starting the limited live migration. See "Patching OVN-Kubernetes address ranges" for more information.
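One quick way to look for such an overlap is to print the address ranges the cluster already knows about and compare them with the two reserved ranges. The following is a rough sketch, not an exhaustive check; it only covers the cluster network, the service network, and the node addresses, so any external networks that reach the cluster still need to be reviewed separately.
# Sketch: print the ranges currently in use so they can be compared
# against the reserved 100.64.0.0/16 and 100.88.0.0/16 ranges.
echo "cluster network CIDRs:"
oc get network.config cluster -o jsonpath='{.spec.clusterNetwork[*].cidr}{"\n"}'
echo "service network CIDRs:"
oc get network.config cluster -o jsonpath='{.spec.serviceNetwork[*]}{"\n"}'
echo "node internal IP addresses:"
oc get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}{"\n"}'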
In most cases, the limited live migration is independent of the secondary interfaces of pods created by the Multus CNI plugin. However, if these secondary interfaces were set up on the default network interface controller (NIC) of the host, for example, using MACVLAN, IPVLAN, SR-IOV, or bridge interfaces with the default NIC as the control node, OVN-Kubernetes might encounter malfunctions. Users should remove such configurations before proceeding with the limited live migration. When there are multiple NICs inside of the host, and the default route is not on the interface that has the Kubernetes NodeIP, you must use the offline migration instead. All DaemonSet objects in the openshift-sdn namespace, which are not managed by the Cluster Network Operator (CNO), must be removed before initiating the limited live migration. These unmanaged daemon sets can cause the migration status to remain incomplete if not properly handled. If you run an Operator or you have configured any application with the pod disruption budget, you might experience an interruption during the update process. If minAvailable is set to 1 in PodDisruptionBudget , the nodes are drained to apply pending machine configs which might block the eviction process. If several nodes are rebooted, all the pods might run on only one node, and the PodDisruptionBudget field can prevent the node drain. 26.6.2.4. How the limited live migration process works The following table summarizes the limited live migration process by segmenting between the user-initiated steps in the process and the actions that the migration script performs in response. Table 26.10. Limited live migration to OVNKubernetes from OpenShiftSDN User-initiated steps Migration activity Patch the cluster-level networking configuration by changing the networkType from OpenShiftSDN to OVNKubernetes . Cluster Network Operator (CNO) Sets migration related fields in the network.operator custom resource (CR) and waits for routable MTUs to be applied to all nodes. Patches the network.operator CR to set the migration mode to Live for OVN-Kubernetes and deploys the OpenShift SDN network plugin in migration mode. Deploys OVN-Kubernetes with hybrid overlay enabled, ensuring that no racing conditions occur. Waits for the OVN-Kubernetes deployment and updates the conditions in the status of the network.config CR. Triggers the Machine Config Operator (MCO) to apply the new machine config to each machine config pool, which includes node cordoning, draining, and rebooting. OVN-Kubernetes adds nodes to the appropriate zones and recreates pods using OVN-Kubernetes as the default CNI plugin. Removes migration-related fields from the network.operator CR and performs cleanup actions, such as deleting OpenShift SDN resources and redeploying OVN-Kubernetes in normal mode with the necessary configurations. Waits for the OVN-Kubernetes redeployment and updates the status conditions in the network.config CR to indicate migration completion. If your migration is blocked, see "Checking limited live migration metrics" for information on troubleshooting the issue. 26.6.2.5. Migrating to the OVN-Kubernetes network plugin by using the limited live migration method Migrating to the OVN-Kubernetes network plugin by using the limited live migration method is a multiple step process that requires users to check the behavior of egress IP resources, egress firewall resources, and multicast enabled namespaces. 
Administrators must also review any network policies in their deployment and remove egress router resources before initiating the limited live migration process. The following procedures should be used in succession. 26.6.2.5.1. Checking cluster resources before initiating the limited live migration Before migrating to OVN-Kubernetes by using the limited live migration, you should check for egress IP resources, egress firewall resources, and multicast-enabled namespaces on your OpenShift SDN deployment. You should also review any network policies in your deployment. If you find that your cluster has these resources before migration, you should check their behavior after migration to ensure that they are working as intended. The following procedure shows you how to check for egress IP resources, egress firewall resources, multicast-enabled namespaces, network policies, and an NNCP. No action is necessary after checking for these resources. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure As an OpenShift Container Platform cluster administrator, check for egress firewall resources. You can do this by using the oc CLI, or by using the OpenShift Container Platform web console. To check for egress firewall resources by using the oc CLI tool: To check for egress firewall resources, enter the following command: USD oc get egressnetworkpolicies.network.openshift.io -A Example output NAMESPACE NAME AGE <namespace> <example_egressfirewall> 5d You can check the intended behavior of an egress firewall resource by using the -o yaml flag. For example: USD oc get egressnetworkpolicy <example_egressfirewall> -n <namespace> -o yaml Example output apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: <example_egress_policy> namespace: <namespace> spec: egress: - type: Allow to: cidrSelector: 0.0.0.0/0 - type: Deny to: cidrSelector: 10.0.0.0/8 To check for egress firewall resources by using the OpenShift Container Platform web console: On the OpenShift Container Platform web console, click Observe Metrics . In the Expression box, type sdn_controller_num_egress_firewalls and click Run queries . If you have egress firewall resources, they are returned in the Expression box. Check your cluster for egress IP resources. You can do this by using the oc CLI, or by using the OpenShift Container Platform web console. To check for egress IPs by using the oc CLI tool: To list namespaces with egress IP resources, enter the following command: USD oc get netnamespace -A | awk 'USD3 != ""' Example output NAME NETID EGRESS IPS namespace1 14173093 ["10.0.158.173"] namespace2 14173020 ["10.0.158.173"] To check for egress IPs by using the OpenShift Container Platform web console: On the OpenShift Container Platform web console, click Observe Metrics . In the Expression box, type sdn_controller_num_egress_ips and click Run queries . If you have egress IP resources, they are returned in the Expression box. Check your cluster for multicast enabled namespaces. You can do this by using the oc CLI, or by using the OpenShift Container Platform web console.
To check for multicast enabled namespaces by using the oc CLI tool: To locate namespaces with multicast enabled, enter the following command: USD oc get netnamespace -o json | jq -r '.items[] | select(.metadata.annotations."netnamespace.network.openshift.io/multicast-enabled" == "true") | .metadata.name' Example output namespace1 namespace3 To check for multicast enabled namespaces by using the OpenShift Container Platform web console: On the OpenShift Container Platform web console, click Observe Metrics . In the Expression box, type sdn_controller_num_multicast_enabled_namespaces and click Run queries . If you have multicast enabled namespaces, they are returned in the Expression box. Check your cluster for any network policies. You can do this by using the oc CLI. To check for network policies by using the oc CLI tool, enter the following command: USD oc get networkpolicy -n <namespace> Example output NAME POD-SELECTOR AGE allow-multicast app=my-app 11m 26.6.2.5.2. Removing egress router pods before initiating the limited live migration Before initiating the limited live migration, you must check for, and remove, any egress router pods. If there is an egress router pod on the cluster when performing a limited live migration, the Network Operator blocks the migration and returns the following error: The cluster configuration is invalid (network type limited live migration is not supported for pods with `pod.network.openshift.io/assign-macvlan` annotation. Please remove all egress router pods). Use `oc edit network.config.openshift.io cluster` to fix. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure To locate egress router pods on your cluster, enter the following command: USD oc get pods --all-namespaces -o json | jq '.items[] | select(.metadata.annotations."pod.network.openshift.io/assign-macvlan" == "true") | {name: .metadata.name, namespace: .metadata.namespace}' Example output { "name": "egress-multi", "namespace": "egress-router-project" } Alternatively, you can query metrics on the OpenShift Container Platform web console. On the OpenShift Container Platform web console, click Observe Metrics . In the Expression box, enter network_attachment_definition_instances{networks="egress-router"} . Then, click Add . To remove an egress router pod, enter the following command: USD oc delete pod <egress_pod_name> -n <egress_router_project> 26.6.2.5.3. Initiating the limited live migration process After you have checked the behavior of egress IP resources, egress firewall resources, and multicast enabled namespaces, and removed any egress router resources, you can initiate the limited live migration process. Prerequisites A cluster has been configured with the OpenShift SDN CNI network plugin in the network policy isolation mode. You have installed the OpenShift CLI ( oc ). You have access to the cluster as a user with the cluster-admin role. You have created a recent backup of the etcd database. The cluster is in a known good state without any errors. Before migration to OVN-Kubernetes, a security group rule must be in place to allow UDP packets on port 6081 for all nodes on all cloud platforms. If the 100.64.0.0/16 and 100.88.0.0/16 address ranges were previously in use by OpenShift-SDN, you have patched them. The first step of this procedure checks whether these address ranges are in use. If they are in use, see "Patching OVN-Kubernetes address ranges". 
You have checked for egress IP resources, egress firewall resources, and multicast enabled namespaces. You have removed any egress router pods before beginning the limited live migration. For more information about egress router pods, see "Deploying an egress router pod in redirect mode". You have reviewed the "Considerations for limited live migration to the OVN-Kubernetes network plugin" section of this document. Procedure To patch the cluster-level networking configuration and initiate the migration from OpenShift SDN to OVN-Kubernetes, enter the following command: USD oc patch Network.config.openshift.io cluster --type='merge' --patch '{"metadata":{"annotations":{"network.openshift.io/network-type-migration":""}},"spec":{"networkType":"OVNKubernetes"}}' After running this command, the migration process begins. During this process, the Machine Config Operator reboots the nodes in your cluster twice. The migration takes approximately twice as long as a cluster upgrade. Important This oc patch command checks for overlapping CIDRs in use by OpenShift SDN. If overlapping CIDRs are detected, you must patch them before the limited live migration process can start. For more information, see "Patching OVN-Kubernetes address ranges". Optional: To ensure that the migration process has completed, and to check the status of the network.config , you can enter the following commands: USD oc get network.config.openshift.io cluster -o jsonpath='{.status.networkType}' USD oc get network.config cluster -o=jsonpath='{.status.conditions}' | jq . You can check limited live migration metrics for troubleshooting issues. For more information, see "Checking limited live migration metrics". 26.6.2.5.4. Patching OVN-Kubernetes address ranges OVN-Kubernetes reserves the following IP address ranges: 100.64.0.0/16 . This IP address range is used for the internalJoinSubnet parameter of OVN-Kubernetes by default. 100.88.0.0/16 . This IP address range is used for the internalTransitSwitchSubnet parameter of OVN-Kubernetes by default. If these IP addresses have been used by OpenShift SDN or any external networks that might communicate with this cluster, you must patch them to use a different IP address range before initiating the limited live migration. The following procedure can be used to patch CIDR ranges that are in use by OpenShift SDN if the migration was initially blocked. Note This is an optional procedure and must only be used if the migration was blocked after using the oc patch Network.config.openshift.io cluster --type='merge' --patch '{"metadata":{"annotations":{"network.openshift.io/network-type-migration":""}},"spec":{"networkType":"OVNKubernetes"}}' command in "Initiating the limited live migration process". Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure If the 100.64.0.0/16 IP address range is already in use, enter the following command to patch it to a different range. The following example uses 100.63.0.0/16 . USD oc patch network.operator.openshift.io cluster --type='merge' -p='{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"ipv4":{"internalJoinSubnet": "100.63.0.0/16"}}}}}' If the 100.88.0.0/16 IP address range is already in use, enter the following command to patch it to a different range. The following example uses 100.99.0.0/16 .
USD oc patch network.operator.openshift.io cluster --type='merge' -p='{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"ipv4":{"internalTransitSwitchSubnet": "100.99.0.0/16"}}}}}' After patching the 100.64.0.0/16 and 100.88.0.0/16 IP address ranges, you can initiate the limited live migration. 26.6.2.5.5. Checking cluster resources after initiating the limited live migration The following procedure shows you how to check for egress IP resources, egress firewall resources, multicast enabled namespaces, and network policies when your deployment is using OVN-Kubernetes. If you had these resources on OpenShift SDN, you should check them after migration to ensure that they are working properly. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have successfully migrated from OpenShift SDN to OVN-Kubernetes by using the limited live migration. Procedure As an OpenShift Container Platform cluster administrator, check for egress firewall resources. You can do this by using the oc CLI, or by using the OpenShift Container Platform web console. To check for egress firewall resources by using the oc CLI tool: To check for egress firewall resources, enter the following command: USD oc get egressfirewalls.k8s.ovn.org -A Example output NAMESPACE NAME AGE <namespace> <example_egressfirewall> 5d You can check the intended behavior of an egress firewall resource by using the -o yaml flag. For example: USD oc get egressfirewall <example_egressfirewall> -n <namespace> -o yaml Example output apiVersion: k8s.ovn.org/v1 kind: EgressFirewall metadata: name: <example_egress_policy> namespace: <namespace> spec: egress: - type: Allow to: cidrSelector: 192.168.0.0/16 - type: Deny to: cidrSelector: 0.0.0.0/0 Ensure that the behavior of this resource is intended because it could have changed after migration. For more information about egress firewalls, see "Configuring an egress firewall for a project". To check for egress firewall resources by using the OpenShift Container Platform web console: On the OpenShift Container Platform web console, click Observe Metrics . In the Expression box, type ovnkube_controller_num_egress_firewall_rules and click Run queries . If you have egress firewall resources, they are returned in the Expression box. Check your cluster for egress IP resources. You can do this by using the oc CLI, or by using the OpenShift Container Platform web console. To check for egress IPs by using the oc CLI tool: To list egress IP resources, enter the following command: USD oc get egressip Example output NAME EGRESSIPS ASSIGNED NODE ASSIGNED EGRESSIPS egress-sample 192.0.2.10 ip-10-0-42-79.us-east-2.compute.internal 192.0.2.10 egressip-sample-2 192.0.2.14 ip-10-0-42-79.us-east-2.compute.internal 192.0.2.14 To provide detailed information about an egress IP, enter the following command: USD oc get egressip <egressip_name> -o yaml Example output apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"k8s.ovn.org/v1","kind":"EgressIP","metadata":{"annotations":{},"name":"egressip-sample"},"spec":{"egressIPs":["192.0.2.12","192.0.2.13"],"namespaceSelector":{"matchLabels":{"name":"my-namespace"}}}} creationTimestamp: "2024-06-27T15:48:36Z" generation: 7 name: egressip-sample resourceVersion: "125511578" uid: b65833c8-781f-4cc9-bc96-d970259a7631 spec: egressIPs: - 192.0.2.12 - 192.0.2.13 namespaceSelector: matchLabels: name: my-namespace Repeat this for all egress IPs.
Ensure that the behavior of each resource is intended because it could have changed after migration. For more information about EgressIPs, see "Configuring an EgressIP address". To check for egress IPs by using the OpenShift Container Platform web console: On the OpenShift Container Platform web console, click Observe Metrics . In the Expression box, type ovnkube_clustermanager_num_egress_ips and click Run queries . If you have egress IP resources, they are returned in the Expression box. Check your cluster for multicast enabled namespaces. You can only do this by using the oc CLI. To locate namespaces with multicast enabled, enter the following command: USD oc get namespace -o json | jq -r '.items[] | select(.metadata.annotations."k8s.ovn.org/multicast-enabled" == "true") | .metadata.name' Example output namespace1 namespace3 To describe each multicast enabled namespace, enter the following command: USD oc describe namespace <namespace> Example output Name: my-namespace Labels: kubernetes.io/metadata.name=my-namespace pod-security.kubernetes.io/audit=restricted pod-security.kubernetes.io/audit-version=v1.24 pod-security.kubernetes.io/warn=restricted pod-security.kubernetes.io/warn-version=v1.24 Annotations: k8s.ovn.org/multicast-enabled: true openshift.io/sa.scc.mcs: s0:c25,c0 openshift.io/sa.scc.supplemental-groups: 1000600000/10000 openshift.io/sa.scc.uid-range: 1000600000/10000 Status: Active Ensure that multicast functionality is correctly configured and working as expected in each namespace. For more information, see "Enabling multicast for a project". Check your cluster's network policies. You can only do this by using the oc CLI. To obtain information about network policies within a namespace, enter the following command: USD oc get networkpolicy -n <namespace> Example output NAME POD-SELECTOR AGE allow-multicast app=my-app 11m To provide detailed information about the network policy, enter the following command: USD oc describe networkpolicy allow-multicast -n <namespace> Example output Name: allow-multicast Namespace: my-namespace Created on: 2024-07-24 14:55:03 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: app=my-app Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: IPBlock: CIDR: 224.0.0.0/4 Except: Allowing egress traffic: To Port: <any> (traffic allowed to all ports) To: IPBlock: CIDR: 224.0.0.0/4 Except: Policy Types: Ingress, Egress Ensure that the behavior of the network policy is as intended. Optimization for network policies differs between SDN and OVN-Kubernetes, so users might need to adjust their policies to achieve optimal performance for different CNIs. For more information, see "About network policy". 26.6.2.6. Checking limited live migration metrics Metrics are available to monitor the progress of the limited live migration. Metrics can be viewed on the OpenShift Container Platform web console, or by using the oc CLI. Prerequisites You have initiated a limited live migration to OVN-Kubernetes. Procedure To view limited live migration metrics on the OpenShift Container Platform web console: Click Observe Metrics . In the Expression box, type openshift_network and click the openshift_network_operator_live_migration_condition option. To view metrics by using the oc CLI: Enter the following command to generate a token for the prometheus-k8s service account in the openshift-monitoring namespace: USD oc create token prometheus-k8s -n openshift-monitoring Example output eyJhbGciOiJSUzI1NiIsImtpZCI6IlZiSUtwclcwbEJ2VW9We...
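As a convenience, you can capture the token in a shell variable instead of copying the long string by hand. This is a small sketch that assumes a Bash shell and reuses the same oc create token command shown above; the value in USDTOKEN can then be substituted for the pasted token in the next step.
USD TOKEN=USD(oc create token prometheus-k8s -n openshift-monitoring)
USD echo "USDTOKEN"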
Enter the following command to request information about the openshift_network_operator_live_migration_condition metric: USD oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -k -H "Authorization: Bearer <eyJhbGciOiJSUzI1NiIsImtpZCI6IlZiSUtwclcwbEJ2VW9We...>" "https://<openshift_API_endpoint>" --data-urlencode "query=openshift_network_operator_live_migration_condition" | jq Example output { "status": "success", "data": { "resultType": "vector", "result": [ { "metric": { "__name__": "openshift_network_operator_live_migration_condition", "container": "network-operator", "endpoint": "metrics", "instance": "10.0.83.62:9104", "job": "metrics", "namespace": "openshift-network-operator", "pod": "network-operator-6c87754bc6-c8qld", "prometheus": "openshift-monitoring/k8s", "service": "metrics", "type": "NetworkTypeMigrationInProgress" }, "value": [ 1717653579.587, "1" ] }, ... The table in "Information about limited live migration metrics" shows you the available metrics and the label values populated from the openshift_network_operator_live_migration_condition expression. Use this information to monitor progress or to troubleshoot the migration. 26.6.2.6.1. Information about limited live migration metrics The following table shows you the available metrics and the label values populated from the openshift_network_operator_live_migration_condition expression. Use this information to monitor progress or to troubleshoot the migration. Table 26.11. Limited live migration metrics Metric Label values openshift_network_operator_live_migration_blocked: A Prometheus gauge vector metric. A metric that contains a constant 1 value labeled with the reason that the CNI limited live migration might not have started. This metric is available when the CNI limited live migration has started by annotating the Network custom resource. This metric is not published unless the limited live migration is blocked. The list of label values includes the following: UnsupportedCNI : Unable to migrate to the unsupported target CNI. Valid CNI is OVNKubernetes when migrating from OpenShift SDN. UnsupportedHyperShiftCluster : Limited live migration is unsupported within an HCP cluster. UnsupportedSDNNetworkIsolationMode : OpenShift SDN is configured with an unsupported network isolation mode Multitenant . Migrate to a supported network isolation mode before performing limited live migration. UnsupportedMACVLANInterface : Remove the egress router or any pods which contain the pod annotation pod.network.openshift.io/assign-macvlan . Find the offending pod's namespace or pod name with the following command: oc get pods -Ao=jsonpath='{range .items[?(@.metadata.annotations.pod\.network\.openshift\.io/assign-macvlan=="")]}{@.metadata.namespace}{"\t"}{@.metadata.name}{"\n"}' . openshift_network_operator_live_migration_condition: A metric which represents the status of each condition type for the CNI limited live migration. The set of status condition types is defined for network.config to support observability of the CNI limited live migration. A 1 value represents condition status true . A 0 value represents false . -1 represents unknown. This metric is available when the CNI limited live migration has started by annotating the Network custom resource (CR). This metric is only available when the limited live migration has been triggered by adding the relevant annotation to the Network CR cluster, otherwise, it is not published.
If the following condition types are not present within the Network CR cluster, the metric and its labels are cleared. The list of label values includes the following: NetworkTypeMigrationInProgress NetworkTypeMigrationTargetCNIAvailable NetworkTypeMigrationTargetCNIInUse NetworkTypeMigrationOriginalCNIPurged NetworkTypeMigrationMTUReady 26.6.3. Additional resources Red Hat OpenShift Network Calculator Configuration parameters for the OVN-Kubernetes network plugin Backing up etcd About network policy Changing the cluster MTU MTU value selection Converting to IPv4/IPv6 dual-stack networking OVN-Kubernetes capabilities Configuring an egress IP address Configuring an egress firewall for a project OVN-Kubernetes egress firewall blocks process to deploy application as DeploymentConfig Enabling multicast for a project OpenShift SDN capabilities Configuring egress IPs for a project Configuring an egress firewall for a project Enabling multicast for a project Deploying an egress router pod in redirect mode Network [operator.openshift.io/v1 ] 26.7. Rolling back to the OpenShift SDN network provider As a cluster administrator, you can roll back to the OpenShift SDN network plugin from the OVN-Kubernetes network plugin using either the offline migration method or the limited live migration method. This can only be done after the migration to the OVN-Kubernetes network plugin has successfully completed. Note If you used the offline migration method to migrate to the OVN-Kubernetes network plugin from the OpenShift SDN network plugin, you should use the offline migration rollback method. If you used the limited live migration method to migrate to the OVN-Kubernetes network plugin from the OpenShift SDN network plugin, you should use the limited live migration rollback method. Note OpenShift SDN CNI is deprecated as of OpenShift Container Platform 4.14. As of OpenShift Container Platform 4.15, the network plugin is not an option for new installations. In a subsequent future release, the OpenShift SDN network plugin is planned to be removed and no longer supported. Red Hat will provide bug fixes and support for this feature until it is removed, but this feature will no longer receive enhancements. As an alternative to OpenShift SDN CNI, you can use OVN Kubernetes CNI instead. 26.7.1. Using the offline migration method to roll back to the OpenShift SDN network plugin Cluster administrators can roll back to the OpenShift SDN Container Network Interface (CNI) network plugin by using the offline migration method. During the migration you must manually reboot every node in your cluster. With the offline migration method, there is some downtime, during which your cluster is unreachable. Important You must wait until the migration process from OpenShift SDN to OVN-Kubernetes network plugin is successful before initiating a rollback. If a rollback to OpenShift SDN is required, the following table describes the process. Table 26.12. Performing a rollback to OpenShift SDN User-initiated steps Migration activity Suspend the MCO to ensure that it does not interrupt the migration. The MCO stops. Set the migration field of the Network.operator.openshift.io custom resource (CR) named cluster to OpenShiftSDN . Make sure the migration field is null before setting it to a value. CNO Updates the status of the Network.config.openshift.io CR named cluster accordingly. Update the networkType field. CNO Performs the following actions: Destroys the OVN-Kubernetes control plane pods.
Deploys the OpenShift SDN control plane pods. Updates the Multus objects to reflect the new network plugin. Reboot each node in the cluster. Cluster As nodes reboot, the cluster assigns IP addresses to pods on the OpenShift-SDN network. Enable the MCO after all nodes in the cluster reboot. MCO Rolls out an update to the systemd configuration necessary for OpenShift SDN; the MCO updates a single machine per pool at a time by default, so the total time the migration takes increases with the size of the cluster. Prerequisites The OpenShift CLI ( oc ) is installed. Access to the cluster as a user with the cluster-admin role is available. The cluster is installed on infrastructure configured with the OVN-Kubernetes network plugin. A recent backup of the etcd database is available. A manual reboot can be triggered for each node. The cluster is in a known good state, without any errors. Procedure Stop all of the machine configuration pools managed by the Machine Config Operator (MCO): Stop the master configuration pool by entering the following command in your CLI: USD oc patch MachineConfigPool master --type='merge' --patch \ '{ "spec": { "paused": true } }' Stop the worker machine configuration pool by entering the following command in your CLI: USD oc patch MachineConfigPool worker --type='merge' --patch \ '{ "spec":{ "paused": true } }' To prepare for the migration, set the migration field to null by entering the following command in your CLI: USD oc patch Network.operator.openshift.io cluster --type='merge' \ --patch '{ "spec": { "migration": null } }' Check that the migration status is empty for the Network.config.openshift.io object by entering the following command in your CLI. Empty command output indicates that the object is not in a migration operation. USD oc get Network.config cluster -o jsonpath='{.status.migration}' Apply the patch to the Network.operator.openshift.io object to set the network plugin back to OpenShift SDN by entering the following command in your CLI: USD oc patch Network.operator.openshift.io cluster --type='merge' \ --patch '{ "spec": { "migration": { "networkType": "OpenShiftSDN" } } }' Important If you applied the patch to the Network.config.openshift.io object before the patch operation finalizes on the Network.operator.openshift.io object, the Cluster Network Operator (CNO) enters into a degradation state and this causes a slight delay until the CNO recovers from the degraded state. 
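To avoid that situation, you can poll until the operator patch has been reflected before moving on. The following loop is only a sketch; it reuses the same jsonpath expression that the next step uses for the one-time check.
# Sketch: wait until the CNO has acknowledged the migration patch before
# patching the Network.config.openshift.io object.
until [ "USD(oc get Network.config cluster -o jsonpath='{.status.migration.networkType}')" = "OpenShiftSDN" ]
do
  echo "waiting for the migration status to report OpenShiftSDN" && sleep 5
done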
Confirm that the migration status of the network plugin for the Network.config.openshift.io cluster object is OpenShiftSDN by entering the following command in your CLI: USD oc get Network.config cluster -o jsonpath='{.status.migration.networkType}' Apply the patch to the Network.config.openshift.io object to set the network plugin back to OpenShift SDN by entering the following command in your CLI: USD oc patch Network.config.openshift.io cluster --type='merge' \ --patch '{ "spec": { "networkType": "OpenShiftSDN" } }' Optional: Disable automatic migration of several OVN-Kubernetes capabilities to the OpenShift SDN equivalents: Egress IPs Egress firewall Multicast To disable automatic migration of the configuration for any of the previously noted OpenShift SDN features, specify the following keys: USD oc patch Network.operator.openshift.io cluster --type='merge' \ --patch '{ "spec": { "migration": { "networkType": "OpenShiftSDN", "features": { "egressIP": <bool>, "egressFirewall": <bool>, "multicast": <bool> } } } }' where: bool : Specifies whether to enable migration of the feature. The default is true . Optional: You can customize the following settings for OpenShift SDN to meet your network infrastructure requirements: Maximum transmission unit (MTU) VXLAN port To customize either or both of the previously noted settings, customize and enter the following command in your CLI. If you do not need to change the default value, omit the key from the patch. USD oc patch Network.operator.openshift.io cluster --type=merge \ --patch '{ "spec":{ "defaultNetwork":{ "openshiftSDNConfig":{ "mtu":<mtu>, "vxlanPort":<port> }}}}' mtu The MTU for the VXLAN overlay network. This value is normally configured automatically, but if the nodes in your cluster do not all use the same MTU, then you must set this explicitly to 50 less than the smallest node MTU value. port The UDP port for the VXLAN overlay network. If a value is not specified, the default is 4789 . The port cannot be the same as the Geneve port that is used by OVN-Kubernetes. The default value for the Geneve port is 6081 . Example patch command USD oc patch Network.operator.openshift.io cluster --type=merge \ --patch '{ "spec":{ "defaultNetwork":{ "openshiftSDNConfig":{ "mtu":1200 }}}}' Reboot each node in your cluster. You can reboot the nodes in your cluster with either of the following approaches: With the oc rsh command, you can use a bash script similar to the following: #!/bin/bash readarray -t POD_NODES <<< "USD(oc get pod -n openshift-machine-config-operator -o wide| grep daemon|awk '{print USD1" "USD7}')" for i in "USD{POD_NODES[@]}" do read -r POD NODE <<< "USDi" until oc rsh -n openshift-machine-config-operator "USDPOD" chroot /rootfs shutdown -r +1 do echo "cannot reboot node USDNODE, retry" && sleep 3 done done With the ssh command, you can use a bash script similar to the following. The script assumes that you have configured sudo to not prompt for a password. #!/bin/bash for ip in USD(oc get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}') do echo "reboot node USDip" ssh -o StrictHostKeyChecking=no core@USDip sudo shutdown -r -t 3 done Wait until the Multus daemon set rollout completes. Run the following command to see your rollout status: USD oc -n openshift-multus rollout status daemonset/multus The name of the Multus pods is in the form of multus-<xxxxx> where <xxxxx> is a random sequence of letters. It might take several moments for the pods to restart. 
Example output Waiting for daemon set "multus" rollout to finish: 1 out of 6 new pods have been updated... ... Waiting for daemon set "multus" rollout to finish: 5 of 6 updated pods are available... daemon set "multus" successfully rolled out After the nodes in your cluster have rebooted and the multus pods are rolled out, start all of the machine configuration pools by running the following commands: Start the master configuration pool: USD oc patch MachineConfigPool master --type='merge' --patch \ '{ "spec": { "paused": false } }' Start the worker configuration pool: USD oc patch MachineConfigPool worker --type='merge' --patch \ '{ "spec": { "paused": false } }' As the MCO updates machines in each config pool, it reboots each node. By default, the MCO updates a single machine per pool at a time, so the time that the migration requires to complete grows with the size of the cluster. Confirm the status of the new machine configuration on the hosts: To list the machine configuration state and the name of the applied machine configuration, enter the following command in your CLI: USD oc describe node | egrep "hostname|machineconfig" Example output kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done Verify that the following statements are true: The value of the machineconfiguration.openshift.io/state field is Done . The value of the machineconfiguration.openshift.io/currentConfig field is equal to the value of the machineconfiguration.openshift.io/desiredConfig field. To confirm that the machine config is correct, enter the following command in your CLI: USD oc get machineconfig <config_name> -o yaml where <config_name> is the name of the machine config from the machineconfiguration.openshift.io/currentConfig field. Confirm that the migration succeeded: To confirm that the network plugin is OpenShift SDN, enter the following command in your CLI. The value of status.networkType must be OpenShiftSDN . USD oc get Network.config/cluster -o jsonpath='{.status.networkType}{"\n"}' To confirm that the cluster nodes are in the Ready state, enter the following command in your CLI: USD oc get nodes If a node is stuck in the NotReady state, investigate the machine config daemon pod logs and resolve any errors. To list the pods, enter the following command in your CLI: USD oc get pod -n openshift-machine-config-operator Example output NAME READY STATUS RESTARTS AGE machine-config-controller-75f756f89d-sjp8b 1/1 Running 0 37m machine-config-daemon-5cf4b 2/2 Running 0 43h machine-config-daemon-7wzcd 2/2 Running 0 43h machine-config-daemon-fc946 2/2 Running 0 43h machine-config-daemon-g2v28 2/2 Running 0 43h machine-config-daemon-gcl4f 2/2 Running 0 43h machine-config-daemon-l5tnv 2/2 Running 0 43h machine-config-operator-79d9c55d5-hth92 1/1 Running 0 37m machine-config-server-bsc8h 1/1 Running 0 43h machine-config-server-hklrm 1/1 Running 0 43h machine-config-server-k9rtx 1/1 Running 0 43h The names for the config daemon pods are in the following format: machine-config-daemon-<seq> . The <seq> value is a random five character alphanumeric sequence.
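Because the suffixes are random, it can be handy to scan all of the daemon pods in one pass rather than addressing each by name. The following loop is only a sketch; the container name machine-config-daemon is an assumption that may need adjusting for your release, and grep is expected to be available locally.
# Sketch: scan the logs of every machine config daemon pod for errors in one pass.
for pod in USD(oc get pods -n openshift-machine-config-operator -o name | grep machine-config-daemon)
do
  echo "=== USDpod ==="
  oc logs -n openshift-machine-config-operator "USDpod" -c machine-config-daemon | grep -i error || true
done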
To display the pod log for each machine config daemon pod shown in the output, enter the following command in your CLI: USD oc logs <pod> -n openshift-machine-config-operator where pod is the name of a machine config daemon pod. Resolve any errors in the logs shown by the output from the command. To confirm that your pods are not in an error state, enter the following command in your CLI: USD oc get pods --all-namespaces -o wide --sort-by='{.spec.nodeName}' If pods on a node are in an error state, reboot that node. Complete the following steps only if the migration succeeds and your cluster is in a good state: To remove the migration configuration from the Cluster Network Operator configuration object, enter the following command in your CLI: USD oc patch Network.operator.openshift.io cluster --type='merge' \ --patch '{ "spec": { "migration": null } }' To remove the OVN-Kubernetes configuration, enter the following command in your CLI: USD oc patch Network.operator.openshift.io cluster --type='merge' \ --patch '{ "spec": { "defaultNetwork": { "ovnKubernetesConfig":null } } }' To remove the OVN-Kubernetes network provider namespace, enter the following command in your CLI: USD oc delete namespace openshift-ovn-kubernetes 26.7.2. Using the limited live migration method to roll back to the OpenShift SDN network plugin As a cluster administrator, you can roll back to the OpenShift SDN Container Network Interface (CNI) network plugin by using the limited live migration method. During the migration with this method, nodes are automatically rebooted and service to the cluster is not interrupted. Important You must wait until the migration process from OpenShift SDN to OVN-Kubernetes network plugin is successful before initiating a rollback. If a rollback to OpenShift SDN is required, the following table describes the process. Table 26.13. Performing a rollback to OpenShift SDN User-initiated steps Migration activity Patch the cluster-level networking configuration by changing the networkType from OVNKubernetes to OpenShiftSDN . Cluster Network Operator (CNO) Performs the following actions: Sets migration related fields in the network.operator custom resource (CR) and waits for routable MTUs to be applied to all nodes by the Machine Config Operator (MCO). Patches the network.operator CR to set the migration mode to Live for OpenShiftSDN and deploys the OpenShiftSDN network plugin in migration mode. Deploys OVN-Kubernetes with hybrid overlay enabled. Waits for both CNI plugins to be deployed and updates the conditions in the status of the network.config CR. Triggers the MCO to apply the new machine config to each machine config pool, which includes node cordoning, draining, and rebooting. Removes migration-related fields from the network.operator CR and performs cleanup actions, such as deleting OpenShift SDN resources and redeploying OVN-Kubernetes in normal mode with the necessary configurations. Waits for the OpenShiftSDN redeployment and updates the status conditions in the network.config CR to indicate migration completion. Prerequisites The OpenShift CLI ( oc ) is installed. Access to the cluster as a user with the cluster-admin role is available. The cluster is installed on infrastructure configured with the OVN-Kubernetes network plugin. A recent backup of the etcd database is available. A manual reboot can be triggered for each node. The cluster is in a known good state, without any errors. 
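One way to confirm the last prerequisite before you start is to run a quick health check similar to the following. This is a sketch only; it assumes the standard oc get co column layout (AVAILABLE, PROGRESSING, DEGRADED) and is not a substitute for reviewing the cluster state yourself.
# Sketch: report any cluster Operator that is unavailable, progressing, or degraded,
# and verify that every node reports Ready.
oc get co --no-headers | awk 'USD3 != "True" || USD4 != "False" || USD5 != "False" { print; bad=1 } END { exit bad }' \
  && echo "all cluster Operators are healthy"
oc wait node --all --for=condition=Ready --timeout=60s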
Procedure To initiate the rollback to OpenShift SDN, enter the following command: USD oc patch Network.config.openshift.io cluster --type='merge' --patch '{"metadata":{"annotations":{"network.openshift.io/network-type-migration":""}},"spec":{"networkType":"OpenShiftSDN"}}' To watch the progress of your migration, enter the following command: USD watch -n1 'oc get network.config/cluster -o json | jq ".status.conditions[]|\"\\(.type) \\(.status) \\(.reason) \\(.message)\"" -r | column --table --table-columns NAME,STATUS,REASON,MESSAGE --table-columns-limit 4; echo; oc get mcp -o wide; echo; oc get node -o "custom-columns=NAME:metadata.name,STATE:metadata.annotations.machineconfiguration\\.openshift\\.io/state,DESIRED:metadata.annotations.machineconfiguration\\.openshift\\.io/desiredConfig,CURRENT:metadata.annotations.machineconfiguration\\.openshift\\.io/currentConfig,REASON:metadata.annotations.machineconfiguration\\.openshift\\.io/reason"' The command prints the following information every second: The conditions on the status of the network.config.openshift.io/cluster object, reporting the progress of the migration. The status of different nodes with respect to the machine-config-operator resource, including whether they are upgrading or have been upgraded, as well as their current and desired configurations. Complete the following steps only if the migration succeeds and your cluster is in a good state: Remove the network.openshift.io/network-type-migration= annotation from the network.config custom resource by entering the following command: USD oc annotate network.config cluster network.openshift.io/network-type-migration- Remove the OVN-Kubernetes network provider namespace by entering the following command: USD oc delete namespace openshift-ovn-kubernetes 26.8. Converting to IPv4/IPv6 dual-stack networking As a cluster administrator, you can convert your IPv4 single-stack cluster to a dual-network cluster network that supports IPv4 and IPv6 address families. After converting to dual-stack, all newly created pods are dual-stack enabled. Note While using dual-stack networking, you cannot use IPv4-mapped IPv6 addresses, such as ::FFFF:198.51.100.1 , where IPv6 is required. A dual-stack network is supported on clusters provisioned on bare metal, IBM Power(R), IBM Z(R) infrastructure, single-node OpenShift, and VMware vSphere. 26.8.1. Converting to a dual-stack cluster network As a cluster administrator, you can convert your single-stack cluster network to a dual-stack cluster network. Note After converting to dual-stack networking only newly created pods are assigned IPv6 addresses. Any pods created before the conversion must be recreated to receive an IPv6 address. Prerequisites You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with cluster-admin privileges. Your cluster uses the OVN-Kubernetes network plugin. The cluster nodes have IPv6 addresses. You have configured an IPv6-enabled router based on your infrastructure. Procedure To specify IPv6 address blocks for the cluster and service networks, create a file containing the following YAML: - op: add path: /spec/clusterNetwork/- value: 1 cidr: fd01::/48 hostPrefix: 64 - op: add path: /spec/serviceNetwork/- value: fd02::/112 2 1 Specify an object with the cidr and hostPrefix fields. The host prefix must be 64 or greater. The IPv6 CIDR prefix must be large enough to accommodate the specified host prefix. 2 Specify an IPv6 CIDR with a prefix of 112 . Kubernetes uses only the lowest 16 bits. 
For a prefix of 112 , IP addresses are assigned from 112 to 128 bits. To patch the cluster network configuration, enter the following command: USD oc patch network.config.openshift.io cluster \ --type='json' --patch-file <file>.yaml where: file Specifies the name of the file you created in the step. Example output network.config.openshift.io/cluster patched Verification Complete the following step to verify that the cluster network recognizes the IPv6 address blocks that you specified in the procedure. Display the network configuration: USD oc describe network Example output Status: Cluster Network: Cidr: 10.128.0.0/14 Host Prefix: 23 Cidr: fd01::/48 Host Prefix: 64 Cluster Network MTU: 1400 Network Type: OVNKubernetes Service Network: 172.30.0.0/16 fd02::/112 26.8.2. Converting to a single-stack cluster network As a cluster administrator, you can convert your dual-stack cluster network to a single-stack cluster network. Prerequisites You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with cluster-admin privileges. Your cluster uses the OVN-Kubernetes network plugin. The cluster nodes have IPv6 addresses. You have enabled dual-stack networking. Procedure Edit the networks.config.openshift.io custom resource (CR) by running the following command: USD oc edit networks.config.openshift.io Remove the IPv6 specific configuration that you have added to the cidr and hostPrefix fields in the procedure. 26.9. Configuring OVN-Kubernetes internal IP address subnets As a cluster administrator, you can change the IP address ranges that the OVN-Kubernetes network plugin uses for the join and transit subnets. 26.9.1. Configuring the OVN-Kubernetes join subnet You can change the join subnet used by OVN-Kubernetes to avoid conflicting with any existing subnets already in use in your environment. Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster with a user with cluster-admin privileges. Ensure that the cluster uses the OVN-Kubernetes network plugin. Procedure To change the OVN-Kubernetes join subnet, enter the following command: USD oc patch network.operator.openshift.io cluster --type='merge' \ -p='{"spec":{"defaultNetwork":{"ovnKubernetesConfig": {"ipv4":{"internalJoinSubnet": "<join_subnet>"}, "ipv6":{"internalJoinSubnet": "<join_subnet>"}}}}}' where: <join_subnet> Specifies an IP address subnet for internal use by OVN-Kubernetes. The subnet must be larger than the number of nodes in the cluster and it must be large enough to accommodate one IP address per node in the cluster. This subnet cannot overlap with any other subnets used by OpenShift Container Platform or on the host itself. The default value for IPv4 is 100.64.0.0/16 and the default value for IPv6 is fd98::/64 . Example output network.operator.openshift.io/cluster patched Verification To confirm that the configuration is active, enter the following command: USD oc get network.operator.openshift.io \ -o jsonpath="{.items[0].spec.defaultNetwork}" It can take up to 30 minutes for this change to take effect. Example output 26.9.2. Configuring the OVN-Kubernetes transit subnet You can change the transit subnet used by OVN-Kubernetes to avoid conflicting with any existing subnets already in use in your environment. Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster with a user with cluster-admin privileges. Ensure that the cluster uses the OVN-Kubernetes network plugin. 
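Before changing the value, it can be useful to see whether a custom transit or join subnet is already configured. The following command is a sketch that narrows the verification query used elsewhere in this section down to the IPv4 subnet fields; if nothing has been customized, the output is empty.
USD oc get network.operator.openshift.io -o jsonpath='{.items[0].spec.defaultNetwork.ovnKubernetesConfig.ipv4}{"\n"}'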
Procedure To change the OVN-Kubernetes transit subnet, enter the following command: USD oc patch network.operator.openshift.io cluster --type='merge' \ -p='{"spec":{"defaultNetwork":{"ovnKubernetesConfig": {"ipv4":{"internalTransitSwitchSubnet": "<transit_subnet>"}, "ipv6":{"internalTransitSwitchSubnet": "<transit_subnet>"}}}}}' where: <transit_subnet> Specifies an IP address subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. The default value for IPv4 is 100.88.0.0/16 and the default value for IPv6 is fd97::/64 . Example output network.operator.openshift.io/cluster patched Verification To confirm that the configuration is active, enter the following command: USD oc get network.operator.openshift.io \ -o jsonpath="{.items[0].spec.defaultNetwork}" It can take up to 30 minutes for this change to take effect. Example output 26.10. Logging for egress firewall and network policy rules As a cluster administrator, you can configure audit logging for your cluster and enable logging for one or more namespaces. OpenShift Container Platform produces audit logs for both egress firewalls and network policies. Note Audit logging is available for only the OVN-Kubernetes network plugin . 26.10.1. Audit logging The OVN-Kubernetes network plugin uses Open Virtual Network (OVN) ACLs to manage egress firewalls and network policies. Audit logging exposes allow and deny ACL events. You can configure the destination for audit logs, such as a syslog server or a UNIX domain socket. Regardless of any additional configuration, an audit log is always saved to /var/log/ovn/acl-audit-log.log on each OVN-Kubernetes pod in the cluster. You can enable audit logging for each namespace by annotating each namespace configuration with a k8s.ovn.org/acl-logging section. In the k8s.ovn.org/acl-logging section, you must specify allow , deny , or both values to enable audit logging for a namespace. Note A network policy does not support setting the Pass action set as a rule. The ACL-logging implementation logs access control list (ACL) events for a network. You can view these logs to analyze any potential security issues. Example namespace annotation kind: Namespace apiVersion: v1 metadata: name: example1 annotations: k8s.ovn.org/acl-logging: |- { "deny": "info", "allow": "info" } To view the default ACL logging configuration values, see the policyAuditConfig object in the cluster-network-03-config.yml file. If required, you can change the ACL logging configuration values for log file parameters in this file. The logging message format is compatible with syslog as defined by RFC5424. The syslog facility is configurable and defaults to local0 . The following example shows key parameters and their values outputted in a log message: Example logging message that outputs parameters and their values <timestamp>|<message_serial>|acl_log(ovn_pinctrl0)|<severity>|name="<acl_name>", verdict="<verdict>", severity="<severity>", direction="<direction>": <flow> Where: <timestamp> states the time and date for the creation of a log message. <message_serial> lists the serial number for a log message. acl_log(ovn_pinctrl0) is a literal string that prints the location of the log message in the OVN-Kubernetes plugin. <severity> sets the severity level for a log message. If you enable audit logging that supports allow and deny tasks then two severity levels show in the log message output. 
<name> states the name of the ACL-logging implementation in the OVN Network Bridging Database ( nbdb ) that was created by the network policy. <verdict> can be either allow or drop . <direction> can be either to-lport or from-lport to indicate that the policy was applied to traffic going to or away from a pod. <flow> shows packet information in a format equivalent to the OpenFlow protocol. This parameter comprises Open vSwitch (OVS) fields. The following example shows OVS fields that the flow parameter uses to extract packet information from system memory: Example of OVS fields used by the flow parameter to extract packet information <proto>,vlan_tci=0x0000,dl_src=<src_mac>,dl_dst=<source_mac>,nw_src=<source_ip>,nw_dst=<target_ip>,nw_tos=<tos_dscp>,nw_ecn=<tos_ecn>,nw_ttl=<ip_ttl>,nw_frag=<fragment>,tp_src=<tcp_src_port>,tp_dst=<tcp_dst_port>,tcp_flags=<tcp_flags> Where: <proto> states the protocol. Valid values are tcp and udp . vlan_tci=0x0000 states the VLAN header as 0 because a VLAN ID is not set for internal pod network traffic. <src_mac> specifies the source for the Media Access Control (MAC) address. <source_mac> specifies the destination for the MAC address. <source_ip> lists the source IP address. <target_ip> lists the target IP address. <tos_dscp> states Differentiated Services Code Point (DSCP) values to classify and prioritize certain network traffic over other traffic. <tos_ecn> states Explicit Congestion Notification (ECN) values that indicate any congested traffic in your network. <ip_ttl> states the Time To Live (TTL) information for a packet. <fragment> specifies what type of IP fragments or IP non-fragments to match. <tcp_src_port> shows the source port for the TCP and UDP protocols. <tcp_dst_port> lists the destination port for TCP and UDP protocols. <tcp_flags> supports numerous flags such as SYN , ACK , PSH and so on. If you need to set multiple values, then each value is separated by a vertical bar ( | ). The UDP protocol does not support this parameter. Note For more information about the field descriptions, go to the OVS manual page for ovs-fields . Example ACL deny log entry for a network policy 2023-11-02T16:28:54.139Z|00004|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:Ingress", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn 2023-11-02T16:28:55.187Z|00005|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:Ingress", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn 2023-11-02T16:28:57.235Z|00006|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:Ingress", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn The following table describes namespace annotation values: Table 26.14. Audit logging namespace annotation for k8s.ovn.org/acl-logging Field Description deny Blocks namespace access to any traffic that matches an ACL rule with the deny action. The field supports alert , warning , notice , info , or debug values.
allow Permits namespace access to any traffic that matches an ACL rule with the allow action. The field supports alert , warning , notice , info , or debug values. pass A pass action applies to an admin network policy's ACL rule. A pass action allows either the network policy in the namespace or the baseline admin network policy rule to evaluate all incoming and outgoing traffic. A network policy does not support a pass action. Additional resources AdminNetworkPolicy 26.10.2. Audit configuration The configuration for audit logging is specified as part of the OVN-Kubernetes cluster network provider configuration. The following YAML illustrates the default values for the audit logging: Audit logging configuration apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: policyAuditConfig: destination: "null" maxFileSize: 50 rateLimit: 20 syslogFacility: local0 The following table describes the configuration fields for audit logging. Table 26.15. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. maxLogFiles integer The maximum number of log files that are retained. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . 26.10.3. Configuring egress firewall and network policy auditing for a cluster As a cluster administrator, you can customize audit logging for your cluster. Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster with a user with cluster-admin privileges. 
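Before you edit the configuration, you might want to confirm which audit settings are currently in effect. The following command is a minimal sketch that reads the policyAuditConfig stanza shown above directly from the cluster Network operator configuration; if the stanza has never been set explicitly, the output can be empty and the defaults listed in the table apply.
USD oc get network.operator.openshift.io cluster \
  -o jsonpath='{.spec.defaultNetwork.ovnKubernetesConfig.policyAuditConfig}'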
Procedure To customize the audit logging configuration, enter the following command: USD oc edit network.operator.openshift.io/cluster Tip You can alternatively customize and apply the following YAML to configure audit logging: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: policyAuditConfig: destination: "null" maxFileSize: 50 rateLimit: 20 syslogFacility: local0 Verification To create a namespace with network policies complete the following steps: Create a namespace for verification: USD cat <<EOF| oc create -f - kind: Namespace apiVersion: v1 metadata: name: verify-audit-logging annotations: k8s.ovn.org/acl-logging: '{ "deny": "alert", "allow": "alert" }' EOF Example output namespace/verify-audit-logging created Create network policies for the namespace: USD cat <<EOF| oc create -n verify-audit-logging -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: deny-all spec: podSelector: matchLabels: policyTypes: - Ingress - Egress --- apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-same-namespace namespace: verify-audit-logging spec: podSelector: {} policyTypes: - Ingress - Egress ingress: - from: - podSelector: {} egress: - to: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: verify-audit-logging EOF Example output networkpolicy.networking.k8s.io/deny-all created networkpolicy.networking.k8s.io/allow-from-same-namespace created Create a pod for source traffic in the default namespace: USD cat <<EOF| oc create -n default -f - apiVersion: v1 kind: Pod metadata: name: client spec: containers: - name: client image: registry.access.redhat.com/rhel7/rhel-tools command: ["/bin/sh", "-c"] args: ["sleep inf"] EOF Create two pods in the verify-audit-logging namespace: USD for name in client server; do cat <<EOF| oc create -n verify-audit-logging -f - apiVersion: v1 kind: Pod metadata: name: USD{name} spec: containers: - name: USD{name} image: registry.access.redhat.com/rhel7/rhel-tools command: ["/bin/sh", "-c"] args: ["sleep inf"] EOF done Example output pod/client created pod/server created To generate traffic and produce network policy audit log entries, complete the following steps: Obtain the IP address for pod named server in the verify-audit-logging namespace: USD POD_IP=USD(oc get pods server -n verify-audit-logging -o jsonpath='{.status.podIP}') Ping the IP address from the command from the pod named client in the default namespace and confirm that all packets are dropped: USD oc exec -it client -n default -- /bin/ping -c 2 USDPOD_IP Example output PING 10.128.2.55 (10.128.2.55) 56(84) bytes of data. --- 10.128.2.55 ping statistics --- 2 packets transmitted, 0 received, 100% packet loss, time 2041ms Ping the IP address saved in the POD_IP shell environment variable from the pod named client in the verify-audit-logging namespace and confirm that all packets are allowed: USD oc exec -it client -n verify-audit-logging -- /bin/ping -c 2 USDPOD_IP Example output PING 10.128.0.86 (10.128.0.86) 56(84) bytes of data. 
64 bytes from 10.128.0.86: icmp_seq=1 ttl=64 time=2.21 ms 64 bytes from 10.128.0.86: icmp_seq=2 ttl=64 time=0.440 ms --- 10.128.0.86 ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 1001ms rtt min/avg/max/mdev = 0.440/1.329/2.219/0.890 ms Display the latest entries in the network policy audit log: USD for pod in USD(oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-node --no-headers=true | awk '{ print USD1 }') ; do oc exec -it USDpod -n openshift-ovn-kubernetes -- tail -4 /var/log/ovn/acl-audit-log.log done Example output 2023-11-02T16:28:54.139Z|00004|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:Ingress", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn 2023-11-02T16:28:55.187Z|00005|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:Ingress", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn 2023-11-02T16:28:57.235Z|00006|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:Ingress", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn 2023-11-02T16:49:57.909Z|00028|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:allow-from-same-namespace:Egress:0", verdict=allow, severity=alert, direction=from-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0 2023-11-02T16:49:57.909Z|00029|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:allow-from-same-namespace:Ingress:0", verdict=allow, severity=alert, direction=to-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0 2023-11-02T16:49:58.932Z|00030|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:allow-from-same-namespace:Egress:0", verdict=allow, severity=alert, direction=from-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0 2023-11-02T16:49:58.932Z|00031|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:allow-from-same-namespace:Ingress:0", verdict=allow, severity=alert, direction=to-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0 26.10.4. Enabling egress firewall and network policy audit logging for a namespace As a cluster administrator, you can enable audit logging for a namespace. Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster with a user with cluster-admin privileges. Procedure To enable audit logging for a namespace, enter the following command: USD oc annotate namespace <namespace> \ k8s.ovn.org/acl-logging='{ "deny": "alert", "allow": "notice" }' where: <namespace> Specifies the name of the namespace. 
Tip You can alternatively apply the following YAML to enable audit logging: kind: Namespace apiVersion: v1 metadata: name: <namespace> annotations: k8s.ovn.org/acl-logging: |- { "deny": "alert", "allow": "notice" } Example output namespace/verify-audit-logging annotated Verification Display the latest entries in the audit log: USD for pod in USD(oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-node --no-headers=true | awk '{ print USD1 }') ; do oc exec -it USDpod -n openshift-ovn-kubernetes -- tail -4 /var/log/ovn/acl-audit-log.log done Example output 2023-11-02T16:49:57.909Z|00028|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:allow-from-same-namespace:Egress:0", verdict=allow, severity=alert, direction=from-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0 2023-11-02T16:49:57.909Z|00029|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:allow-from-same-namespace:Ingress:0", verdict=allow, severity=alert, direction=to-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0 2023-11-02T16:49:58.932Z|00030|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:allow-from-same-namespace:Egress:0", verdict=allow, severity=alert, direction=from-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0 2023-11-02T16:49:58.932Z|00031|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:allow-from-same-namespace:Ingress:0", verdict=allow, severity=alert, direction=to-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0 26.10.5. Disabling egress firewall and network policy audit logging for a namespace As a cluster administrator, you can disable audit logging for a namespace. Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster with a user with cluster-admin privileges. Procedure To disable audit logging for a namespace, enter the following command: USD oc annotate --overwrite namespace <namespace> k8s.ovn.org/acl-logging- where: <namespace> Specifies the name of the namespace. Tip You can alternatively apply the following YAML to disable audit logging: kind: Namespace apiVersion: v1 metadata: name: <namespace> annotations: k8s.ovn.org/acl-logging: null Example output namespace/verify-audit-logging annotated 26.10.6. Additional resources About network policy Configuring an egress firewall for a project 26.11. Configuring IPsec encryption By enabling IPsec, you can encrypt both internal pod-to-pod cluster traffic between nodes and external traffic between pods and IPsec endpoints external to your cluster. All pod-to-pod network traffic between nodes on the OVN-Kubernetes cluster network is encrypted with IPsec in Transport mode . IPsec is disabled by default. You can enable IPsec either during or after installing the cluster. For information about cluster installation, see OpenShift Container Platform installation overview . The following support limitations exist for IPsec on a OpenShift Container Platform cluster: You must disable IPsec before updating to OpenShift Container Platform 4.15. 
There is a known issue that can cause interruptions in pod-to-pod communication if you update without disabling IPsec. ( OCPBUGS-43323 ) On IBM Cloud(R), IPsec supports only NAT-T. Encapsulating Security Payload (ESP) is not supported on this platform. If your cluster uses hosted control planes for Red Hat OpenShift Container Platform, IPsec is not supported for IPsec encryption of either pod-to-pod or traffic to external hosts. Using ESP hardware offloading on any network interface is not supported if one or more of those interfaces is attached to Open vSwitch (OVS). Enabling IPsec for your cluster triggers the use of IPsec with interfaces attached to OVS. By default, OpenShift Container Platform disables ESP hardware offloading on any interfaces attached to OVS. If you enabled IPsec for network interfaces that are not attached to OVS, a cluster administrator must manually disable ESP hardware offloading on each interface that is not attached to OVS. The following list outlines key tasks in the IPsec documentation: Enable and disable IPsec after cluster installation. Configure IPsec encryption for traffic between the cluster and external hosts. Verify that IPsec encrypts traffic between pods on different nodes. 26.11.1. Modes of operation When using IPsec on your OpenShift Container Platform cluster, you can choose from the following operating modes: Table 26.16. IPsec modes of operation Mode Description Default Disabled No traffic is encrypted. This is the cluster default. Yes Full Pod-to-pod traffic is encrypted as described in "Types of network traffic flows encrypted by pod-to-pod IPsec". Traffic to external nodes may be encrypted after you complete the required configuration steps for IPsec. No External Traffic to external nodes may be encrypted after you complete the required configuration steps for IPsec. No 26.11.2. Prerequisites For IPsec support for encrypting traffic to external hosts, ensure that the following prerequisites are met: The OVN-Kubernetes network plugin must be configured in local gateway mode, where ovnKubernetesConfig.gatewayConfig.routingViaHost=true . The NMState Operator is installed. This Operator is required for specifying the IPsec configuration. For more information, see About the Kubernetes NMState Operator . Note The NMState Operator is supported on Google Cloud Platform (GCP) only for configuring IPsec. The Butane tool ( butane ) is installed. To install Butane, see Installing Butane . These prerequisites are required to add certificates into the host NSS database and to configure IPsec to communicate with external hosts. 26.11.3. Network connectivity requirements when IPsec is enabled You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. Table 26.17. Ports used for all-machine to all-machine communications Protocol Port Description UDP 500 IPsec IKE packets 4500 IPsec NAT-T packets ESP N/A IPsec Encapsulating Security Payload (ESP) 26.11.4. IPsec encryption for pod-to-pod traffic For IPsec encryption of pod-to-pod traffic, the following sections describe which specific pod-to-pod traffic is encrypted, what kind of encryption protocol is used, and how X.509 certificates are handled. These sections do not apply to IPsec encryption between the cluster and external hosts, which you must configure manually for your specific external network infrastructure. 26.11.4.1. 
Types of network traffic flows encrypted by pod-to-pod IPsec With IPsec enabled, only the following network traffic flows between pods are encrypted: Traffic between pods on different nodes on the cluster network Traffic from a pod on the host network to a pod on the cluster network The following traffic flows are not encrypted: Traffic between pods on the same node on the cluster network Traffic between pods on the host network Traffic from a pod on the cluster network to a pod on the host network The encrypted and unencrypted flows are illustrated in the following diagram: 26.11.4.2. Encryption protocol and IPsec mode The encrypt cipher used is AES-GCM-16-256 . The integrity check value (ICV) is 16 bytes. The key length is 256 bits. The IPsec mode used is Transport mode , a mode that encrypts end-to-end communication by adding an Encapsulated Security Payload (ESP) header to the IP header of the original packet and encrypts the packet data. OpenShift Container Platform does not currently use or support IPsec Tunnel mode for pod-to-pod communication. 26.11.4.3. Security certificate generation and rotation The Cluster Network Operator (CNO) generates a self-signed X.509 certificate authority (CA) that is used by IPsec for encryption. Certificate signing requests (CSRs) from each node are automatically fulfilled by the CNO. The CA is valid for 10 years. The individual node certificates are valid for 5 years and are automatically rotated after 4 1/2 years elapse. 26.11.5. IPsec encryption for external traffic OpenShift Container Platform supports IPsec encryption for traffic to external hosts with TLS certificates that you must supply. 26.11.5.1. Supported platforms This feature is supported on the following platforms: Bare metal Google Cloud Platform (GCP) Red Hat OpenStack Platform (RHOSP) VMware vSphere Important If you have Red Hat Enterprise Linux (RHEL) worker nodes, these do not support IPsec encryption for external traffic. If your cluster uses hosted control planes for Red Hat OpenShift Container Platform, configuring IPsec for encrypting traffic to external hosts is not supported. 26.11.5.2. Limitations Ensure that the following prohibitions are observed: IPv6 configuration is not currently supported by the NMState Operator when configuring IPsec for external traffic. Certificate common names (CN) in the provided certificate bundle must not begin with the ovs_ prefix, because this naming can conflict with pod-to-pod IPsec CN names in the Network Security Services (NSS) database of each node. 26.11.6. Enabling IPsec encryption As a cluster administrator, you can enable pod-to-pod IPsec encryption and IPsec encryption between the cluster and external IPsec endpoints. You can configure IPsec in either of the following modes: Full : Encryption for pod-to-pod and external traffic External : Encryption for external traffic If you need to configure encryption for external traffic in addition to pod-to-pod traffic, you must also complete the "Configuring IPsec encryption for external traffic" procedure. Prerequisites Install the OpenShift CLI ( oc ). You are logged in to the cluster as a user with cluster-admin privileges. You have reduced the size of your cluster MTU by 46 bytes to allow for the overhead of the IPsec ESP header. 
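If you are unsure of the MTU value that the cluster currently reports, you can check it before you proceed. The following command is a sketch that assumes the standard clusterNetworkMTU status field on the cluster network configuration object; changing the MTU itself is a separate, disruptive procedure that is described in "Changing the MTU for the cluster network".
USD oc get network.config cluster -o jsonpath='{.status.clusterNetworkMTU}'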
Procedure To enable IPsec encryption, enter the following command: USD oc patch networks.operator.openshift.io cluster --type=merge \ -p '{ "spec":{ "defaultNetwork":{ "ovnKubernetesConfig":{ "ipsecConfig":{ "mode":<mode> }}}}}' where: mode Specify External to encrypt only traffic to external hosts or specify Full to encrypt pod-to-pod traffic and optionally traffic to external hosts. By default, IPsec is disabled. Optional: If you need to encrypt traffic to external hosts, complete the "Configuring IPsec encryption for external traffic" procedure.
Verification To find the names of the OVN-Kubernetes data plane pods, enter the following command: USD oc get pods -n openshift-ovn-kubernetes -l=app=ovnkube-node
Example output ovnkube-node-5xqbf 8/8 Running 0 28m ovnkube-node-6mwcx 8/8 Running 0 29m ovnkube-node-ck5fr 8/8 Running 0 31m ovnkube-node-fr4ld 8/8 Running 0 26m ovnkube-node-wgs4l 8/8 Running 0 33m ovnkube-node-zfvcl 8/8 Running 0 34m
Verify that IPsec is enabled on your cluster by running the following command: Note As a cluster administrator, you can verify that IPsec is enabled between pods on your cluster when IPsec is configured in Full mode. This step does not verify whether IPsec is working between your cluster and external hosts. USD oc -n openshift-ovn-kubernetes rsh ovnkube-node-<XXXXX> ovn-nbctl --no-leader-only get nb_global . ipsec where: <XXXXX> Specifies the random sequence of letters for a pod from the previous step.
Example output true
26.11.7. Configuring IPsec encryption for external traffic As a cluster administrator, to encrypt external traffic with IPsec you must configure IPsec for your network infrastructure, including providing PKCS#12 certificates. Because this procedure uses Butane to create machine configs, you must have the butane command installed. Note After you apply the machine config, the Machine Config Operator reboots affected nodes in your cluster to roll out the new machine config.
Prerequisites Install the OpenShift CLI ( oc ). You have installed the butane utility on your local computer. You have installed the NMState Operator on the cluster. You are logged in to the cluster as a user with cluster-admin privileges. You have an existing PKCS#12 certificate for the IPsec endpoint and a CA cert in PEM format. You enabled IPsec in either Full or External mode on your cluster. The OVN-Kubernetes network plugin must be configured in local gateway mode, where ovnKubernetesConfig.gatewayConfig.routingViaHost=true .
Procedure Create an IPsec configuration with an NMState Operator node network configuration policy. For more information, see Libreswan as an IPsec VPN implementation . To identify the IP address of the cluster node that is the IPsec endpoint, enter the following command: USD oc get nodes -o wide Create a file named ipsec-config.yaml that contains a node network configuration policy for the NMState Operator, such as in the following examples. For an overview about NodeNetworkConfigurationPolicy objects, see The Kubernetes NMState project .
Example NMState IPsec transport configuration apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: ipsec-config spec: nodeSelector: kubernetes.io/hostname: "<hostname>" 1 desiredState: interfaces: - name: <interface_name> 2 type: ipsec libreswan: left: <cluster_node> 3 leftid: '%fromcert' leftrsasigkey: '%cert' leftcert: left_server leftmodecfgclient: false right: <external_host> 4 rightid: '%fromcert' rightrsasigkey: '%cert' rightsubnet: <external_address>/32 5 ikev2: insist type: transport 1 Specifies the host name to apply the policy to. This host serves as the left side host in the IPsec configuration. 2 Specifies the name of the interface to create on the host. 3 Specifies the host name of the cluster node that terminates the IPsec tunnel on the cluster side. The name should match SAN [Subject Alternate Name] from your supplied PKCS#12 certificates. 4 Specifies the external host name, such as host.example.com . The name should match the SAN [Subject Alternate Name] from your supplied PKCS#12 certificates. 5 Specifies the IP address of the external host, such as 10.1.2.3/32 . Example NMState IPsec tunnel configuration apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: ipsec-config spec: nodeSelector: kubernetes.io/hostname: "<hostname>" 1 desiredState: interfaces: - name: <interface_name> 2 type: ipsec libreswan: left: <cluster_node> 3 leftid: '%fromcert' leftmodecfgclient: false leftrsasigkey: '%cert' leftcert: left_server right: <external_host> 4 rightid: '%fromcert' rightrsasigkey: '%cert' rightsubnet: <external_address>/32 5 ikev2: insist type: tunnel 1 Specifies the host name to apply the policy to. This host serves as the left side host in the IPsec configuration. 2 Specifies the name of the interface to create on the host. 3 Specifies the host name of the cluster node that terminates the IPsec tunnel on the cluster side. The name should match SAN [Subject Alternate Name] from your supplied PKCS#12 certificates. 4 Specifies the external host name, such as host.example.com . The name should match the SAN [Subject Alternate Name] from your supplied PKCS#12 certificates. 5 Specifies the IP address of the external host, such as 10.1.2.3/32 . To configure the IPsec interface, enter the following command: USD oc create -f ipsec-config.yaml Provide the following certificate files to add to the Network Security Services (NSS) database on each host. These files are imported as part of the Butane configuration in subsequent steps. 
left_server.p12 : The certificate bundle for the IPsec endpoints ca.pem : The certificate authority that you signed your certificates with Create a machine config to add your certificates to the cluster: To create Butane config files for the control plane and worker nodes, enter the following command: USD for role in master worker; do cat >> "99-ipsec-USD{role}-endpoint-config.bu" <<-EOF variant: openshift version: 4.15.0 metadata: name: 99-USD{role}-import-certs labels: machineconfiguration.openshift.io/role: USDrole systemd: units: - name: ipsec-import.service enabled: true contents: | [Unit] Description=Import external certs into ipsec NSS Before=ipsec.service [Service] Type=oneshot ExecStart=/usr/local/bin/ipsec-addcert.sh RemainAfterExit=false StandardOutput=journal [Install] WantedBy=multi-user.target storage: files: - path: /etc/pki/certs/ca.pem mode: 0400 overwrite: true contents: local: ca.pem - path: /etc/pki/certs/left_server.p12 mode: 0400 overwrite: true contents: local: left_server.p12 - path: /usr/local/bin/ipsec-addcert.sh mode: 0740 overwrite: true contents: inline: | #!/bin/bash -e echo "importing cert to NSS" certutil -A -n "CA" -t "CT,C,C" -d /var/lib/ipsec/nss/ -i /etc/pki/certs/ca.pem pk12util -W "" -i /etc/pki/certs/left_server.p12 -d /var/lib/ipsec/nss/ certutil -M -n "left_server" -t "u,u,u" -d /var/lib/ipsec/nss/ EOF done
To transform the Butane files that you created in the previous step into machine configs, enter the following command: USD for role in master worker; do butane -d . 99-ipsec-USD{role}-endpoint-config.bu -o ./99-ipsec-USDrole-endpoint-config.yaml done
To apply the machine configs to your cluster, enter the following command: USD for role in master worker; do oc apply -f 99-ipsec-USD{role}-endpoint-config.yaml done
Important As the Machine Config Operator (MCO) updates machines in each machine config pool, it reboots each node one by one. You must wait until all the nodes are updated before external IPsec connectivity is available. Check the machine config pool status by entering the following command: USD oc get mcp A successfully updated node has the following status: UPDATED=true , UPDATING=false , DEGRADED=false . Note By default, the MCO updates one machine per pool at a time, causing the total time the migration takes to increase with the size of the cluster.
To confirm that IPsec machine configs rolled out successfully, enter the following commands: Confirm that the IPsec machine configs were created: USD oc get mc | grep ipsec
Example output 80-ipsec-master-extensions 3.2.0 6d15h 80-ipsec-worker-extensions 3.2.0 6d15h
Confirm that the IPsec extensions are applied to control plane nodes: USD oc get mcp master -o yaml | grep 80-ipsec-master-extensions -c Expected output 2
Confirm that the IPsec extensions are applied to worker nodes: USD oc get mcp worker -o yaml | grep 80-ipsec-worker-extensions -c Expected output 2
Additional resources For more information about the nmstate IPsec API, see IPsec Encryption
26.11.8. Disabling IPsec encryption for an external IPsec endpoint As a cluster administrator, you can remove an existing IPsec tunnel to an external host.
Prerequisites Install the OpenShift CLI ( oc ). You are logged in to the cluster as a user with cluster-admin privileges. You enabled IPsec in either Full or External mode on your cluster.
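The procedure that follows requires the interface name of the existing IPsec tunnel ( <tunnel_name> ). If you do not remember it, one way to look it up is to list the NMState policies and inspect the one that created the tunnel. This is a sketch that assumes the policy was created with the example name ipsec-config used earlier in this section; the nncp short name resolves to NodeNetworkConfigurationPolicy .
USD oc get nncp
USD oc get nncp ipsec-config -o yaml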
Procedure Create a file named remove-ipsec-tunnel.yaml with the following YAML: kind: NodeNetworkConfigurationPolicy apiVersion: nmstate.io/v1 metadata: name: <name> spec: nodeSelector: kubernetes.io/hostname: <node_name> desiredState: interfaces: - name: <tunnel_name> type: ipsec state: absent where: name Specifies a name for the node network configuration policy. node_name Specifies the name of the node where the IPsec tunnel that you want to remove exists. tunnel_name Specifies the interface name for the existing IPsec tunnel. To remove the IPsec tunnel, enter the following command: USD oc apply -f remove-ipsec-tunnel.yaml 26.11.9. Disabling IPsec encryption As a cluster administrator, you can disable IPsec encryption. Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster with a user with cluster-admin privileges. Procedure To disable IPsec encryption, enter the following command: USD oc patch networks.operator.openshift.io cluster --type=merge \ -p '{ "spec":{ "defaultNetwork":{ "ovnKubernetesConfig":{ "ipsecConfig":{ "mode":"Disabled" }}}}}' Optional: You can increase the size of your cluster MTU by 46 bytes because there is no longer any overhead from the IPsec ESP header in IP packets. 26.11.10. Additional resources Configuring a VPN with IPsec in Red Hat Enterprise Linux (RHEL) 9 Installing Butane About the OVN-Kubernetes Container Network Interface (CNI) network plugin Changing the MTU for the cluster network Network [operator.openshift.io/v1] API 26.12. Configure an external gateway on the default network As a cluster administrator, you can configure an external gateway on the default network. This feature offers the following benefits: Granular control over egress traffic on a per-namespace basis Flexible configuration of static and dynamic external gateway IP addresses Support for both IPv4 and IPv6 address families 26.12.1. Prerequisites Your cluster uses the OVN-Kubernetes network plugin. Your infrastructure is configured to route traffic from the secondary external gateway. 26.12.2. How OpenShift Container Platform determines the external gateway IP address You configure a secondary external gateway with the AdminPolicyBasedExternalRoute custom resource (CR) from the k8s.ovn.org API group. The CR supports static and dynamic approaches to specifying an external gateway's IP address. Each namespace that a AdminPolicyBasedExternalRoute CR targets cannot be selected by any other AdminPolicyBasedExternalRoute CR. A namespace cannot have concurrent secondary external gateways. Changes to policies are isolated in the controller. If a policy fails to apply, changes to other policies do not trigger a retry of other policies. Policies are only re-evaluated, applying any differences that might have occurred by the change, when updates to the policy itself or related objects to the policy such as target namespaces, pod gateways, or namespaces hosting them from dynamic hops are made. Static assignment You specify an IP address directly. Dynamic assignment You specify an IP address indirectly, with namespace and pod selectors, and an optional network attachment definition. If the name of a network attachment definition is provided, the external gateway IP address of the network attachment is used. If the name of a network attachment definition is not provided, the external gateway IP address for the pod itself is used. However, this approach works only if the pod is configured with hostNetwork set to true . 26.12.3. 
AdminPolicyBasedExternalRoute object configuration You can define an AdminPolicyBasedExternalRoute object, which is cluster scoped, with the following properties. A namespace can be selected by only one AdminPolicyBasedExternalRoute CR at a time.
Table 26.18. AdminPolicyBasedExternalRoute object Field Type Description metadata.name string Specifies the name of the AdminPolicyBasedExternalRoute object. spec.from string Specifies a namespace selector that the routing policies apply to. Only namespaceSelector is supported for external traffic. For example: from: namespaceSelector: matchLabels: kubernetes.io/metadata.name: novxlan-externalgw-ecmp-4059 A namespace can only be targeted by one AdminPolicyBasedExternalRoute CR. If a namespace is selected by more than one AdminPolicyBasedExternalRoute CR, a failed error status occurs on the second and subsequent CRs that target the same namespace. To apply updates, you must change the policy itself or related objects to the policy such as target namespaces, pod gateways, or namespaces hosting them from dynamic hops in order for the policy to be re-evaluated and your changes to be applied. spec.nextHops object Specifies the destinations where the packets are forwarded to. Must be either or both of static and dynamic . You must have at least one hop defined.
Table 26.19. nextHops object Field Type Description static array Specifies an array of static IP addresses. dynamic array Specifies an array of pod selectors corresponding to pods configured with a network attachment definition to use as the external gateway target.
Table 26.20. nextHops.static object Field Type Description ip string Specifies either an IPv4 or IPv6 address of the destination hop. bfdEnabled boolean Optional: Specifies whether Bi-Directional Forwarding Detection (BFD) is supported by the network. The default value is false .
Table 26.21. nextHops.dynamic object Field Type Description podSelector string Specifies a set-based ( https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#set-based-requirement ) label selector to filter the pods in the namespace that match this network configuration. namespaceSelector string Specifies a set-based selector to filter the namespaces that the podSelector applies to. You must specify a value for this field. bfdEnabled boolean Optional: Specifies whether Bi-Directional Forwarding Detection (BFD) is supported by the network. The default value is false . networkAttachmentName string Optional: Specifies the name of a network attachment definition. The name must match the list of logical networks associated with the pod. If this field is not specified, the host network of the pod is used. However, the pod must be configured as a host network pod to use the host network.
26.12.3.1. Example secondary external gateway configurations In the following example, the AdminPolicyBasedExternalRoute object configures two static IP addresses as external gateways for pods in namespaces with the kubernetes.io/metadata.name: novxlan-externalgw-ecmp-4059 label. apiVersion: k8s.ovn.org/v1 kind: AdminPolicyBasedExternalRoute metadata: name: default-route-policy spec: from: namespaceSelector: matchLabels: kubernetes.io/metadata.name: novxlan-externalgw-ecmp-4059 nextHops: static: - ip: "172.18.0.8" - ip: "172.18.0.9" In the following example, the AdminPolicyBasedExternalRoute object configures a dynamic external gateway.
The IP addresses used for the external gateway are derived from the additional network attachments associated with each of the selected pods. apiVersion: k8s.ovn.org/v1 kind: AdminPolicyBasedExternalRoute metadata: name: shadow-traffic-policy spec: from: namespaceSelector: matchLabels: externalTraffic: "" nextHops: dynamic: - podSelector: matchLabels: gatewayPod: "" namespaceSelector: matchLabels: shadowTraffic: "" networkAttachmentName: shadow-gateway - podSelector: matchLabels: gigabyteGW: "" namespaceSelector: matchLabels: gatewayNamespace: "" networkAttachmentName: gateway In the following example, the AdminPolicyBasedExternalRoute object configures both static and dynamic external gateways. apiVersion: k8s.ovn.org/v1 kind: AdminPolicyBasedExternalRoute metadata: name: multi-hop-policy spec: from: namespaceSelector: matchLabels: trafficType: "egress" nextHops: static: - ip: "172.18.0.8" - ip: "172.18.0.9" dynamic: - podSelector: matchLabels: gatewayPod: "" namespaceSelector: matchLabels: egressTraffic: "" networkAttachmentName: gigabyte 26.12.4. Configure a secondary external gateway You can configure an external gateway on the default network for a namespace in your cluster. Prerequisites You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with cluster-admin privileges. Procedure Create a YAML file that contains an AdminPolicyBasedExternalRoute object. To create an admin policy based external route, enter the following command: USD oc create -f <file>.yaml where: <file> Specifies the name of the YAML file that you created in the step. Example output adminpolicybasedexternalroute.k8s.ovn.org/default-route-policy created To confirm that the admin policy based external route was created, enter the following command: USD oc describe apbexternalroute <name> | tail -n 6 where: <name> Specifies the name of the AdminPolicyBasedExternalRoute object. Example output Status: Last Transition Time: 2023-04-24T15:09:01Z Messages: Configured external gateway IPs: 172.18.0.8 Status: Success Events: <none> 26.12.5. Additional resources For more information about additional network attachments, see Understanding multiple networks 26.13. Configuring an egress firewall for a project As a cluster administrator, you can create an egress firewall for a project that restricts egress traffic leaving your OpenShift Container Platform cluster. 26.13.1. How an egress firewall works in a project As a cluster administrator, you can use an egress firewall to limit the external hosts that some or all pods can access from within the cluster. An egress firewall supports the following scenarios: A pod can only connect to internal hosts and cannot initiate connections to the public internet. A pod can only connect to the public internet and cannot initiate connections to internal hosts that are outside the OpenShift Container Platform cluster. A pod cannot reach specified internal subnets or hosts outside the OpenShift Container Platform cluster. A pod can connect to only specific external hosts. For example, you can allow one project access to a specified IP range but deny the same access to a different project. Or you can restrict application developers from updating from Python pip mirrors, and force updates to come only from approved sources. Note Egress firewall does not apply to the host network namespace. Pods with host networking enabled are unaffected by egress firewall rules. You configure an egress firewall policy by creating an EgressFirewall custom resource (CR) object. 
The egress firewall matches network traffic that meets any of the following criteria: An IP address range in CIDR format A DNS name that resolves to an IP address A port number A protocol that is one of the following protocols: TCP, UDP, and SCTP Important If your egress firewall includes a deny rule for 0.0.0.0/0 , access to your OpenShift Container Platform API servers is blocked. You must either add allow rules for each IP address or use the nodeSelector type allow rule in your egress policy rules to connect to API servers. The following example illustrates the order of the egress firewall rules necessary to ensure API server access: apiVersion: k8s.ovn.org/v1 kind: EgressFirewall metadata: name: default namespace: <namespace> 1 spec: egress: - to: cidrSelector: <api_server_address_range> 2 type: Allow # ... - to: cidrSelector: 0.0.0.0/0 3 type: Deny 1 The namespace for the egress firewall. 2 The IP address range that includes your OpenShift Container Platform API servers. 3 A global deny rule prevents access to the OpenShift Container Platform API servers. To find the IP address for your API servers, run oc get ep kubernetes -n default . For more information, see BZ#1988324 . Warning Egress firewall rules do not apply to traffic that goes through routers. Any user with permission to create a Route CR object can bypass egress firewall policy rules by creating a route that points to a forbidden destination. 26.13.1.1. Limitations of an egress firewall An egress firewall has the following limitations: No project can have more than one EgressFirewall object. A maximum of one EgressFirewall object with a maximum of 8,000 rules can be defined per project. If you are using the OVN-Kubernetes network plugin with shared gateway mode in Red Hat OpenShift Networking, return ingress replies are affected by egress firewall rules. If the egress firewall rules drop the ingress reply destination IP, the traffic is dropped. Violating any of these restrictions results in a broken egress firewall for the project. Consequently, all external network traffic is dropped, which can cause security risks for your organization. An Egress Firewall resource can be created in the kube-node-lease , kube-public , kube-system , openshift and openshift- projects. 26.13.1.2. Matching order for egress firewall policy rules The egress firewall policy rules are evaluated in the order that they are defined, from first to last. The first rule that matches an egress connection from a pod applies. Any subsequent rules are ignored for that connection. 26.13.1.3. How Domain Name Server (DNS) resolution works If you use DNS names in any of your egress firewall policy rules, proper resolution of the domain names is subject to the following restrictions: Domain name updates are polled based on a time-to-live (TTL) duration. By default, the duration is 30 minutes. When the egress firewall controller queries the local name servers for a domain name, if the response includes a TTL and the TTL is less than 30 minutes, the controller sets the duration for that DNS name to the returned value. Each DNS name is queried after the TTL for the DNS record expires. The pod must resolve the domain from the same local name servers when necessary. Otherwise the IP addresses for the domain known by the egress firewall controller and the pod can be different. If the IP addresses for a hostname differ, the egress firewall might not be enforced consistently. 
Because the egress firewall controller and pods asynchronously poll the same local name server, the pod might obtain the updated IP address before the egress controller does, which causes a race condition. Due to this current limitation, domain name usage in EgressFirewall objects is only recommended for domains with infrequent IP address changes. Note Using DNS names in your egress firewall policy does not affect local DNS resolution through CoreDNS. However, if your egress firewall policy uses domain names, and an external DNS server handles DNS resolution for an affected pod, you must include egress firewall rules that permit access to the IP addresses of your DNS server. 26.13.2. EgressFirewall custom resource (CR) object You can define one or more rules for an egress firewall. A rule is either an Allow rule or a Deny rule, with a specification for the traffic that the rule applies to. The following YAML describes an EgressFirewall CR object: EgressFirewall object apiVersion: k8s.ovn.org/v1 kind: EgressFirewall metadata: name: <name> 1 spec: egress: 2 ... 1 The name for the object must be default . 2 A collection of one or more egress network policy rules as described in the following section. 26.13.2.1. EgressFirewall rules The following YAML describes an egress firewall rule object. The user can select either an IP address range in CIDR format, a domain name, or use the nodeSelector to allow or deny egress traffic. The egress stanza expects an array of one or more objects. Egress policy rule stanza egress: - type: <type> 1 to: 2 cidrSelector: <cidr> 3 dnsName: <dns_name> 4 nodeSelector: <label_name>: <label_value> 5 ports: 6 ... 1 The type of rule. The value must be either Allow or Deny . 2 A stanza describing an egress traffic match rule that specifies the cidrSelector field or the dnsName field. You cannot use both fields in the same rule. 3 An IP address range in CIDR format. 4 A DNS domain name. 5 Labels are key/value pairs that the user defines. Labels are attached to objects, such as pods. The nodeSelector allows for one or more node labels to be selected and attached to pods. 6 Optional: A stanza describing a collection of network ports and protocols for the rule. Ports stanza ports: - port: <port> 1 protocol: <protocol> 2 1 A network port, such as 80 or 443 . If you specify a value for this field, you must also specify a value for protocol . 2 A network protocol. The value must be either TCP , UDP , or SCTP . 26.13.2.2. Example EgressFirewall CR objects The following example defines several egress firewall policy rules: apiVersion: k8s.ovn.org/v1 kind: EgressFirewall metadata: name: default spec: egress: 1 - type: Allow to: cidrSelector: 1.2.3.0/24 - type: Deny to: cidrSelector: 0.0.0.0/0 1 A collection of egress firewall policy rule objects. The following example defines a policy rule that denies traffic to the host at the 172.16.1.1/32 IP address, if the traffic is using either the TCP protocol and destination port 80 or any protocol and destination port 443 . apiVersion: k8s.ovn.org/v1 kind: EgressFirewall metadata: name: default spec: egress: - type: Deny to: cidrSelector: 172.16.1.1/32 ports: - port: 80 protocol: TCP - port: 443 26.13.2.3. Example nodeSelector for EgressFirewall As a cluster administrator, you can allow or deny egress traffic to nodes in your cluster by specifying a label using nodeSelector . Labels can be applied to one or more nodes. 
The following is an example with the region=east label: apiVersion: k8s.ovn.org/v1 kind: EgressFirewall metadata: name: default spec: egress: - to: nodeSelector: matchLabels: region: east type: Allow Tip Instead of adding manual rules per node IP address, use node selectors to create a label that allows pods behind an egress firewall to access host network pods. 26.13.3. Creating an egress firewall policy object As a cluster administrator, you can create an egress firewall policy object for a project. Important If the project already has an EgressFirewall object defined, you must edit the existing policy to make changes to the egress firewall rules. Prerequisites A cluster that uses the OVN-Kubernetes network plugin. Install the OpenShift CLI ( oc ). You must log in to the cluster as a cluster administrator. Procedure Create a policy rule: Create a <policy_name>.yaml file where <policy_name> describes the egress policy rules. In the file you created, define an egress policy object. Enter the following command to create the policy object. Replace <policy_name> with the name of the policy and <project> with the project that the rule applies to. USD oc create -f <policy_name>.yaml -n <project> In the following example, a new EgressFirewall object is created in a project named project1 : USD oc create -f default.yaml -n project1 Example output egressfirewall.k8s.ovn.org/v1 created Optional: Save the <policy_name>.yaml file so that you can make changes later. 26.14. Viewing an egress firewall for a project As a cluster administrator, you can list the names of any existing egress firewalls and view the traffic rules for a specific egress firewall. Note OpenShift SDN CNI is deprecated as of OpenShift Container Platform 4.14. As of OpenShift Container Platform 4.15, the network plugin is not an option for new installations. In a subsequent future release, the OpenShift SDN network plugin is planned to be removed and no longer supported. Red Hat will provide bug fixes and support for this feature until it is removed, but this feature will no longer receive enhancements. As an alternative to OpenShift SDN CNI, you can use OVN Kubernetes CNI instead. 26.14.1. Viewing an EgressFirewall object You can view an EgressFirewall object in your cluster. Prerequisites A cluster using the OVN-Kubernetes network plugin. Install the OpenShift Command-line Interface (CLI), commonly known as oc . You must log in to the cluster. Procedure Optional: To view the names of the EgressFirewall objects defined in your cluster, enter the following command: USD oc get egressfirewall --all-namespaces To inspect a policy, enter the following command. Replace <policy_name> with the name of the policy to inspect. USD oc describe egressfirewall <policy_name> Example output Name: default Namespace: project1 Created: 20 minutes ago Labels: <none> Annotations: <none> Rule: Allow to 1.2.3.0/24 Rule: Allow to www.example.com Rule: Deny to 0.0.0.0/0 26.15. Editing an egress firewall for a project As a cluster administrator, you can modify network traffic rules for an existing egress firewall. 26.15.1. Editing an EgressFirewall object As a cluster administrator, you can update the egress firewall for a project. Prerequisites A cluster using the OVN-Kubernetes network plugin. Install the OpenShift CLI ( oc ). You must log in to the cluster as a cluster administrator. Procedure Find the name of the EgressFirewall object for the project. Replace <project> with the name of the project. 
USD oc get -n <project> egressfirewall Optional: If you did not save a copy of the EgressFirewall object when you created the egress network firewall, enter the following command to create a copy. USD oc get -n <project> egressfirewall <name> -o yaml > <filename>.yaml Replace <project> with the name of the project. Replace <name> with the name of the object. Replace <filename> with the name of the file to save the YAML to. After making changes to the policy rules, enter the following command to replace the EgressFirewall object. Replace <filename> with the name of the file containing the updated EgressFirewall object. USD oc replace -f <filename>.yaml 26.16. Removing an egress firewall from a project As a cluster administrator, you can remove an egress firewall from a project to remove all restrictions on network traffic from the project that leaves the OpenShift Container Platform cluster. 26.16.1. Removing an EgressFirewall object As a cluster administrator, you can remove an egress firewall from a project. Prerequisites A cluster using the OVN-Kubernetes network plugin. Install the OpenShift CLI ( oc ). You must log in to the cluster as a cluster administrator. Procedure Find the name of the EgressFirewall object for the project. Replace <project> with the name of the project. USD oc get -n <project> egressfirewall Enter the following command to delete the EgressFirewall object. Replace <project> with the name of the project and <name> with the name of the object. USD oc delete -n <project> egressfirewall <name> 26.17. Configuring an egress IP address As a cluster administrator, you can configure the OVN-Kubernetes Container Network Interface (CNI) network plugin to assign one or more egress IP addresses to a namespace, or to specific pods in a namespace. Important In an installer-provisioned infrastructure cluster, do not assign egress IP addresses to the infrastructure node that already hosts the ingress VIP. For more information, see the Red Hat Knowledgebase solution POD from the egress IP enabled namespace cannot access OCP route in an IPI cluster when the egress IP is assigned to the infra node that already hosts the ingress VIP . 26.17.1. Egress IP address architectural design and implementation The OpenShift Container Platform egress IP address functionality allows you to ensure that the traffic from one or more pods in one or more namespaces has a consistent source IP address for services outside the cluster network. For example, you might have a pod that periodically queries a database that is hosted on a server outside of your cluster. To enforce access requirements for the server, a packet filtering device is configured to allow traffic only from specific IP addresses. To ensure that you can reliably allow access to the server from only that specific pod, you can configure a specific egress IP address for the pod that makes the requests to the server. An egress IP address assigned to a namespace is different from an egress router, which is used to send traffic to specific destinations. In some cluster configurations, application pods and ingress router pods run on the same node. If you configure an egress IP address for an application project in this scenario, the IP address is not used when you send a request to a route from the application project. Important Egress IP addresses must not be configured in any Linux network configuration files, such as ifcfg-eth0 . 26.17.1.1. 
Platform support Support for the egress IP address functionality on various platforms is summarized in the following table: Platform Supported Bare metal Yes VMware vSphere Yes Red Hat OpenStack Platform (RHOSP) Yes Amazon Web Services (AWS) Yes Google Cloud Platform (GCP) Yes Microsoft Azure Yes IBM Z(R) and IBM(R) LinuxONE Yes IBM Z(R) and IBM(R) LinuxONE for Red Hat Enterprise Linux (RHEL) KVM Yes IBM Power(R) Yes Nutanix Yes Important The assignment of egress IP addresses to control plane nodes with the EgressIP feature is not supported on a cluster provisioned on Amazon Web Services (AWS). ( BZ#2039656 ). 26.17.1.2. Public cloud platform considerations For clusters provisioned on public cloud infrastructure, there is a constraint on the absolute number of assignable IP addresses per node. The maximum number of assignable IP addresses per node, or the IP capacity , can be described in the following formula: IP capacity = public cloud default capacity - sum(current IP assignments) While the Egress IPs capability manages the IP address capacity per node, it is important to plan for this constraint in your deployments. For example, for a cluster installed on bare-metal infrastructure with 8 nodes you can configure 150 egress IP addresses. However, if a public cloud provider limits IP address capacity to 10 IP addresses per node, the total number of assignable IP addresses is only 80. To achieve the same IP address capacity in this example cloud provider, you would need to allocate 7 additional nodes. To confirm the IP capacity and subnets for any node in your public cloud environment, you can enter the oc get node <node_name> -o yaml command. The cloud.network.openshift.io/egress-ipconfig annotation includes capacity and subnet information for the node. The annotation value is an array with a single object with fields that provide the following information for the primary network interface: interface : Specifies the interface ID on AWS and Azure and the interface name on GCP. ifaddr : Specifies the subnet mask for one or both IP address families. capacity : Specifies the IP address capacity for the node. On AWS, the IP address capacity is provided per IP address family. On Azure and GCP, the IP address capacity includes both IPv4 and IPv6 addresses. Automatic attachment and detachment of egress IP addresses for traffic between nodes are available. This allows for traffic from many pods in namespaces to have a consistent source IP address to locations outside of the cluster. This also supports OpenShift SDN and OVN-Kubernetes, which is the default networking plugin in Red Hat OpenShift Networking in OpenShift Container Platform 4.15. Note The RHOSP egress IP address feature creates a Neutron reservation port called egressip-<IP address> . Using the same RHOSP user as the one used for the OpenShift Container Platform cluster installation, you can assign a floating IP address to this reservation port to have a predictable SNAT address for egress traffic. When an egress IP address on an RHOSP network is moved from one node to another, because of a node failover, for example, the Neutron reservation port is removed and recreated. This means that the floating IP association is lost and you need to manually reassign the floating IP address to the new reservation port. Note When an RHOSP cluster administrator assigns a floating IP to the reservation port, OpenShift Container Platform cannot delete the reservation port. 
The CloudPrivateIPConfig object cannot perform delete and move operations until an RHOSP cluster administrator unassigns the floating IP from the reservation port. The following examples illustrate the annotation from nodes on several public cloud providers. The annotations are indented for readability. Example cloud.network.openshift.io/egress-ipconfig annotation on AWS cloud.network.openshift.io/egress-ipconfig: [ { "interface":"eni-078d267045138e436", "ifaddr":{"ipv4":"10.0.128.0/18"}, "capacity":{"ipv4":14,"ipv6":15} } ] Example cloud.network.openshift.io/egress-ipconfig annotation on GCP cloud.network.openshift.io/egress-ipconfig: [ { "interface":"nic0", "ifaddr":{"ipv4":"10.0.128.0/18"}, "capacity":{"ip":14} } ] The following sections describe the IP address capacity for supported public cloud environments for use in your capacity calculation. 26.17.1.2.1. Amazon Web Services (AWS) IP address capacity limits On AWS, constraints on IP address assignments depend on the instance type configured. For more information, see IP addresses per network interface per instance type 26.17.1.2.2. Google Cloud Platform (GCP) IP address capacity limits On GCP, the networking model implements additional node IP addresses through IP address aliasing, rather than IP address assignments. However, IP address capacity maps directly to IP aliasing capacity. The following capacity limits exist for IP aliasing assignment: Per node, the maximum number of IP aliases, both IPv4 and IPv6, is 100. Per VPC, the maximum number of IP aliases is unspecified, but OpenShift Container Platform scalability testing reveals the maximum to be approximately 15,000. For more information, see Per instance quotas and Alias IP ranges overview . 26.17.1.2.3. Microsoft Azure IP address capacity limits On Azure, the following capacity limits exist for IP address assignment: Per NIC, the maximum number of assignable IP addresses, for both IPv4 and IPv6, is 256. Per virtual network, the maximum number of assigned IP addresses cannot exceed 65,536. For more information, see Networking limits . 26.17.1.3. Considerations for using an egress IP on additional network interfaces In OpenShift Container Platform, egress IPs provide administrators a way to control network traffic. Egress IPs can be used with the br-ex , or primary, network interface, which is a Linux bridge interface associated with Open vSwitch, or they can be used with additional network interfaces. You can inspect your network interface type by running the following command: USD ip -details link show The primary network interface is assigned a node IP address which also contains a subnet mask. Information for this node IP address can be retrieved from the Kubernetes node object for each node within your cluster by inspecting the k8s.ovn.org/node-primary-ifaddr annotation. In an IPv4 cluster, this annotation is similar to the following example: "k8s.ovn.org/node-primary-ifaddr: {"ipv4":"192.168.111.23/24"}" . If the egress IP is not within the subnet of the primary network interface subnet, you can use an egress IP on another Linux network interface that is not of the primary network interface type. By doing so, OpenShift Container Platform administrators are provided with a greater level of control over networking aspects such as routing, addressing, segmentation, and security policies. This feature provides users with the option to route workload traffic over specific network interfaces for purposes such as traffic segmentation or meeting specialized requirements. 
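If you want to check this annotation for a particular node, one quick way is to filter the node object for it. This is a sketch; <node_name> is a placeholder for an actual node name in your cluster.
USD oc get node <node_name> -o yaml | grep node-primary-ifaddr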
If the egress IP is not within the subnet of the primary network interface, then another network interface, if one is present on the node, might be selected for egress traffic. You can determine which other network interfaces might support egress IPs by inspecting the k8s.ovn.org/host-cidrs Kubernetes node annotation. This annotation contains the addresses and subnet mask found for the primary network interface. It also contains additional network interface addresses and subnet mask information. These addresses and subnet masks are assigned to network interfaces that use the longest prefix match routing mechanism to determine which network interface supports the egress IP. Note OVN-Kubernetes provides a mechanism to control and direct outbound network traffic from specific namespaces and pods. This ensures that it exits the cluster through a particular network interface and with a specific egress IP address. Requirements for assigning an egress IP to a network interface that is not the primary network interface For users who want an egress IP and traffic to be routed over a particular interface that is not the primary network interface, the following conditions must be met: OpenShift Container Platform is installed on a bare metal cluster. This feature is disabled within cloud or hypervisor environments. Your OpenShift Container Platform pods are not configured as host-networked. If a network interface is removed or if the IP address and subnet mask which allows the egress IP to be hosted on the interface is removed, then the egress IP is reconfigured. Consequently, it could be assigned to another node and interface. IP forwarding must be enabled for the network interface. To enable IP forwarding, you can use the oc edit network.operator command and edit the object like the following example: # ... spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 defaultNetwork: ovnKubernetesConfig: gatewayConfig: ipForwarding: Global # ... 26.17.1.4. Assignment of egress IPs to pods To assign one or more egress IPs to a namespace or specific pods in a namespace, the following conditions must be satisfied: At least one node in your cluster must have the k8s.ovn.org/egress-assignable: "" label. An EgressIP object exists that defines one or more egress IP addresses to use as the source IP address for traffic leaving the cluster from pods in a namespace. Important If you create EgressIP objects prior to labeling any nodes in your cluster for egress IP assignment, OpenShift Container Platform might assign every egress IP address to the first node with the k8s.ovn.org/egress-assignable: "" label. To ensure that egress IP addresses are widely distributed across nodes in the cluster, always apply the label to the nodes that you intend to host the egress IP addresses before creating any EgressIP objects. 26.17.1.5. Assignment of egress IPs to nodes When creating an EgressIP object, the following conditions apply to nodes that are labeled with the k8s.ovn.org/egress-assignable: "" label: An egress IP address is never assigned to more than one node at a time. An egress IP address is equally balanced between available nodes that can host the egress IP address. If the spec.EgressIPs array in an EgressIP object specifies more than one IP address, the following conditions apply: No node will ever host more than one of the specified IP addresses. Traffic is balanced roughly equally between the specified IP addresses for a given namespace.
If a node becomes unavailable, any egress IP addresses assigned to it are automatically reassigned, subject to the previously described conditions. When a pod matches the selector for multiple EgressIP objects, there is no guarantee which of the egress IP addresses that are specified in the EgressIP objects is assigned as the egress IP address for the pod. Additionally, if an EgressIP object specifies multiple egress IP addresses, there is no guarantee which of the egress IP addresses might be used. For example, if a pod matches a selector for an EgressIP object with two egress IP addresses, 10.10.20.1 and 10.10.20.2 , either might be used for each TCP connection or UDP conversation. 26.17.1.6. Architectural diagram of an egress IP address configuration The following diagram depicts an egress IP address configuration. The diagram describes four pods in two different namespaces running on three nodes in a cluster. The nodes are assigned IP addresses from the 192.168.126.0/18 CIDR block on the host network. Both Node 1 and Node 3 are labeled with k8s.ovn.org/egress-assignable: "" and thus available for the assignment of egress IP addresses. The dashed lines in the diagram depict the traffic flow from pod1, pod2, and pod3 traveling through the pod network to egress the cluster from Node 1 and Node 3. When an external service receives traffic from any of the pods selected by the example EgressIP object, the source IP address is either 192.168.126.10 or 192.168.126.102 . The traffic is balanced roughly equally between these two nodes. The following resources from the diagram are illustrated in detail: Namespace objects The namespaces are defined in the following manifest: Namespace objects apiVersion: v1 kind: Namespace metadata: name: namespace1 labels: env: prod --- apiVersion: v1 kind: Namespace metadata: name: namespace2 labels: env: prod EgressIP object The following EgressIP object describes a configuration that selects all pods in any namespace with the env label set to prod . The egress IP addresses for the selected pods are 192.168.126.10 and 192.168.126.102 . EgressIP object apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: name: egressips-prod spec: egressIPs: - 192.168.126.10 - 192.168.126.102 namespaceSelector: matchLabels: env: prod status: items: - node: node1 egressIP: 192.168.126.10 - node: node3 egressIP: 192.168.126.102 For the configuration in the example, OpenShift Container Platform assigns both egress IP addresses to the available nodes. The status field reflects whether and where the egress IP addresses are assigned. 26.17.2. EgressIP object The following YAML describes the API for the EgressIP object. The scope of the object is cluster-wide; it is not created in a namespace. apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: name: <name> 1 spec: egressIPs: 2 - <ip_address> namespaceSelector: 3 ... podSelector: 4 ... 1 The name for the EgressIPs object. 2 An array of one or more IP addresses. 3 One or more selectors for the namespaces to associate the egress IP addresses with. 4 Optional: One or more selectors for pods in the specified namespaces to associate egress IP addresses with. Applying these selectors allows for the selection of a subset of pods within a namespace. The following YAML describes the stanza for the namespace selector: Namespace selector stanza namespaceSelector: 1 matchLabels: <label_name>: <label_value> 1 One or more matching rules for namespaces. If more than one match rule is provided, all matching namespaces are selected. 
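As a quick check before you create the EgressIP object, you can preview which namespaces a matchLabels selector would pick up by using the same label-selector syntax with the CLI. The env=prod value below is only an illustration.
$ oc get namespaces -l env=prod          # namespaces selected by matchLabels: env: prod
$ oc get namespaces -l 'env in (prod)'   # equivalent set-based selector form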
The following YAML describes the optional stanza for the pod selector: Pod selector stanza podSelector: 1 matchLabels: <label_name>: <label_value> 1 Optional: One or more matching rules for pods in the namespaces that match the specified namespaceSelector rules. If specified, only pods that match are selected. Others pods in the namespace are not selected. In the following example, the EgressIP object associates the 192.168.126.11 and 192.168.126.102 egress IP addresses with pods that have the app label set to web and are in the namespaces that have the env label set to prod : Example EgressIP object apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: name: egress-group1 spec: egressIPs: - 192.168.126.11 - 192.168.126.102 podSelector: matchLabels: app: web namespaceSelector: matchLabels: env: prod In the following example, the EgressIP object associates the 192.168.127.30 and 192.168.127.40 egress IP addresses with any pods that do not have the environment label set to development : Example EgressIP object apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: name: egress-group2 spec: egressIPs: - 192.168.127.30 - 192.168.127.40 namespaceSelector: matchExpressions: - key: environment operator: NotIn values: - development 26.17.3. The egressIPConfig object As a feature of egress IP, the reachabilityTotalTimeoutSeconds parameter configures the EgressIP node reachability check total timeout in seconds. If the EgressIP node cannot be reached within this timeout, the node is declared down. You can set a value for the reachabilityTotalTimeoutSeconds in the configuration file for the egressIPConfig object. Setting a large value might cause the EgressIP implementation to react slowly to node changes. The implementation reacts slowly for EgressIP nodes that have an issue and are unreachable. If you omit the reachabilityTotalTimeoutSeconds parameter from the egressIPConfig object, the platform chooses a reasonable default value, which is subject to change over time. The current default is 1 second. A value of 0 disables the reachability check for the EgressIP node. The following egressIPConfig object describes changing the reachabilityTotalTimeoutSeconds from the default 1 second probes to 5 second probes: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 defaultNetwork: ovnKubernetesConfig: egressIPConfig: 1 reachabilityTotalTimeoutSeconds: 5 2 gatewayConfig: routingViaHost: false genevePort: 6081 1 The egressIPConfig holds the configurations for the options of the EgressIP object. By changing these configurations, you can extend the EgressIP object. 2 The value for reachabilityTotalTimeoutSeconds accepts integer values from 0 to 60 . A value of 0 disables the reachability check of the egressIP node. Setting a value from 1 to 60 corresponds to the timeout in seconds for a probe to send the reachability check to the node. 26.17.4. Labeling a node to host egress IP addresses You can apply the k8s.ovn.org/egress-assignable="" label to a node in your cluster so that OpenShift Container Platform can assign one or more egress IP addresses to the node. Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster as a cluster administrator. Procedure To label a node so that it can host one or more egress IP addresses, enter the following command: USD oc label nodes <node_name> k8s.ovn.org/egress-assignable="" 1 1 The name of the node to label. 
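For example, assuming two placeholder worker nodes named worker-0 and worker-1, you could label both nodes in a single command and then list the nodes that currently carry the label:
$ oc label nodes worker-0 worker-1 k8s.ovn.org/egress-assignable=""
$ oc get nodes -l k8s.ovn.org/egress-assignable   # nodes that are eligible to host egress IP addresses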
Tip You can alternatively apply the following YAML to add the label to a node: apiVersion: v1 kind: Node metadata: labels: k8s.ovn.org/egress-assignable: "" name: <node_name> 26.17.5. steps Assigning egress IPs 26.17.6. Additional resources LabelSelector meta/v1 LabelSelectorRequirement meta/v1 26.18. Assigning an egress IP address As a cluster administrator, you can assign an egress IP address for traffic leaving the cluster from a namespace or from specific pods in a namespace. 26.18.1. Assigning an egress IP address to a namespace You can assign one or more egress IP addresses to a namespace or to specific pods in a namespace. Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster as a cluster administrator. Configure at least one node to host an egress IP address. Procedure Create an EgressIP object: Create a <egressips_name>.yaml file where <egressips_name> is the name of the object. In the file that you created, define an EgressIP object, as in the following example: apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: name: egress-project1 spec: egressIPs: - 192.168.127.10 - 192.168.127.11 namespaceSelector: matchLabels: env: qa To create the object, enter the following command. USD oc apply -f <egressips_name>.yaml 1 1 Replace <egressips_name> with the name of the object. Example output egressips.k8s.ovn.org/<egressips_name> created Optional: Store the <egressips_name>.yaml file so that you can make changes later. Add labels to the namespace that requires egress IP addresses. To add a label to the namespace of an EgressIP object defined in step 1, run the following command: USD oc label ns <namespace> env=qa 1 1 Replace <namespace> with the namespace that requires egress IP addresses. Verification To show all egress IPs that are in use in your cluster, enter the following command: USD oc get egressip -o yaml Note The command oc get egressip only returns one egress IP address regardless of how many are configured. This is not a bug and is a limitation of Kubernetes. As a workaround, you can pass in the -o yaml or -o json flags to return all egress IPs addresses in use. Example output # ... spec: egressIPs: - 192.168.127.10 - 192.168.127.11 # ... 26.18.2. Additional resources Configuring egress IP addresses 26.19. Configuring an egress service As a cluster administrator, you can configure egress traffic for pods behind a load balancer service by using an egress service. Important Egress service is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can use the EgressService custom resource (CR) to manage egress traffic in the following ways: Assign a load balancer service IP address as the source IP address for egress traffic for pods behind the load balancer service. Assigning the load balancer IP address as the source IP address in this context is useful to present a single point of egress and ingress. For example, in some scenarios, an external system communicating with an application behind a load balancer service can expect the source and destination IP address for the application to be the same. 
Note When you assign the load balancer service IP address to egress traffic for pods behind the service, OVN-Kubernetes restricts the ingress and egress point to a single node. This limits the load balancing of traffic that MetalLB typically provides. Assign the egress traffic for pods behind a load balancer to a different network than the default node network. This is useful to assign the egress traffic for applications behind a load balancer to a different network than the default network. Typically, the different network is implemented by using a VRF instance associated with a network interface. 26.19.1. Egress service custom resource Define the configuration for an egress service in an EgressService custom resource. The following YAML describes the fields for the configuration of an egress service: apiVersion: k8s.ovn.org/v1 kind: EgressService metadata: name: <egress_service_name> 1 namespace: <namespace> 2 spec: sourceIPBy: <egress_traffic_ip> 3 nodeSelector: 4 matchLabels: node-role.kubernetes.io/<role>: "" network: <egress_traffic_network> 5 1 Specify the name for the egress service. The name of the EgressService resource must match the name of the load-balancer service that you want to modify. 2 Specify the namespace for the egress service. The namespace for the EgressService must match the namespace of the load-balancer service that you want to modify. The egress service is namespace-scoped. 3 Specify the source IP address of egress traffic for pods behind a service. Valid values are LoadBalancerIP or Network . Use the LoadBalancerIP value to assign the LoadBalancer service ingress IP address as the source IP address for egress traffic. Specify Network to assign the network interface IP address as the source IP address for egress traffic. 4 Optional: If you use the LoadBalancerIP value for the sourceIPBy specification, a single node handles the LoadBalancer service traffic. Use the nodeSelector field to limit which node can be assigned this task. When a node is selected to handle the service traffic, OVN-Kubernetes labels the node in the following format: egress-service.k8s.ovn.org/<svc-namespace>-<svc-name>: "" . When the nodeSelector field is not specified, any node can manage the LoadBalancer service traffic. 5 Optional: Specify the routing table for egress traffic. If you do not include the network specification, the egress service uses the default host network. Example egress service specification apiVersion: k8s.ovn.org/v1 kind: EgressService metadata: name: test-egress-service namespace: test-namespace spec: sourceIPBy: "LoadBalancerIP" nodeSelector: matchLabels: vrf: "true" network: "2" 26.19.2. Deploying an egress service You can deploy an egress service to manage egress traffic for pods behind a LoadBalancer service. The following example configures the egress traffic to have the same source IP address as the ingress IP address of the LoadBalancer service. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. You configured MetalLB BGPPeer resources. 
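Before you begin the procedure, you can optionally confirm that the MetalLB prerequisite resources exist. The following check is a sketch that assumes MetalLB is installed in the metallb-system namespace.
$ oc get bgppeers -n metallb-system   # BGP peers that MetalLB uses to advertise the load-balancer IP address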
Procedure Create an IPAddressPool CR with the desired IP for the service: Create a file, such as ip-addr-pool.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: example-pool namespace: metallb-system spec: addresses: - 172.19.0.100/32 Apply the configuration for the IP address pool by running the following command: USD oc apply -f ip-addr-pool.yaml Create Service and EgressService CRs: Create a file, such as service-egress-service.yaml , with content like the following example: apiVersion: v1 kind: Service metadata: name: example-service namespace: example-namespace annotations: metallb.universe.tf/address-pool: example-pool 1 spec: selector: app: example ports: - name: http protocol: TCP port: 8080 targetPort: 8080 type: LoadBalancer --- apiVersion: k8s.ovn.org/v1 kind: EgressService metadata: name: example-service namespace: example-namespace spec: sourceIPBy: "LoadBalancerIP" 2 nodeSelector: 3 matchLabels: node-role.kubernetes.io/worker: "" 1 The LoadBalancer service uses the IP address assigned by MetalLB from the example-pool IP address pool. 2 This example uses the LoadBalancerIP value to assign the ingress IP address of the LoadBalancer service as the source IP address of egress traffic. 3 When you specify the LoadBalancerIP value, a single node handles the LoadBalancer service's traffic. In this example, only nodes with the worker label can be selected to handle the traffic. When a node is selected, OVN-Kubernetes labels the node in the following format egress-service.k8s.ovn.org/<svc-namespace>-<svc-name>: "" . Note If you use the sourceIPBy: "LoadBalancerIP" setting, you must specify the load-balancer node in the BGPAdvertisement custom resource (CR). Apply the configuration for the service and egress service by running the following command: USD oc apply -f service-egress-service.yaml Create a BGPAdvertisement CR to advertise the service: Create a file, such as service-bgp-advertisement.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: example-bgp-adv namespace: metallb-system spec: ipAddressPools: - example-pool nodeSelector: - matchLabels: egress-service.k8s.ovn.org/example-namespace-example-service: "" 1 1 In this example, the EgressService CR configures the source IP address for egress traffic to use the load-balancer service IP address. Therefore, you must specify the load-balancer node for return traffic to use the same return path for the traffic originating from the pod. Verification Verify that you can access the application endpoint of the pods running behind the MetalLB service by running the following command: USD curl <external_ip_address>:<port_number> 1 1 Update the external IP address and port number to suit your application endpoint. If you assigned the LoadBalancer service's ingress IP address as the source IP address for egress traffic, verify this configuration by using tools such as tcpdump to analyze packets received at the external client. Additional resources Exposing a service through a network VRF Example: Network interface with a VRF instance node network configuration policy Managing symmetric routing with MetalLB About virtual routing and forwarding 26.20. Considerations for the use of an egress router pod 26.20.1. About an egress router pod The OpenShift Container Platform egress router pod redirects traffic to a specified remote server from a private source IP address that is not used for any other purpose. 
An egress router pod can send network traffic to servers that are set up to allow access only from specific IP addresses. Note The egress router pod is not intended for every outgoing connection. Creating large numbers of egress router pods can exceed the limits of your network hardware. For example, creating an egress router pod for every project or application could exceed the number of local MAC addresses that the network interface can handle before reverting to filtering MAC addresses in software. Important The egress router image is not compatible with Amazon AWS, Azure Cloud, or any other cloud platform that does not support layer 2 manipulations due to their incompatibility with macvlan traffic. 26.20.1.1. Egress router modes In redirect mode , an egress router pod configures iptables rules to redirect traffic from its own IP address to one or more destination IP addresses. Client pods that need to use the reserved source IP address must be configured to access the service for the egress router rather than connecting directly to the destination IP. You can access the destination service and port from the application pod by using the curl command. For example: USD curl <router_service_IP> <port> Note The egress router CNI plugin supports redirect mode only. This is a difference with the egress router implementation that you can deploy with OpenShift SDN. Unlike the egress router for OpenShift SDN, the egress router CNI plugin does not support HTTP proxy mode or DNS proxy mode. 26.20.1.2. Egress router pod implementation The egress router implementation uses the egress router Container Network Interface (CNI) plugin. The plugin adds a secondary network interface to a pod. An egress router is a pod that has two network interfaces. For example, the pod can have eth0 and net1 network interfaces. The eth0 interface is on the cluster network and the pod continues to use the interface for ordinary cluster-related network traffic. The net1 interface is on a secondary network and has an IP address and gateway for that network. Other pods in the OpenShift Container Platform cluster can access the egress router service and the service enables the pods to access external services. The egress router acts as a bridge between pods and an external system. Traffic that leaves the egress router exits through a node, but the packets have the MAC address of the net1 interface from the egress router pod. When you add an egress router custom resource, the Cluster Network Operator creates the following objects: The network attachment definition for the net1 secondary network interface of the pod. A deployment for the egress router. If you delete an egress router custom resource, the Operator deletes the two objects in the preceding list that are associated with the egress router. 26.20.1.3. Deployment considerations An egress router pod adds an additional IP address and MAC address to the primary network interface of the node. As a result, you might need to configure your hypervisor or cloud provider to allow the additional address. Red Hat OpenStack Platform (RHOSP) If you deploy OpenShift Container Platform on RHOSP, you must allow traffic from the IP and MAC addresses of the egress router pod on your OpenStack environment. 
If you do not allow the traffic, then communication will fail : USD openstack port set --allowed-address \ ip_address=<ip_address>,mac_address=<mac_address> <neutron_port_uuid> VMware vSphere If you are using VMware vSphere, see the VMware documentation for securing vSphere standard switches . View and change VMware vSphere default settings by selecting the host virtual switch from the vSphere Web Client. Specifically, ensure that the following are enabled: MAC Address Changes Forged Transits Promiscuous Mode Operation 26.20.1.4. Failover configuration To avoid downtime, the Cluster Network Operator deploys the egress router pod as a deployment resource. The deployment name is egress-router-cni-deployment . The pod that corresponds to the deployment has a label of app=egress-router-cni . To create a new service for the deployment, use the oc expose deployment/egress-router-cni-deployment --port <port_number> command or create a file like the following example: apiVersion: v1 kind: Service metadata: name: app-egress spec: ports: - name: tcp-8080 protocol: TCP port: 8080 - name: tcp-8443 protocol: TCP port: 8443 - name: udp-80 protocol: UDP port: 80 type: ClusterIP selector: app: egress-router-cni 26.20.2. Additional resources Deploying an egress router in redirection mode 26.21. Deploying an egress router pod in redirect mode As a cluster administrator, you can deploy an egress router pod to redirect traffic to specified destination IP addresses from a reserved source IP address. The egress router implementation uses the egress router Container Network Interface (CNI) plugin. 26.21.1. Egress router custom resource Define the configuration for an egress router pod in an egress router custom resource. The following YAML describes the fields for the configuration of an egress router in redirect mode: apiVersion: network.operator.openshift.io/v1 kind: EgressRouter metadata: name: <egress_router_name> namespace: <namespace> 1 spec: addresses: [ 2 { ip: "<egress_router>", 3 gateway: "<egress_gateway>" 4 } ] mode: Redirect redirect: { redirectRules: [ 5 { destinationIP: "<egress_destination>", port: <egress_router_port>, targetPort: <target_port>, 6 protocol: <network_protocol> 7 }, ... ], fallbackIP: "<egress_destination>" 8 } 1 Optional: The namespace field specifies the namespace to create the egress router in. If you do not specify a value in the file or on the command line, the default namespace is used. 2 The addresses field specifies the IP addresses to configure on the secondary network interface. 3 The ip field specifies the reserved source IP address and netmask from the physical network that the node is on to use with egress router pod. Use CIDR notation to specify the IP address and netmask. 4 The gateway field specifies the IP address of the network gateway. 5 Optional: The redirectRules field specifies a combination of egress destination IP address, egress router port, and protocol. Incoming connections to the egress router on the specified port and protocol are routed to the destination IP address. 6 Optional: The targetPort field specifies the network port on the destination IP address. If this field is not specified, traffic is routed to the same network port that it arrived on. 7 The protocol field supports TCP, UDP, or SCTP. 8 Optional: The fallbackIP field specifies a destination IP address. If you do not specify any redirect rules, the egress router sends all traffic to this fallback IP address. 
If you specify redirect rules, any connections to network ports that are not defined in the rules are sent by the egress router to this fallback IP address. If you do not specify this field, the egress router rejects connections to network ports that are not defined in the rules. Example egress router specification apiVersion: network.operator.openshift.io/v1 kind: EgressRouter metadata: name: egress-router-redirect spec: networkInterface: { macvlan: { mode: "Bridge" } } addresses: [ { ip: "192.168.12.99/24", gateway: "192.168.12.1" } ] mode: Redirect redirect: { redirectRules: [ { destinationIP: "10.0.0.99", port: 80, protocol: UDP }, { destinationIP: "203.0.113.26", port: 8080, targetPort: 80, protocol: TCP }, { destinationIP: "203.0.113.27", port: 8443, targetPort: 443, protocol: TCP } ] } 26.21.2. Deploying an egress router in redirect mode You can deploy an egress router to redirect traffic from its own reserved source IP address to one or more destination IP addresses. After you add an egress router, the client pods that need to use the reserved source IP address must be modified to connect to the egress router rather than connecting directly to the destination IP. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create an egress router definition. To ensure that other pods can find the IP address of the egress router pod, create a service that uses the egress router, as in the following example: apiVersion: v1 kind: Service metadata: name: egress-1 spec: ports: - name: web-app protocol: TCP port: 8080 type: ClusterIP selector: app: egress-router-cni 1 1 Specify the label for the egress router. The value shown is added by the Cluster Network Operator and is not configurable. After you create the service, your pods can connect to the service. The egress router pod redirects traffic to the corresponding port on the destination IP address. The connections originate from the reserved source IP address. Verification To verify that the Cluster Network Operator started the egress router, complete the following procedure: View the network attachment definition that the Operator created for the egress router: USD oc get network-attachment-definition egress-router-cni-nad The name of the network attachment definition is not configurable. Example output NAME AGE egress-router-cni-nad 18m View the deployment for the egress router pod: USD oc get deployment egress-router-cni-deployment The name of the deployment is not configurable. Example output NAME READY UP-TO-DATE AVAILABLE AGE egress-router-cni-deployment 1/1 1 1 18m View the status of the egress router pod: USD oc get pods -l app=egress-router-cni Example output NAME READY STATUS RESTARTS AGE egress-router-cni-deployment-575465c75c-qkq6m 1/1 Running 0 18m View the logs and the routing table for the egress router pod. Get the node name for the egress router pod: USD POD_NODENAME=USD(oc get pod -l app=egress-router-cni -o jsonpath="{.items[0].spec.nodeName}") Enter into a debug session on the target node. This step instantiates a debug pod called <node_name>-debug : USD oc debug node/USDPOD_NODENAME Set /host as the root directory within the debug shell. The debug pod mounts the root file system of the host in /host within the pod. 
By changing the root directory to /host , you can run binaries from the executable paths of the host: # chroot /host From within the chroot environment console, display the egress router logs: # cat /tmp/egress-router-log Example output 2021-04-26T12:27:20Z [debug] Called CNI ADD 2021-04-26T12:27:20Z [debug] Gateway: 192.168.12.1 2021-04-26T12:27:20Z [debug] IP Source Addresses: [192.168.12.99/24] 2021-04-26T12:27:20Z [debug] IP Destinations: [80 UDP 10.0.0.99/30 8080 TCP 203.0.113.26/30 80 8443 TCP 203.0.113.27/30 443] 2021-04-26T12:27:20Z [debug] Created macvlan interface 2021-04-26T12:27:20Z [debug] Renamed macvlan to "net1" 2021-04-26T12:27:20Z [debug] Adding route to gateway 192.168.12.1 on macvlan interface 2021-04-26T12:27:20Z [debug] deleted default route {Ifindex: 3 Dst: <nil> Src: <nil> Gw: 10.128.10.1 Flags: [] Table: 254} 2021-04-26T12:27:20Z [debug] Added new default route with gateway 192.168.12.1 2021-04-26T12:27:20Z [debug] Added iptables rule: iptables -t nat PREROUTING -i eth0 -p UDP --dport 80 -j DNAT --to-destination 10.0.0.99 2021-04-26T12:27:20Z [debug] Added iptables rule: iptables -t nat PREROUTING -i eth0 -p TCP --dport 8080 -j DNAT --to-destination 203.0.113.26:80 2021-04-26T12:27:20Z [debug] Added iptables rule: iptables -t nat PREROUTING -i eth0 -p TCP --dport 8443 -j DNAT --to-destination 203.0.113.27:443 2021-04-26T12:27:20Z [debug] Added iptables rule: iptables -t nat -o net1 -j SNAT --to-source 192.168.12.99 The logging file location and logging level are not configurable when you start the egress router by creating an EgressRouter object as described in this procedure. From within the chroot environment console, get the container ID: # crictl ps --name egress-router-cni-pod | awk '{print USD1}' Example output CONTAINER bac9fae69ddb6 Determine the process ID of the container. In this example, the container ID is bac9fae69ddb6 : # crictl inspect -o yaml bac9fae69ddb6 | grep 'pid:' | awk '{print USD2}' Example output 68857 Enter the network namespace of the container: # nsenter -n -t 68857 Display the routing table: # ip route In the following example output, the net1 network interface is the default route. Traffic for the cluster network uses the eth0 network interface. Traffic for the 192.168.12.0/24 network uses the net1 network interface and originates from the reserved source IP address 192.168.12.99 . The pod routes all other traffic to the gateway at IP address 192.168.12.1 . Routing for the service network is not shown. Example output default via 192.168.12.1 dev net1 10.128.10.0/23 dev eth0 proto kernel scope link src 10.128.10.18 192.168.12.0/24 dev net1 proto kernel scope link src 192.168.12.99 192.168.12.1 dev net1 26.22. Enabling multicast for a project 26.22.1. About multicast With IP multicast, data is broadcast to many IP addresses simultaneously. Important At this time, multicast is best used for low-bandwidth coordination or service discovery and not a high-bandwidth solution. By default, network policies affect all connections in a namespace. However, multicast is unaffected by network policies. If multicast is enabled in the same namespace as your network policies, it is always allowed, even if there is a deny-all network policy. Cluster administrators should consider the implications to the exemption of multicast from network policies before enabling it. Multicast traffic between OpenShift Container Platform pods is disabled by default. If you are using the OVN-Kubernetes network plugin, you can enable multicast on a per-project basis. 
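To see which projects currently have multicast enabled, you can filter namespaces on the k8s.ovn.org/multicast-enabled annotation that is used in the next procedure. This is a sketch that assumes the jq utility is available.
$ oc get namespaces -o json \
    | jq -r '.items[] | select(.metadata.annotations["k8s.ovn.org/multicast-enabled"] == "true") | .metadata.name'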
26.22.2. Enabling multicast between pods You can enable multicast between pods for your project. Prerequisites Install the OpenShift CLI ( oc ). You must log in to the cluster with a user that has the cluster-admin role. Procedure Run the following command to enable multicast for a project. Replace <namespace> with the namespace for the project you want to enable multicast for. USD oc annotate namespace <namespace> \ k8s.ovn.org/multicast-enabled=true Tip You can alternatively apply the following YAML to add the annotation: apiVersion: v1 kind: Namespace metadata: name: <namespace> annotations: k8s.ovn.org/multicast-enabled: "true" Verification To verify that multicast is enabled for a project, complete the following procedure: Change your current project to the project that you enabled multicast for. Replace <project> with the project name. USD oc project <project> Create a pod to act as a multicast receiver: USD cat <<EOF| oc create -f - apiVersion: v1 kind: Pod metadata: name: mlistener labels: app: multicast-verify spec: containers: - name: mlistener image: registry.access.redhat.com/ubi9 command: ["/bin/sh", "-c"] args: ["dnf -y install socat hostname && sleep inf"] ports: - containerPort: 30102 name: mlistener protocol: UDP EOF Create a pod to act as a multicast sender: USD cat <<EOF| oc create -f - apiVersion: v1 kind: Pod metadata: name: msender labels: app: multicast-verify spec: containers: - name: msender image: registry.access.redhat.com/ubi9 command: ["/bin/sh", "-c"] args: ["dnf -y install socat && sleep inf"] EOF In a new terminal window or tab, start the multicast listener. Get the IP address for the Pod: USD POD_IP=USD(oc get pods mlistener -o jsonpath='{.status.podIP}') Start the multicast listener by entering the following command: USD oc exec mlistener -i -t -- \ socat UDP4-RECVFROM:30102,ip-add-membership=224.1.0.1:USDPOD_IP,fork EXEC:hostname Start the multicast transmitter. Get the pod network IP address range: USD CIDR=USD(oc get Network.config.openshift.io cluster \ -o jsonpath='{.status.clusterNetwork[0].cidr}') To send a multicast message, enter the following command: USD oc exec msender -i -t -- \ /bin/bash -c "echo | socat STDIO UDP4-DATAGRAM:224.1.0.1:30102,range=USDCIDR,ip-multicast-ttl=64" If multicast is working, the command returns the following output: mlistener 26.23. Disabling multicast for a project 26.23.1. Disabling multicast between pods You can disable multicast between pods for your project. Prerequisites Install the OpenShift CLI ( oc ). You must log in to the cluster with a user that has the cluster-admin role. Procedure Disable multicast by running the following command: USD oc annotate namespace <namespace> \ 1 k8s.ovn.org/multicast-enabled- 1 The namespace for the project you want to disable multicast for. Tip You can alternatively apply the following YAML to delete the annotation: apiVersion: v1 kind: Namespace metadata: name: <namespace> annotations: k8s.ovn.org/multicast-enabled: null 26.24. Tracking network flows As a cluster administrator, you can collect information about pod network flows from your cluster to assist with the following areas: Monitor ingress and egress traffic on the pod network. Troubleshoot performance issues. Gather data for capacity planning and security audits. When you enable the collection of the network flows, only the metadata about the traffic is collected. 
For example, packet data is not collected, but the protocol, source address, destination address, port numbers, number of bytes, and other packet-level information is collected. The data is collected in one or more of the following record formats: NetFlow sFlow IPFIX When you configure the Cluster Network Operator (CNO) with one or more collector IP addresses and port numbers, the Operator configures Open vSwitch (OVS) on each node to send the network flows records to each collector. You can configure the Operator to send records to more than one type of network flow collector. For example, you can send records to NetFlow collectors and also send records to sFlow collectors. When OVS sends data to the collectors, each type of collector receives identical records. For example, if you configure two NetFlow collectors, OVS on a node sends identical records to the two collectors. If you also configure two sFlow collectors, the two sFlow collectors receive identical records. However, each collector type has a unique record format. Collecting the network flows data and sending the records to collectors affects performance. Nodes process packets at a slower rate. If the performance impact is too great, you can delete the destinations for collectors to disable collecting network flows data and restore performance. Note Enabling network flow collectors might have an impact on the overall performance of the cluster network. 26.24.1. Network object configuration for tracking network flows The fields for configuring network flows collectors in the Cluster Network Operator (CNO) are shown in the following table: Table 26.22. Network flows configuration Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.exportNetworkFlows object One or more of netFlow , sFlow , or ipfix . spec.exportNetworkFlows.netFlow.collectors array A list of IP address and network port pairs for up to 10 collectors. spec.exportNetworkFlows.sFlow.collectors array A list of IP address and network port pairs for up to 10 collectors. spec.exportNetworkFlows.ipfix.collectors array A list of IP address and network port pairs for up to 10 collectors. After applying the following manifest to the CNO, the Operator configures Open vSwitch (OVS) on each node in the cluster to send network flows records to the NetFlow collector that is listening at 192.168.1.99:2056 . Example configuration for tracking network flows apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: exportNetworkFlows: netFlow: collectors: - 192.168.1.99:2056 26.24.2. Adding destinations for network flows collectors As a cluster administrator, you can configure the Cluster Network Operator (CNO) to send network flows metadata about the pod network to a network flows collector. Prerequisites You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with cluster-admin privileges. You have a network flows collector and know the IP address and port that it listens on. Procedure Create a patch file that specifies the network flows collector type and the IP address and port information of the collectors: spec: exportNetworkFlows: netFlow: collectors: - 192.168.1.99:2056 Configure the CNO with the network flows collectors: USD oc patch network.operator cluster --type merge -p "USD(cat <file_name>.yaml)" Example output network.operator.openshift.io/cluster patched Verification Verification is not typically necessary. 
You can run the following command to confirm that Open vSwitch (OVS) on each node is configured to send network flows records to one or more collectors. View the Operator configuration to confirm that the exportNetworkFlows field is configured: USD oc get network.operator cluster -o jsonpath="{.spec.exportNetworkFlows}" Example output {"netFlow":{"collectors":["192.168.1.99:2056"]}} View the network flows configuration in OVS from each node: USD for pod in USD(oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-node -o jsonpath='{[email protected][*]}{.metadata.name}{"\n"}{end}'); do ; echo; echo USDpod; oc -n openshift-ovn-kubernetes exec -c ovnkube-controller USDpod \ -- bash -c 'for type in ipfix sflow netflow ; do ovs-vsctl find USDtype ; done'; done Example output ovnkube-node-xrn4p _uuid : a4d2aaca-5023-4f3d-9400-7275f92611f9 active_timeout : 60 add_id_to_interface : false engine_id : [] engine_type : [] external_ids : {} targets : ["192.168.1.99:2056"] ovnkube-node-z4vq9 _uuid : 61d02fdb-9228-4993-8ff5-b27f01a29bd6 active_timeout : 60 add_id_to_interface : false engine_id : [] engine_type : [] external_ids : {} targets : ["192.168.1.99:2056"]- ... 26.24.3. Deleting all destinations for network flows collectors As a cluster administrator, you can configure the Cluster Network Operator (CNO) to stop sending network flows metadata to a network flows collector. Prerequisites You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with cluster-admin privileges. Procedure Remove all network flows collectors: USD oc patch network.operator cluster --type='json' \ -p='[{"op":"remove", "path":"/spec/exportNetworkFlows"}]' Example output network.operator.openshift.io/cluster patched 26.24.4. Additional resources Network [operator.openshift.io/v1 ] 26.25. Configuring hybrid networking As a cluster administrator, you can configure the Red Hat OpenShift Networking OVN-Kubernetes network plugin to allow Linux and Windows nodes to host Linux and Windows workloads, respectively. 26.25.1. Configuring hybrid networking with OVN-Kubernetes You can configure your cluster to use hybrid networking with the OVN-Kubernetes network plugin. This allows a hybrid cluster that supports different node networking configurations. Note This configuration is necessary to run both Linux and Windows nodes in the same cluster. Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster as a user with cluster-admin privileges. Ensure that the cluster uses the OVN-Kubernetes network plugin. Procedure To configure the OVN-Kubernetes hybrid network overlay, enter the following command: USD oc patch networks.operator.openshift.io cluster --type=merge \ -p '{ "spec":{ "defaultNetwork":{ "ovnKubernetesConfig":{ "hybridOverlayConfig":{ "hybridClusterNetwork":[ { "cidr": "<cidr>", "hostPrefix": <prefix> } ], "hybridOverlayVXLANPort": <overlay_port> } } } } }' where: cidr Specify the CIDR configuration used for nodes on the additional overlay network. This CIDR must not overlap with the cluster network CIDR. hostPrefix Specifies the subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 , then each node is assigned a /23 subnet out of the given cidr , which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic. hybridOverlayVXLANPort Specify a custom VXLAN port for the additional overlay network. 
This is required for running Windows nodes in a cluster installed on vSphere, and must not be configured for any other cloud provider. The custom port can be any open port excluding the default 4789 port. For more information on this requirement, see the Microsoft documentation on Pod-to-pod connectivity between hosts is broken . Note Windows Server Long-Term Servicing Channel (LTSC): Windows Server 2019 is not supported on clusters with a custom hybridOverlayVXLANPort value because this Windows server version does not support selecting a custom VXLAN port. Example output network.operator.openshift.io/cluster patched To confirm that the configuration is active, enter the following command. It can take several minutes for the update to apply. USD oc get network.operator.openshift.io -o jsonpath="{.items[0].spec.defaultNetwork.ovnKubernetesConfig}" 26.25.2. Additional resources Understanding Windows container workloads Enabling Windows container workloads Installing a cluster on AWS with network customizations Installing a cluster on Azure with network customizations
[ "I1006 16:09:50.985852 60651 helper_linux.go:73] Found default gateway interface br-ex 192.168.127.1 I1006 16:09:50.985923 60651 helper_linux.go:73] Found default gateway interface ens4 fe80::5054:ff:febe:bcd4 F1006 16:09:50.985939 60651 ovnkube.go:130] multiple gateway interfaces detected: br-ex ens4", "I0512 19:07:17.589083 108432 helper_linux.go:74] Found default gateway interface br-ex 192.168.123.1 F0512 19:07:17.589141 108432 ovnkube.go:133] failed to get default gateway interface", "oc get all,ep,cm -n openshift-ovn-kubernetes", "Warning: apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+ NAME READY STATUS RESTARTS AGE pod/ovnkube-control-plane-65c6f55656-6d55h 2/2 Running 0 114m pod/ovnkube-control-plane-65c6f55656-fd7vw 2/2 Running 2 (104m ago) 114m pod/ovnkube-node-bcvts 8/8 Running 0 113m pod/ovnkube-node-drgvv 8/8 Running 0 113m pod/ovnkube-node-f2pxt 8/8 Running 0 113m pod/ovnkube-node-frqsb 8/8 Running 0 105m pod/ovnkube-node-lbxkk 8/8 Running 0 105m pod/ovnkube-node-tt7bx 8/8 Running 1 (102m ago) 105m NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/ovn-kubernetes-control-plane ClusterIP None <none> 9108/TCP 114m service/ovn-kubernetes-node ClusterIP None <none> 9103/TCP,9105/TCP 114m NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/ovnkube-node 6 6 6 6 6 beta.kubernetes.io/os=linux 114m NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/ovnkube-control-plane 3/3 3 3 114m NAME DESIRED CURRENT READY AGE replicaset.apps/ovnkube-control-plane-65c6f55656 3 3 3 114m NAME ENDPOINTS AGE endpoints/ovn-kubernetes-control-plane 10.0.0.3:9108,10.0.0.4:9108,10.0.0.5:9108 114m endpoints/ovn-kubernetes-node 10.0.0.3:9105,10.0.0.4:9105,10.0.0.5:9105 + 9 more... 114m NAME DATA AGE configmap/control-plane-status 1 113m configmap/kube-root-ca.crt 1 114m configmap/openshift-service-ca.crt 1 114m configmap/ovn-ca 1 114m configmap/ovnkube-config 1 114m configmap/signer-ca 1 114m", "oc get pods ovnkube-node-bcvts -o jsonpath='{.spec.containers[*].name}' -n openshift-ovn-kubernetes", "ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller", "oc get pods ovnkube-control-plane-65c6f55656-6d55h -o jsonpath='{.spec.containers[*].name}' -n openshift-ovn-kubernetes", "kube-rbac-proxy ovnkube-cluster-manager", "oc get po -n openshift-ovn-kubernetes", "NAME READY STATUS RESTARTS AGE ovnkube-control-plane-8444dff7f9-4lh9k 2/2 Running 0 27m ovnkube-control-plane-8444dff7f9-5rjh9 2/2 Running 0 27m ovnkube-node-55xs2 8/8 Running 0 26m ovnkube-node-7r84r 8/8 Running 0 16m ovnkube-node-bqq8p 8/8 Running 0 17m ovnkube-node-mkj4f 8/8 Running 0 26m ovnkube-node-mlr8k 8/8 Running 0 26m ovnkube-node-wqn2m 8/8 Running 0 16m", "oc get pods -n openshift-ovn-kubernetes -owide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES ovnkube-control-plane-8444dff7f9-4lh9k 2/2 Running 0 27m 10.0.0.3 ci-ln-t487nnb-72292-mdcnq-master-1 <none> <none> ovnkube-control-plane-8444dff7f9-5rjh9 2/2 Running 0 27m 10.0.0.4 ci-ln-t487nnb-72292-mdcnq-master-2 <none> <none> ovnkube-node-55xs2 8/8 Running 0 26m 10.0.0.4 ci-ln-t487nnb-72292-mdcnq-master-2 <none> <none> ovnkube-node-7r84r 8/8 Running 0 17m 10.0.128.3 ci-ln-t487nnb-72292-mdcnq-worker-b-wbz7z <none> <none> ovnkube-node-bqq8p 8/8 Running 0 17m 10.0.128.2 ci-ln-t487nnb-72292-mdcnq-worker-a-lh7ms <none> <none> ovnkube-node-mkj4f 8/8 Running 0 27m 10.0.0.5 ci-ln-t487nnb-72292-mdcnq-master-0 <none> <none> ovnkube-node-mlr8k 8/8 
Running 0 27m 10.0.0.3 ci-ln-t487nnb-72292-mdcnq-master-1 <none> <none> ovnkube-node-wqn2m 8/8 Running 0 17m 10.0.128.4 ci-ln-t487nnb-72292-mdcnq-worker-c-przlm <none> <none>", "oc rsh -c nbdb -n openshift-ovn-kubernetes ovnkube-node-55xs2", "ovn-nbctl show", "oc exec -n openshift-ovn-kubernetes -it ovnkube-node-55xs2 -c northd -- ovn-nbctl lr-list", "45339f4f-7d0b-41d0-b5f9-9fca9ce40ce6 (GR_ci-ln-t487nnb-72292-mdcnq-master-2) 96a0a0f0-e7ed-4fec-8393-3195563de1b8 (ovn_cluster_router)", "oc exec -n openshift-ovn-kubernetes -it ovnkube-node-55xs2 -c nbdb -- ovn-nbctl ls-list", "bdd7dc3d-d848-4a74-b293-cc15128ea614 (ci-ln-t487nnb-72292-mdcnq-master-2) b349292d-ee03-4914-935f-1940b6cb91e5 (ext_ci-ln-t487nnb-72292-mdcnq-master-2) 0aac0754-ea32-4e33-b086-35eeabf0a140 (join) 992509d7-2c3f-4432-88db-c179e43592e5 (transit_switch)", "oc exec -n openshift-ovn-kubernetes -it ovnkube-node-55xs2 -c nbdb -- ovn-nbctl lb-list", "UUID LB PROTO VIP IPs 7c84c673-ed2a-4436-9a1f-9bc5dd181eea Service_default/ tcp 172.30.0.1:443 10.0.0.3:6443,169.254.169.2:6443,10.0.0.5:6443 4d663fd9-ddc8-4271-b333-4c0e279e20bb Service_default/ tcp 172.30.0.1:443 10.0.0.3:6443,10.0.0.4:6443,10.0.0.5:6443 292eb07f-b82f-4962-868a-4f541d250bca Service_openshif tcp 172.30.105.247:443 10.129.0.12:8443 034b5a7f-bb6a-45e9-8e6d-573a82dc5ee3 Service_openshif tcp 172.30.192.38:443 10.0.0.3:10259,10.0.0.4:10259,10.0.0.5:10259 a68bb53e-be84-48df-bd38-bdd82fcd4026 Service_openshif tcp 172.30.161.125:8443 10.129.0.32:8443 6cc21b3d-2c54-4c94-8ff5-d8e017269c2e Service_openshif tcp 172.30.3.144:443 10.129.0.22:8443 37996ffd-7268-4862-a27f-61cd62e09c32 Service_openshif tcp 172.30.181.107:443 10.129.0.18:8443 81d4da3c-f811-411f-ae0c-bc6713d0861d Service_openshif tcp 172.30.228.23:443 10.129.0.29:8443 ac5a4f3b-b6ba-4ceb-82d0-d84f2c41306e Service_openshif tcp 172.30.14.240:9443 10.129.0.36:9443 c88979fb-1ef5-414b-90ac-43b579351ac9 Service_openshif tcp 172.30.231.192:9001 10.128.0.5:9001,10.128.2.5:9001,10.129.0.5:9001,10.129.2.4:9001,10.130.0.3:9001,10.131.0.3:9001 fcb0a3fb-4a77-4230-a84a-be45dce757e8 Service_openshif tcp 172.30.189.92:443 10.130.0.17:8440 67ef3e7b-ceb9-4bf0-8d96-b43bde4c9151 Service_openshif tcp 172.30.67.218:443 10.129.0.9:8443 d0032fba-7d5e-424a-af25-4ab9b5d46e81 Service_openshif tcp 172.30.102.137:2379 10.0.0.3:2379,10.0.0.4:2379,10.0.0.5:2379 tcp 172.30.102.137:9979 10.0.0.3:9979,10.0.0.4:9979,10.0.0.5:9979 7361c537-3eec-4e6c-bc0c-0522d182abd4 Service_openshif tcp 172.30.198.215:9001 10.0.0.3:9001,10.0.0.4:9001,10.0.0.5:9001,10.0.128.2:9001,10.0.128.3:9001,10.0.128.4:9001 0296c437-1259-410b-a6fd-81c310ad0af5 Service_openshif tcp 172.30.198.215:9001 10.0.0.3:9001,169.254.169.2:9001,10.0.0.5:9001,10.0.128.2:9001,10.0.128.3:9001,10.0.128.4:9001 5d5679f5-45b8-479d-9f7c-08b123c688b8 Service_openshif tcp 172.30.38.253:17698 10.128.0.52:17698,10.129.0.84:17698,10.130.0.60:17698 2adcbab4-d1c9-447d-9573-b5dc9f2efbfa Service_openshif tcp 172.30.148.52:443 10.0.0.4:9202,10.0.0.5:9202 tcp 172.30.148.52:444 10.0.0.4:9203,10.0.0.5:9203 tcp 172.30.148.52:445 10.0.0.4:9204,10.0.0.5:9204 tcp 172.30.148.52:446 10.0.0.4:9205,10.0.0.5:9205 2a33a6d7-af1b-4892-87cc-326a380b809b Service_openshif tcp 172.30.67.219:9091 10.129.2.16:9091,10.131.0.16:9091 tcp 172.30.67.219:9092 10.129.2.16:9092,10.131.0.16:9092 tcp 172.30.67.219:9093 10.129.2.16:9093,10.131.0.16:9093 tcp 172.30.67.219:9094 10.129.2.16:9094,10.131.0.16:9094 f56f59d7-231a-4974-99b3-792e2741ec8d Service_openshif tcp 172.30.89.212:443 10.128.0.41:8443,10.129.0.68:8443,10.130.0.44:8443 
08c2c6d7-d217-4b96-b5d8-c80c4e258116 Service_openshif tcp 172.30.102.137:2379 10.0.0.3:2379,169.254.169.2:2379,10.0.0.5:2379 tcp 172.30.102.137:9979 10.0.0.3:9979,169.254.169.2:9979,10.0.0.5:9979 60a69c56-fc6a-4de6-bd88-3f2af5ba5665 Service_openshif tcp 172.30.10.193:443 10.129.0.25:8443 ab1ef694-0826-4671-a22c-565fc2d282ec Service_openshif tcp 172.30.196.123:443 10.128.0.33:8443,10.129.0.64:8443,10.130.0.37:8443 b1fb34d3-0944-4770-9ee3-2683e7a630e2 Service_openshif tcp 172.30.158.93:8443 10.129.0.13:8443 95811c11-56e2-4877-be1e-c78ccb3a82a9 Service_openshif tcp 172.30.46.85:9001 10.130.0.16:9001 4baba1d1-b873-4535-884c-3f6fc07a50fd Service_openshif tcp 172.30.28.87:443 10.129.0.26:8443 6c2e1c90-f0ca-484e-8a8e-40e71442110a Service_openshif udp 172.30.0.10:53 10.128.0.13:5353,10.128.2.6:5353,10.129.0.39:5353,10.129.2.6:5353,10.130.0.11:5353,10.131.0.9:5353", "oc exec -n openshift-ovn-kubernetes -it ovnkube-node-55xs2 -c nbdb ovn-nbctl --help", "oc get po -n openshift-ovn-kubernetes", "NAME READY STATUS RESTARTS AGE ovnkube-control-plane-8444dff7f9-4lh9k 2/2 Running 0 27m ovnkube-control-plane-8444dff7f9-5rjh9 2/2 Running 0 27m ovnkube-node-55xs2 8/8 Running 0 26m ovnkube-node-7r84r 8/8 Running 0 16m ovnkube-node-bqq8p 8/8 Running 0 17m ovnkube-node-mkj4f 8/8 Running 0 26m ovnkube-node-mlr8k 8/8 Running 0 26m ovnkube-node-wqn2m 8/8 Running 0 16m", "oc get pods -n openshift-ovn-kubernetes -owide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES ovnkube-control-plane-8444dff7f9-4lh9k 2/2 Running 0 27m 10.0.0.3 ci-ln-t487nnb-72292-mdcnq-master-1 <none> <none> ovnkube-control-plane-8444dff7f9-5rjh9 2/2 Running 0 27m 10.0.0.4 ci-ln-t487nnb-72292-mdcnq-master-2 <none> <none> ovnkube-node-55xs2 8/8 Running 0 26m 10.0.0.4 ci-ln-t487nnb-72292-mdcnq-master-2 <none> <none> ovnkube-node-7r84r 8/8 Running 0 17m 10.0.128.3 ci-ln-t487nnb-72292-mdcnq-worker-b-wbz7z <none> <none> ovnkube-node-bqq8p 8/8 Running 0 17m 10.0.128.2 ci-ln-t487nnb-72292-mdcnq-worker-a-lh7ms <none> <none> ovnkube-node-mkj4f 8/8 Running 0 27m 10.0.0.5 ci-ln-t487nnb-72292-mdcnq-master-0 <none> <none> ovnkube-node-mlr8k 8/8 Running 0 27m 10.0.0.3 ci-ln-t487nnb-72292-mdcnq-master-1 <none> <none> ovnkube-node-wqn2m 8/8 Running 0 17m 10.0.128.4 ci-ln-t487nnb-72292-mdcnq-worker-c-przlm <none> <none>", "oc rsh -c sbdb -n openshift-ovn-kubernetes ovnkube-node-55xs2", "ovn-sbctl show", "Chassis \"5db31703-35e9-413b-8cdf-69e7eecb41f7\" hostname: ci-ln-9gp362t-72292-v2p94-worker-a-8bmwz Encap geneve ip: \"10.0.128.4\" options: {csum=\"true\"} Port_Binding tstor-ci-ln-9gp362t-72292-v2p94-worker-a-8bmwz Chassis \"070debed-99b7-4bce-b17d-17e720b7f8bc\" hostname: ci-ln-9gp362t-72292-v2p94-worker-b-svmp6 Encap geneve ip: \"10.0.128.2\" options: {csum=\"true\"} Port_Binding k8s-ci-ln-9gp362t-72292-v2p94-worker-b-svmp6 Port_Binding rtoe-GR_ci-ln-9gp362t-72292-v2p94-worker-b-svmp6 Port_Binding openshift-monitoring_alertmanager-main-1 Port_Binding rtoj-GR_ci-ln-9gp362t-72292-v2p94-worker-b-svmp6 Port_Binding etor-GR_ci-ln-9gp362t-72292-v2p94-worker-b-svmp6 Port_Binding cr-rtos-ci-ln-9gp362t-72292-v2p94-worker-b-svmp6 Port_Binding openshift-e2e-loki_loki-promtail-qcrcz Port_Binding jtor-GR_ci-ln-9gp362t-72292-v2p94-worker-b-svmp6 Port_Binding openshift-multus_network-metrics-daemon-mkd4t Port_Binding openshift-ingress-canary_ingress-canary-xtvj4 Port_Binding openshift-ingress_router-default-6c76cbc498-pvlqk Port_Binding openshift-dns_dns-default-zz582 Port_Binding openshift-monitoring_thanos-querier-57585899f5-lbf4f Port_Binding 
openshift-network-diagnostics_network-check-target-tn228 Port_Binding openshift-monitoring_prometheus-k8s-0 Port_Binding openshift-image-registry_image-registry-68899bd877-xqxjj Chassis \"179ba069-0af1-401c-b044-e5ba90f60fea\" hostname: ci-ln-9gp362t-72292-v2p94-master-0 Encap geneve ip: \"10.0.0.5\" options: {csum=\"true\"} Port_Binding tstor-ci-ln-9gp362t-72292-v2p94-master-0 Chassis \"68c954f2-5a76-47be-9e84-1cb13bd9dab9\" hostname: ci-ln-9gp362t-72292-v2p94-worker-c-mjf9w Encap geneve ip: \"10.0.128.3\" options: {csum=\"true\"} Port_Binding tstor-ci-ln-9gp362t-72292-v2p94-worker-c-mjf9w Chassis \"2de65d9e-9abf-4b6e-a51d-a1e038b4d8af\" hostname: ci-ln-9gp362t-72292-v2p94-master-2 Encap geneve ip: \"10.0.0.4\" options: {csum=\"true\"} Port_Binding tstor-ci-ln-9gp362t-72292-v2p94-master-2 Chassis \"1d371cb8-5e21-44fd-9025-c4b162cc4247\" hostname: ci-ln-9gp362t-72292-v2p94-master-1 Encap geneve ip: \"10.0.0.3\" options: {csum=\"true\"} Port_Binding tstor-ci-ln-9gp362t-72292-v2p94-master-1", "oc exec -n openshift-ovn-kubernetes -it ovnkube-node-55xs2 -c sbdb ovn-sbctl --help", "git clone [email protected]:openshift/network-tools.git", "cd network-tools", "./debug-scripts/network-tools -h", "./debug-scripts/network-tools ovn-db-run-command ovn-nbctl lr-list", "944a7b53-7948-4ad2-a494-82b55eeccf87 (GR_ci-ln-54932yb-72292-kd676-worker-c-rzj99) 84bd4a4c-4b0b-4a47-b0cf-a2c32709fc53 (ovn_cluster_router)", "./debug-scripts/network-tools ovn-db-run-command ovn-sbctl find Port_Binding type=localnet", "_uuid : d05298f5-805b-4838-9224-1211afc2f199 additional_chassis : [] additional_encap : [] chassis : [] datapath : f3c2c959-743b-4037-854d-26627902597c encap : [] external_ids : {} gateway_chassis : [] ha_chassis_group : [] logical_port : br-ex_ci-ln-54932yb-72292-kd676-worker-c-rzj99 mac : [unknown] mirror_rules : [] nat_addresses : [] options : {network_name=physnet} parent_port : [] port_security : [] requested_additional_chassis: [] requested_chassis : [] tag : [] tunnel_key : 2 type : localnet up : false virtual_parent : [] [...]", "./debug-scripts/network-tools ovn-db-run-command ovn-sbctl find Port_Binding type=l3gateway", "_uuid : 5207a1f3-1cf3-42f1-83e9-387bbb06b03c additional_chassis : [] additional_encap : [] chassis : ca6eb600-3a10-4372-a83e-e0d957c4cd92 datapath : f3c2c959-743b-4037-854d-26627902597c encap : [] external_ids : {} gateway_chassis : [] ha_chassis_group : [] logical_port : etor-GR_ci-ln-54932yb-72292-kd676-worker-c-rzj99 mac : [\"42:01:0a:00:80:04\"] mirror_rules : [] nat_addresses : [\"42:01:0a:00:80:04 10.0.128.4\"] options : {l3gateway-chassis=\"84737c36-b383-4c83-92c5-2bd5b3c7e772\", peer=rtoe-GR_ci-ln-54932yb-72292-kd676-worker-c-rzj99} parent_port : [] port_security : [] requested_additional_chassis: [] requested_chassis : [] tag : [] tunnel_key : 1 type : l3gateway up : true virtual_parent : [] _uuid : 6088d647-84f2-43f2-b53f-c9d379042679 additional_chassis : [] additional_encap : [] chassis : ca6eb600-3a10-4372-a83e-e0d957c4cd92 datapath : dc9cea00-d94a-41b8-bdb0-89d42d13aa2e encap : [] external_ids : {} gateway_chassis : [] ha_chassis_group : [] logical_port : jtor-GR_ci-ln-54932yb-72292-kd676-worker-c-rzj99 mac : [router] mirror_rules : [] nat_addresses : [] options : {l3gateway-chassis=\"84737c36-b383-4c83-92c5-2bd5b3c7e772\", peer=rtoj-GR_ci-ln-54932yb-72292-kd676-worker-c-rzj99} parent_port : [] port_security : [] requested_additional_chassis: [] requested_chassis : [] tag : [] tunnel_key : 2 type : l3gateway up : true virtual_parent : [] [...]", 
"./debug-scripts/network-tools ovn-db-run-command ovn-sbctl find Port_Binding type=patch", "_uuid : 785fb8b6-ee5a-4792-a415-5b1cb855dac2 additional_chassis : [] additional_encap : [] chassis : [] datapath : f1ddd1cc-dc0d-43b4-90ca-12651305acec encap : [] external_ids : {} gateway_chassis : [] ha_chassis_group : [] logical_port : stor-ci-ln-54932yb-72292-kd676-worker-c-rzj99 mac : [router] mirror_rules : [] nat_addresses : [\"0a:58:0a:80:02:01 10.128.2.1 is_chassis_resident(\\\"cr-rtos-ci-ln-54932yb-72292-kd676-worker-c-rzj99\\\")\"] options : {peer=rtos-ci-ln-54932yb-72292-kd676-worker-c-rzj99} parent_port : [] port_security : [] requested_additional_chassis: [] requested_chassis : [] tag : [] tunnel_key : 1 type : patch up : false virtual_parent : [] _uuid : c01ff587-21a5-40b4-8244-4cd0425e5d9a additional_chassis : [] additional_encap : [] chassis : [] datapath : f6795586-bf92-4f84-9222-efe4ac6a7734 encap : [] external_ids : {} gateway_chassis : [] ha_chassis_group : [] logical_port : rtoj-ovn_cluster_router mac : [\"0a:58:64:40:00:01 100.64.0.1/16\"] mirror_rules : [] nat_addresses : [] options : {peer=jtor-ovn_cluster_router} parent_port : [] port_security : [] requested_additional_chassis: [] requested_chassis : [] tag : [] tunnel_key : 1 type : patch up : false virtual_parent : [] [...]", "oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-node -o json | jq '.items[0].spec.containers[] | .name,.readinessProbe'", "oc get events -n openshift-ovn-kubernetes", "oc describe pod ovnkube-node-9lqfk -n openshift-ovn-kubernetes", "oc get co/network -o json | jq '.status.conditions[]'", "for p in USD(oc get pods --selector app=ovnkube-node -n openshift-ovn-kubernetes -o jsonpath='{range.items[*]}{\" \"}{.metadata.name}'); do echo === USDp ===; get pods -n openshift-ovn-kubernetes USDp -o json | jq '.status.containerStatuses[] | .name, .ready'; done", "ALERT_MANAGER=USD(oc get route alertmanager-main -n openshift-monitoring -o jsonpath='{@.spec.host}')", "curl -s -k -H \"Authorization: Bearer USD(oc create token prometheus-k8s -n openshift-monitoring)\" https://USDALERT_MANAGER/api/v1/alerts | jq '.data[] | \"\\(.labels.severity) \\(.labels.alertname) \\(.labels.pod) \\(.labels.container) \\(.labels.endpoint) \\(.labels.instance)\"'", "oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -s 'http://localhost:9090/api/v1/rules' | jq '.data.groups[].rules[] | select(((.name|contains(\"ovn\")) or (.name|contains(\"OVN\")) or (.name|contains(\"Ovn\")) or (.name|contains(\"North\")) or (.name|contains(\"South\"))) and .type==\"alerting\")'", "oc logs -f <pod_name> -c <container_name> -n <namespace>", "oc logs ovnkube-node-5dx44 -n openshift-ovn-kubernetes", "oc logs -f ovnkube-node-5dx44 -c ovnkube-controller -n openshift-ovn-kubernetes", "for p in USD(oc get pods --selector app=ovnkube-node -n openshift-ovn-kubernetes -o jsonpath='{range.items[*]}{\" \"}{.metadata.name}'); do echo === USDp ===; for container in USD(oc get pods -n openshift-ovn-kubernetes USDp -o json | jq -r '.status.containerStatuses[] | .name');do echo ---USDcontainer---; logs -c USDcontainer USDp -n openshift-ovn-kubernetes --tail=5; done; done", "oc logs -l app=ovnkube-node -n openshift-ovn-kubernetes --all-containers --tail 5", "oc get po -o wide -n openshift-ovn-kubernetes", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES ovnkube-control-plane-65497d4548-9ptdr 2/2 Running 2 (128m ago) 147m 10.0.0.3 ci-ln-3njdr9b-72292-5nwkp-master-0 <none> <none> 
ovnkube-control-plane-65497d4548-j6zfk 2/2 Running 0 147m 10.0.0.5 ci-ln-3njdr9b-72292-5nwkp-master-2 <none> <none> ovnkube-node-5dx44 8/8 Running 0 146m 10.0.0.3 ci-ln-3njdr9b-72292-5nwkp-master-0 <none> <none> ovnkube-node-dpfn4 8/8 Running 0 146m 10.0.0.4 ci-ln-3njdr9b-72292-5nwkp-master-1 <none> <none> ovnkube-node-kwc9l 8/8 Running 0 134m 10.0.128.2 ci-ln-3njdr9b-72292-5nwkp-worker-a-2fjcj <none> <none> ovnkube-node-mcrhl 8/8 Running 0 134m 10.0.128.4 ci-ln-3njdr9b-72292-5nwkp-worker-c-v9x5v <none> <none> ovnkube-node-nsct4 8/8 Running 0 146m 10.0.0.5 ci-ln-3njdr9b-72292-5nwkp-master-2 <none> <none> ovnkube-node-zrj9f 8/8 Running 0 134m 10.0.128.3 ci-ln-3njdr9b-72292-5nwkp-worker-b-v78h7 <none> <none>", "kind: ConfigMap apiVersion: v1 metadata: name: env-overrides namespace: openshift-ovn-kubernetes data: ci-ln-3njdr9b-72292-5nwkp-master-0: | 1 # This sets the log level for the ovn-kubernetes node process: OVN_KUBE_LOG_LEVEL=5 # You might also/instead want to enable debug logging for ovn-controller: OVN_LOG_LEVEL=dbg ci-ln-3njdr9b-72292-5nwkp-master-2: | # This sets the log level for the ovn-kubernetes node process: OVN_KUBE_LOG_LEVEL=5 # You might also/instead want to enable debug logging for ovn-controller: OVN_LOG_LEVEL=dbg _master: | 2 # This sets the log level for the ovn-kubernetes master process as well as the ovn-dbchecker: OVN_KUBE_LOG_LEVEL=5 # You might also/instead want to enable debug logging for northd, nbdb and sbdb on all masters: OVN_LOG_LEVEL=dbg", "oc apply -n openshift-ovn-kubernetes -f env-overrides.yaml", "configmap/env-overrides.yaml created", "oc delete pod -n openshift-ovn-kubernetes --field-selector spec.nodeName=ci-ln-3njdr9b-72292-5nwkp-master-0 -l app=ovnkube-node", "oc delete pod -n openshift-ovn-kubernetes --field-selector spec.nodeName=ci-ln-3njdr9b-72292-5nwkp-master-2 -l app=ovnkube-node", "oc delete pod -n openshift-ovn-kubernetes -l app=ovnkube-node", "oc logs -n openshift-ovn-kubernetes --all-containers --prefix ovnkube-node-<xxxx> | grep -E -m 10 '(Logging config:|vconsole|DBG)'", "[pod/ovnkube-node-2cpjc/sbdb] + exec /usr/share/ovn/scripts/ovn-ctl --no-monitor '--ovn-sb-log=-vconsole:info -vfile:off -vPATTERN:console:%D{%Y-%m-%dT%H:%M:%S.###Z}|%05N|%c%T|%p|%m' run_sb_ovsdb [pod/ovnkube-node-2cpjc/ovnkube-controller] I1012 14:39:59.984506 35767 config.go:2247] Logging config: {File: CNIFile:/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log LibovsdbFile:/var/log/ovnkube/libovsdb.log Level:5 LogFileMaxSize:100 LogFileMaxBackups:5 LogFileMaxAge:0 ACLLoggingRateLimit:20} [pod/ovnkube-node-2cpjc/northd] + exec ovn-northd --no-chdir -vconsole:info -vfile:off '-vPATTERN:console:%D{%Y-%m-%dT%H:%M:%S.###Z}|%05N|%c%T|%p|%m' --pidfile /var/run/ovn/ovn-northd.pid --n-threads=1 [pod/ovnkube-node-2cpjc/nbdb] + exec /usr/share/ovn/scripts/ovn-ctl --no-monitor '--ovn-nb-log=-vconsole:info -vfile:off -vPATTERN:console:%D{%Y-%m-%dT%H:%M:%S.###Z}|%05N|%c%T|%p|%m' run_nb_ovsdb [pod/ovnkube-node-2cpjc/ovn-controller] 2023-10-12T14:39:54.552Z|00002|hmap|DBG|lib/shash.c:114: 1 bucket with 6+ nodes, including 1 bucket with 6 nodes (32 nodes total across 32 buckets) [pod/ovnkube-node-2cpjc/ovn-controller] 2023-10-12T14:39:54.553Z|00003|hmap|DBG|lib/shash.c:114: 1 bucket with 6+ nodes, including 1 bucket with 6 nodes (64 nodes total across 64 buckets) [pod/ovnkube-node-2cpjc/ovn-controller] 2023-10-12T14:39:54.553Z|00004|hmap|DBG|lib/shash.c:114: 1 bucket with 6+ nodes, including 1 bucket with 7 nodes (32 nodes total across 32 buckets) [pod/ovnkube-node-2cpjc/ovn-controller] 
2023-10-12T14:39:54.553Z|00005|reconnect|DBG|unix:/var/run/openvswitch/db.sock: entering BACKOFF [pod/ovnkube-node-2cpjc/ovn-controller] 2023-10-12T14:39:54.553Z|00007|reconnect|DBG|unix:/var/run/openvswitch/db.sock: entering CONNECTING [pod/ovnkube-node-2cpjc/ovn-controller] 2023-10-12T14:39:54.553Z|00008|ovsdb_cs|DBG|unix:/var/run/openvswitch/db.sock: SERVER_SCHEMA_REQUESTED -> SERVER_SCHEMA_REQUESTED at lib/ovsdb-cs.c:423", "for f in USD(oc -n openshift-ovn-kubernetes get po -l 'app=ovnkube-node' --no-headers -o custom-columns=N:.metadata.name) ; do echo \"---- USDf ----\" ; oc -n openshift-ovn-kubernetes exec -c ovnkube-controller USDf -- pgrep -a -f init-ovnkube-controller | grep -P -o '^.*loglevel\\s+\\d' ; done", "---- ovnkube-node-2dt57 ---- 60981 /usr/bin/ovnkube --init-ovnkube-controller xpst8-worker-c-vmh5n.c.openshift-qe.internal --init-node xpst8-worker-c-vmh5n.c.openshift-qe.internal --config-file=/run/ovnkube-config/ovnkube.conf --ovn-empty-lb-events --loglevel 4 ---- ovnkube-node-4zznh ---- 178034 /usr/bin/ovnkube --init-ovnkube-controller xpst8-master-2.c.openshift-qe.internal --init-node xpst8-master-2.c.openshift-qe.internal --config-file=/run/ovnkube-config/ovnkube.conf --ovn-empty-lb-events --loglevel 4 ---- ovnkube-node-548sx ---- 77499 /usr/bin/ovnkube --init-ovnkube-controller xpst8-worker-a-fjtnb.c.openshift-qe.internal --init-node xpst8-worker-a-fjtnb.c.openshift-qe.internal --config-file=/run/ovnkube-config/ovnkube.conf --ovn-empty-lb-events --loglevel 4 ---- ovnkube-node-6btrf ---- 73781 /usr/bin/ovnkube --init-ovnkube-controller xpst8-worker-b-p8rww.c.openshift-qe.internal --init-node xpst8-worker-b-p8rww.c.openshift-qe.internal --config-file=/run/ovnkube-config/ovnkube.conf --ovn-empty-lb-events --loglevel 4 ---- ovnkube-node-fkc9r ---- 130707 /usr/bin/ovnkube --init-ovnkube-controller xpst8-master-0.c.openshift-qe.internal --init-node xpst8-master-0.c.openshift-qe.internal --config-file=/run/ovnkube-config/ovnkube.conf --ovn-empty-lb-events --loglevel 5 ---- ovnkube-node-tk9l4 ---- 181328 /usr/bin/ovnkube --init-ovnkube-controller xpst8-master-1.c.openshift-qe.internal --init-node xpst8-master-1.c.openshift-qe.internal --config-file=/run/ovnkube-config/ovnkube.conf --ovn-empty-lb-events --loglevel 4", "oc get podnetworkconnectivitychecks -n openshift-network-diagnostics", "oc get podnetworkconnectivitychecks -n openshift-network-diagnostics -o json | jq '.items[]| .spec.targetEndpoint,.status.successes[0]'", "oc get podnetworkconnectivitychecks -n openshift-network-diagnostics -o json | jq '.items[]| .spec.targetEndpoint,.status.failures[0]'", "oc get podnetworkconnectivitychecks -n openshift-network-diagnostics -o json | jq '.items[]| .spec.targetEndpoint,.status.outages[0]'", "oc exec prometheus-k8s-0 -n openshift-monitoring -- promtool query instant http://localhost:9090 '{component=\"openshift-network-diagnostics\"}'", "oc exec prometheus-k8s-0 -n openshift-monitoring -- promtool query instant http://localhost:9090 '{component=\"openshift-network-diagnostics\"}'", "apiVersion: policy.networking.k8s.io/v1alpha1 kind: AdminNetworkPolicy metadata: name: sample-anp-deny-pass-rules 1 spec: priority: 50 2 subject: namespaces: matchLabels: kubernetes.io/metadata.name: example.name 3 ingress: 4 - name: \"deny-all-ingress-tenant-1\" 5 action: \"Deny\" from: - pods: namespaces: 6 namespaceSelector: matchLabels: custom-anp: tenant-1 podSelector: matchLabels: custom-anp: tenant-1 7 egress: 8 - name: \"pass-all-egress-to-tenant-1\" action: \"Pass\" to: - pods: 
namespaces: namespaceSelector: matchLabels: custom-anp: tenant-1 podSelector: matchLabels: custom-anp: tenant-1", "apiVersion: policy.networking.k8s.io/v1alpha1 kind: AdminNetworkPolicy metadata: name: allow-monitoring spec: priority: 9 subject: namespaces: {} ingress: - name: \"allow-ingress-from-monitoring\" action: \"Allow\" from: - namespaces: namespaceSelector: matchLabels: kubernetes.io/metadata.name: monitoring", "apiVersion: policy.networking.k8s.io/v1alpha1 kind: AdminNetworkPolicy metadata: name: block-monitoring spec: priority: 5 subject: namespaces: matchLabels: security: restricted ingress: - name: \"deny-ingress-from-monitoring\" action: \"Deny\" from: - namespaces: namespaceSelector: matchLabels: kubernetes.io/metadata.name: monitoring", "apiVersion: policy.networking.k8s.io/v1alpha1 kind: AdminNetworkPolicy metadata: name: pass-monitoring spec: priority: 7 subject: namespaces: matchLabels: security: internal ingress: - name: \"pass-ingress-from-monitoring\" action: \"Pass\" from: - namespaces: namespaceSelector: matchLabels: kubernetes.io/metadata.name: monitoring", "apiVersion: policy.networking.k8s.io/v1alpha1 kind: BaselineAdminNetworkPolicy metadata: name: default 1 spec: subject: namespaces: matchLabels: kubernetes.io/metadata.name: example.name 2 ingress: 3 - name: \"deny-all-ingress-from-tenant-1\" 4 action: \"Deny\" from: - pods: namespaces: namespaceSelector: matchLabels: custom-banp: tenant-1 5 podSelector: matchLabels: custom-banp: tenant-1 6 egress: - name: \"allow-all-egress-to-tenant-1\" action: \"Allow\" to: - pods: namespaces: namespaceSelector: matchLabels: custom-banp: tenant-1 podSelector: matchLabels: custom-banp: tenant-1", "apiVersion: policy.networking.k8s.io/v1alpha1 kind: BaselineAdminNetworkPolicy metadata: name: default spec: subject: namespaces: matchLabels: security: internal ingress: - name: \"deny-ingress-from-monitoring\" action: \"Deny\" from: - namespaces: namespaceSelector: matchLabels: kubernetes.io/metadata.name: monitoring", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-monitoring namespace: tenant 1 spec: podSelector: policyTypes: - Ingress ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: monitoring", "POD=USD(oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-control-plane -o name | head -1 | awk -F '/' '{print USDNF}')", "oc cp -n openshift-ovn-kubernetes USDPOD:/usr/bin/ovnkube-trace -c ovnkube-cluster-manager ovnkube-trace", "chmod +x ovnkube-trace", "./ovnkube-trace -help", "Usage of ./ovnkube-trace: -addr-family string Address family (ip4 or ip6) to be used for tracing (default \"ip4\") -dst string dest: destination pod name -dst-ip string destination IP address (meant for tests to external targets) -dst-namespace string k8s namespace of dest pod (default \"default\") -dst-port string dst-port: destination port (default \"80\") -kubeconfig string absolute path to the kubeconfig file -loglevel string loglevel: klog level (default \"0\") -ovn-config-namespace string namespace used by ovn-config itself -service string service: destination service name -skip-detrace skip ovn-detrace command -src string src: source pod name -src-namespace string k8s namespace of source pod (default \"default\") -tcp use tcp transport protocol -udp use udp transport protocol", "oc run web --namespace=default --image=quay.io/openshifttest/nginx --labels=\"app=web\" --expose --port=80", "oc get pods -n openshift-dns", "NAME READY STATUS RESTARTS AGE dns-default-8s42x 2/2 Running 0 5h8m 
dns-default-mdw6r 2/2 Running 0 4h58m dns-default-p8t5h 2/2 Running 0 4h58m dns-default-rl6nk 2/2 Running 0 5h8m dns-default-xbgqx 2/2 Running 0 5h8m dns-default-zv8f6 2/2 Running 0 4h58m node-resolver-62jjb 1/1 Running 0 5h8m node-resolver-8z4cj 1/1 Running 0 4h59m node-resolver-bq244 1/1 Running 0 5h8m node-resolver-hc58n 1/1 Running 0 4h59m node-resolver-lm6z4 1/1 Running 0 5h8m node-resolver-zfx5k 1/1 Running 0 5h", "./ovnkube-trace -src-namespace default \\ 1 -src web \\ 2 -dst-namespace openshift-dns \\ 3 -dst dns-default-p8t5h \\ 4 -udp -dst-port 53 \\ 5 -loglevel 0 6", "ovn-trace source pod to destination pod indicates success from web to dns-default-p8t5h ovn-trace destination pod to source pod indicates success from dns-default-p8t5h to web ovs-appctl ofproto/trace source pod to destination pod indicates success from web to dns-default-p8t5h ovs-appctl ofproto/trace destination pod to source pod indicates success from dns-default-p8t5h to web ovn-detrace source pod to destination pod indicates success from web to dns-default-p8t5h ovn-detrace destination pod to source pod indicates success from dns-default-p8t5h to web", "ovn-trace source pod to destination pod indicates success from web to dns-default-8s42x ovn-trace (remote) source pod to destination pod indicates success from web to dns-default-8s42x ovn-trace destination pod to source pod indicates success from dns-default-8s42x to web ovn-trace (remote) destination pod to source pod indicates success from dns-default-8s42x to web ovs-appctl ofproto/trace source pod to destination pod indicates success from web to dns-default-8s42x ovs-appctl ofproto/trace destination pod to source pod indicates success from dns-default-8s42x to web ovn-detrace source pod to destination pod indicates success from web to dns-default-8s42x ovn-detrace destination pod to source pod indicates success from dns-default-8s42x to web", "./ovnkube-trace -src-namespace default -src web -dst-namespace openshift-dns -dst dns-default-467qw -udp -dst-port 53 -loglevel 2", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default namespace: default spec: podSelector: {} ingress: []", "oc apply -f deny-by-default.yaml", "networkpolicy.networking.k8s.io/deny-by-default created", "oc run web --namespace=default --image=quay.io/openshifttest/nginx --labels=\"app=web\" --expose --port=80", "oc create namespace prod", "oc label namespace/prod purpose=production", "oc run test-6459 --namespace=prod --rm -i -t --image=alpine -- sh", "./ovnkube-trace -src-namespace prod -src test-6459 -dst-namespace default -dst web -tcp -dst-port 80 -loglevel 0", "ovn-trace source pod to destination pod indicates failure from test-6459 to web", "./ovnkube-trace -src-namespace prod -src test-6459 -dst-namespace default -dst web -tcp -dst-port 80 -loglevel 2", "------------------------------------------------ 3. ls_out_acl_hint (northd.c:7454): !ct.new && ct.est && !ct.rpl && ct_mark.blocked == 0, priority 4, uuid 12efc456 reg0[8] = 1; reg0[10] = 1; next; 5. ls_out_acl_action (northd.c:7835): reg8[30..31] == 0, priority 500, uuid 69372c5d reg8[30..31] = 1; next(4); 5. ls_out_acl_action (northd.c:7835): reg8[30..31] == 1, priority 500, uuid 2fa0af89 reg8[30..31] = 2; next(4); 4. 
ls_out_acl_eval (northd.c:7691): reg8[30..31] == 2 && reg0[10] == 1 && (outport == @a16982411286042166782_ingressDefaultDeny), priority 2000, uuid 447d0dab reg8[17] = 1; ct_commit { ct_mark.blocked = 1; }; 1 next;", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: web-allow-prod namespace: default spec: podSelector: matchLabels: app: web policyTypes: - Ingress ingress: - from: - namespaceSelector: matchLabels: purpose: production", "oc apply -f web-allow-prod.yaml", "./ovnkube-trace -src-namespace prod -src test-6459 -dst-namespace default -dst web -tcp -dst-port 80 -loglevel 0", "ovn-trace source pod to destination pod indicates success from test-6459 to web ovn-trace destination pod to source pod indicates success from web to test-6459 ovs-appctl ofproto/trace source pod to destination pod indicates success from test-6459 to web ovs-appctl ofproto/trace destination pod to source pod indicates success from web to test-6459 ovn-detrace source pod to destination pod indicates success from test-6459 to web ovn-detrace destination pod to source pod indicates success from web to test-6459", "wget -qO- --timeout=2 http://web.default", "<!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href=\"http://nginx.org/\">nginx.org</a>.<br/> Commercial support is available at <a href=\"http://nginx.com/\">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html>", "oc get Network.config.openshift.io cluster -o yaml > cluster-openshift-sdn.yaml", "#!/bin/bash if [ -n \"USDOVN_SDN_MIGRATION_TIMEOUT\" ] && [ \"USDOVN_SDN_MIGRATION_TIMEOUT\" = \"0s\" ]; then unset OVN_SDN_MIGRATION_TIMEOUT fi #loops the timeout command of the script to repeatedly check the cluster Operators until all are available. 
co_timeout=USD{OVN_SDN_MIGRATION_TIMEOUT:-1200s} timeout \"USDco_timeout\" bash <<EOT until oc wait co --all --for='condition=AVAILABLE=True' --timeout=10s && oc wait co --all --for='condition=PROGRESSING=False' --timeout=10s && oc wait co --all --for='condition=DEGRADED=False' --timeout=10s; do sleep 10 echo \"Some ClusterOperators Degraded=False,Progressing=True,or Available=False\"; done EOT", "oc patch Network.operator.openshift.io cluster --type='merge' --patch '{\"spec\":{\"migration\":null}}'", "oc get nncp", "NAME STATUS REASON bondmaster0 Available SuccessfullyConfigured", "oc delete nncp <nncp_manifest_filename>", "oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"migration\": { \"networkType\": \"OVNKubernetes\" } } }'", "oc get mcp", "oc get co", "oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"migration\": { \"networkType\": \"OVNKubernetes\", \"features\": { \"egressIP\": <bool>, \"egressFirewall\": <bool>, \"multicast\": <bool> } } } }'", "oc patch Network.operator.openshift.io cluster --type=merge --patch '{ \"spec\":{ \"defaultNetwork\":{ \"ovnKubernetesConfig\":{ \"mtu\":<mtu>, \"genevePort\":<port>, \"v4InternalSubnet\":\"<ipv4_subnet>\" }}}}'", "oc patch Network.operator.openshift.io cluster --type=merge --patch '{ \"spec\":{ \"defaultNetwork\":{ \"ovnKubernetesConfig\":{ \"mtu\":1200 }}}}'", "oc get mcp", "oc describe node | egrep \"hostname|machineconfig\"", "kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done", "oc get machineconfig <config_name> -o yaml | grep ExecStart", "ExecStart=/usr/local/bin/configure-ovs.sh OVNKubernetes", "oc get pod -n openshift-machine-config-operator", "NAME READY STATUS RESTARTS AGE machine-config-controller-75f756f89d-sjp8b 1/1 Running 0 37m machine-config-daemon-5cf4b 2/2 Running 0 43h machine-config-daemon-7wzcd 2/2 Running 0 43h machine-config-daemon-fc946 2/2 Running 0 43h machine-config-daemon-g2v28 2/2 Running 0 43h machine-config-daemon-gcl4f 2/2 Running 0 43h machine-config-daemon-l5tnv 2/2 Running 0 43h machine-config-operator-79d9c55d5-hth92 1/1 Running 0 37m machine-config-server-bsc8h 1/1 Running 0 43h machine-config-server-hklrm 1/1 Running 0 43h machine-config-server-k9rtx 1/1 Running 0 43h", "oc logs <pod> -n openshift-machine-config-operator", "oc patch Network.config.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"networkType\": \"OVNKubernetes\" } }'", "oc patch Network.config.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"clusterNetwork\": [ { \"cidr\": \"<cidr>\", \"hostPrefix\": <prefix> } ], \"networkType\": \"OVNKubernetes\" } }'", "oc -n openshift-multus rollout status daemonset/multus", "Waiting for daemon set \"multus\" rollout to finish: 1 out of 6 new pods have been updated Waiting for daemon set \"multus\" rollout to finish: 5 of 6 updated pods are available daemon set \"multus\" successfully rolled out", "#!/bin/bash readarray -t POD_NODES <<< \"USD(oc get pod -n openshift-machine-config-operator -o wide| grep daemon|awk '{print USD1\" \"USD7}')\" for i in \"USD{POD_NODES[@]}\" do read -r POD NODE <<< \"USDi\" until oc rsh -n openshift-machine-config-operator \"USDPOD\" chroot /rootfs shutdown -r +1 do echo \"cannot reboot node USDNODE, retry\" && 
sleep 3 done done", "#!/bin/bash for ip in USD(oc get nodes -o jsonpath='{.items[*].status.addresses[?(@.type==\"InternalIP\")].address}') do echo \"reboot node USDip\" ssh -o StrictHostKeyChecking=no core@USDip sudo shutdown -r -t 3 done", "oc get network.config/cluster -o jsonpath='{.status.networkType}{\"\\n\"}'", "oc get nodes", "oc get pods --all-namespaces -o wide --sort-by='{.spec.nodeName}'", "oc get co", "oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"migration\": null } }'", "oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"defaultNetwork\": { \"openshiftSDNConfig\": null } } }'", "oc delete namespace openshift-sdn", "oc get egressnetworkpolicies.network.openshift.io -A", "NAMESPACE NAME AGE <namespace> <example_egressfirewall> 5d", "oc get egressnetworkpolicy <example_egressfirewall> -n <namespace> -o yaml", "apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: <example_egress_policy> namespace: <namespace> spec: egress: - type: Allow to: cidrSelector: 0.0.0.0/0 - type: Deny to: cidrSelector: 10.0.0.0/8", "oc get netnamespace -A | awk 'USD3 != \"\"'", "NAME NETID EGRESS IPS namespace1 14173093 [\"10.0.158.173\"] namespace2 14173020 [\"10.0.158.173\"]", "oc get netnamespace -o json | jq -r '.items[] | select(.metadata.annotations.\"netnamespace.network.openshift.io/multicast-enabled\" == \"true\") | .metadata.name'", "namespace1 namespace3", "oc get networkpolicy -n <namespace>", "NAME POD-SELECTOR AGE allow-multicast app=my-app 11m", "The cluster configuration is invalid (network type limited live migration is not supported for pods with `pod.network.openshift.io/assign-macvlan` annotation. Please remove all egress router pods). Use `oc edit network.config.openshift.io cluster` to fix.", "oc get pods --all-namespaces -o json | jq '.items[] | select(.metadata.annotations.\"pod.network.openshift.io/assign-macvlan\" == \"true\") | {name: .metadata.name, namespace: .metadata.namespace}'", "{ \"name\": \"egress-multi\", \"namespace\": \"egress-router-project\" }", "oc delete pod <egress_pod_name> -n <egress_router_project>", "oc patch Network.config.openshift.io cluster --type='merge' --patch '{\"metadata\":{\"annotations\":{\"network.openshift.io/network-type-migration\":\"\"}},\"spec\":{\"networkType\":\"OVNKubernetes\"}}'", "oc get network.config.openshift.io cluster -o jsonpath='{.status.networkType}'", "oc get network.config cluster -o=jsonpath='{.status.conditions}' | jq .", "oc patch network.operator.openshift.io cluster --type='merge' -p='{\"spec\":{\"defaultNetwork\":{\"ovnKubernetesConfig\":{\"ipv4\":{\"internalJoinSubnet\": \"100.63.0.0/16\"}}}}}'", "oc patch network.operator.openshift.io cluster --type='merge' -p='{\"spec\":{\"defaultNetwork\":{\"ovnKubernetesConfig\":{\"ipv4\":{\"internalTransitSwitchSubnet\": \"100.99.0.0/16\"}}}}}'", "oc get egressfirewalls.k8s.ovn.org -A", "NAMESPACE NAME AGE <namespace> <example_egressfirewall> 5d", "oc get egressfirewall <example_egressfirewall> -n <namespace> -o yaml", "apiVersion: k8s.ovn.org/v1 kind: EgressFirewall metadata: name: <example_egress_policy> namespace: <namespace> spec: egress: - type: Allow to: cidrSelector: 192.168.0.0/16 - type: Deny to: cidrSelector: 0.0.0.0/0", "oc get egressip", "NAME EGRESSIPS ASSIGNED NODE ASSIGNED EGRESSIPS egress-sample 192.0.2.10 ip-10-0-42-79.us-east-2.compute.internal 192.0.2.10 egressip-sample-2 192.0.2.14 ip-10-0-42-79.us-east-2.compute.internal 192.0.2.14", "oc get egressip 
<egressip_name> -o yaml", "apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"k8s.ovn.org/v1\",\"kind\":\"EgressIP\",\"metadata\":{\"annotations\":{},\"name\":\"egressip-sample\"},\"spec\":{\"egressIPs\":[\"192.0.2.12\",\"192.0.2.13\"],\"namespaceSelector\":{\"matchLabels\":{\"name\":\"my-namespace\"}}}} creationTimestamp: \"2024-06-27T15:48:36Z\" generation: 7 name: egressip-sample resourceVersion: \"125511578\" uid: b65833c8-781f-4cc9-bc96-d970259a7631 spec: egressIPs: - 192.0.2.12 - 192.0.2.13 namespaceSelector: matchLabels: name: my-namespace", "oc get namespace -o json | jq -r '.items[] | select(.metadata.annotations.\"k8s.ovn.org/multicast-enabled\" == \"true\") | .metadata.name'", "namespace1 namespace3", "oc describe namespace <namespace>", "Name: my-namespace Labels: kubernetes.io/metadata.name=my-namespace pod-security.kubernetes.io/audit=restricted pod-security.kubernetes.io/audit-version=v1.24 pod-security.kubernetes.io/warn=restricted pod-security.kubernetes.io/warn-version=v1.24 Annotations: k8s.ovn.org/multicast-enabled: true openshift.io/sa.scc.mcs: s0:c25,c0 openshift.io/sa.scc.supplemental-groups: 1000600000/10000 openshift.io/sa.scc.uid-range: 1000600000/10000 Status: Active", "oc get networkpolicy -n <namespace>", "NAME POD-SELECTOR AGE allow-multicast app=my-app 11m", "oc describe networkpolicy allow-multicast -n <namespace>", "Name: allow-multicast Namespace: my-namespace Created on: 2024-07-24 14:55:03 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: app=my-app Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: IPBlock: CIDR: 224.0.0.0/4 Except: Allowing egress traffic: To Port: <any> (traffic allowed to all ports) To: IPBlock: CIDR: 224.0.0.0/4 Except: Policy Types: Ingress, Egress", "oc create token prometheus-k8s -n openshift-monitoring", "eyJhbGciOiJSUzI1NiIsImtpZCI6IlZiSUtwclcwbEJ2VW9We", "oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -k -H \"Authorization: <eyJhbGciOiJSUzI1NiIsImtpZCI6IlZiSUtwclcwbEJ2VW9We...>\" \"https://<openshift_API_endpoint>\" --data-urlencode \"query=openshift_network_operator_live_migration_condition\" | jq", "\"status\": \"success\", \"data\": { \"resultType\": \"vector\", \"result\": [ { \"metric\": { \"__name__\": \"openshift_network_operator_live_migration_condition\", \"container\": \"network-operator\", \"endpoint\": \"metrics\", \"instance\": \"10.0.83.62:9104\", \"job\": \"metrics\", \"namespace\": \"openshift-network-operator\", \"pod\": \"network-operator-6c87754bc6-c8qld\", \"prometheus\": \"openshift-monitoring/k8s\", \"service\": \"metrics\", \"type\": \"NetworkTypeMigrationInProgress\" }, \"value\": [ 1717653579.587, \"1\" ] },", "oc patch MachineConfigPool master --type='merge' --patch '{ \"spec\": { \"paused\": true } }'", "oc patch MachineConfigPool worker --type='merge' --patch '{ \"spec\":{ \"paused\": true } }'", "oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"migration\": null } }'", "oc get Network.config cluster -o jsonpath='{.status.migration}'", "oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"migration\": { \"networkType\": \"OpenShiftSDN\" } } }'", "oc get Network.config cluster -o jsonpath='{.status.migration.networkType}'", "oc patch Network.config.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"networkType\": \"OpenShiftSDN\" } }'", "oc patch 
Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"migration\": { \"networkType\": \"OpenShiftSDN\", \"features\": { \"egressIP\": <bool>, \"egressFirewall\": <bool>, \"multicast\": <bool> } } } }'", "oc patch Network.operator.openshift.io cluster --type=merge --patch '{ \"spec\":{ \"defaultNetwork\":{ \"openshiftSDNConfig\":{ \"mtu\":<mtu>, \"vxlanPort\":<port> }}}}'", "oc patch Network.operator.openshift.io cluster --type=merge --patch '{ \"spec\":{ \"defaultNetwork\":{ \"openshiftSDNConfig\":{ \"mtu\":1200 }}}}'", "#!/bin/bash readarray -t POD_NODES <<< \"USD(oc get pod -n openshift-machine-config-operator -o wide| grep daemon|awk '{print USD1\" \"USD7}')\" for i in \"USD{POD_NODES[@]}\" do read -r POD NODE <<< \"USDi\" until oc rsh -n openshift-machine-config-operator \"USDPOD\" chroot /rootfs shutdown -r +1 do echo \"cannot reboot node USDNODE, retry\" && sleep 3 done done", "#!/bin/bash for ip in USD(oc get nodes -o jsonpath='{.items[*].status.addresses[?(@.type==\"InternalIP\")].address}') do echo \"reboot node USDip\" ssh -o StrictHostKeyChecking=no core@USDip sudo shutdown -r -t 3 done", "oc -n openshift-multus rollout status daemonset/multus", "Waiting for daemon set \"multus\" rollout to finish: 1 out of 6 new pods have been updated Waiting for daemon set \"multus\" rollout to finish: 5 of 6 updated pods are available daemon set \"multus\" successfully rolled out", "oc patch MachineConfigPool master --type='merge' --patch '{ \"spec\": { \"paused\": false } }'", "oc patch MachineConfigPool worker --type='merge' --patch '{ \"spec\": { \"paused\": false } }'", "oc describe node | egrep \"hostname|machineconfig\"", "kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done", "oc get machineconfig <config_name> -o yaml", "oc get Network.config/cluster -o jsonpath='{.status.networkType}{\"\\n\"}'", "oc get nodes", "oc get pod -n openshift-machine-config-operator", "NAME READY STATUS RESTARTS AGE machine-config-controller-75f756f89d-sjp8b 1/1 Running 0 37m machine-config-daemon-5cf4b 2/2 Running 0 43h machine-config-daemon-7wzcd 2/2 Running 0 43h machine-config-daemon-fc946 2/2 Running 0 43h machine-config-daemon-g2v28 2/2 Running 0 43h machine-config-daemon-gcl4f 2/2 Running 0 43h machine-config-daemon-l5tnv 2/2 Running 0 43h machine-config-operator-79d9c55d5-hth92 1/1 Running 0 37m machine-config-server-bsc8h 1/1 Running 0 43h machine-config-server-hklrm 1/1 Running 0 43h machine-config-server-k9rtx 1/1 Running 0 43h", "oc logs <pod> -n openshift-machine-config-operator", "oc get pods --all-namespaces -o wide --sort-by='{.spec.nodeName}'", "oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"migration\": null } }'", "oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"defaultNetwork\": { \"ovnKubernetesConfig\":null } } }'", "oc delete namespace openshift-ovn-kubernetes", "oc patch Network.config.openshift.io cluster --type='merge' --patch '{\"metadata\":{\"annotations\":{\"network.openshift.io/network-type-migration\":\"\"}},\"spec\":{\"networkType\":\"OpenShiftSDN\"}}'", "watch -n1 'oc get network.config/cluster -o json | jq \".status.conditions[]|\\\"\\\\(.type) \\\\(.status) \\\\(.reason) \\\\(.message)\\\"\" -r | column --table 
--table-columns NAME,STATUS,REASON,MESSAGE --table-columns-limit 4; echo; oc get mcp -o wide; echo; oc get node -o \"custom-columns=NAME:metadata.name,STATE:metadata.annotations.machineconfiguration\\\\.openshift\\\\.io/state,DESIRED:metadata.annotations.machineconfiguration\\\\.openshift\\\\.io/desiredConfig,CURRENT:metadata.annotations.machineconfiguration\\\\.openshift\\\\.io/currentConfig,REASON:metadata.annotations.machineconfiguration\\\\.openshift\\\\.io/reason\"'", "oc annotate network.config cluster network.openshift.io/network-type-migration-", "oc delete namespace openshift-ovn-kubernetes", "- op: add path: /spec/clusterNetwork/- value: 1 cidr: fd01::/48 hostPrefix: 64 - op: add path: /spec/serviceNetwork/- value: fd02::/112 2", "oc patch network.config.openshift.io cluster --type='json' --patch-file <file>.yaml", "network.config.openshift.io/cluster patched", "oc describe network", "Status: Cluster Network: Cidr: 10.128.0.0/14 Host Prefix: 23 Cidr: fd01::/48 Host Prefix: 64 Cluster Network MTU: 1400 Network Type: OVNKubernetes Service Network: 172.30.0.0/16 fd02::/112", "oc edit networks.config.openshift.io", "oc patch network.operator.openshift.io cluster --type='merge' -p='{\"spec\":{\"defaultNetwork\":{\"ovnKubernetesConfig\": {\"ipv4\":{\"internalJoinSubnet\": \"<join_subnet>\"}, \"ipv6\":{\"internalJoinSubnet\": \"<join_subnet>\"}}}}}'", "network.operator.openshift.io/cluster patched", "oc get network.operator.openshift.io -o jsonpath=\"{.items[0].spec.defaultNetwork}\"", "{ \"ovnKubernetesConfig\": { \"ipv4\": { \"internalJoinSubnet\": \"100.64.1.0/16\" }, }, \"type\": \"OVNKubernetes\" }", "oc patch network.operator.openshift.io cluster --type='merge' -p='{\"spec\":{\"defaultNetwork\":{\"ovnKubernetesConfig\": {\"ipv4\":{\"internalTransitSwitchSubnet\": \"<transit_subnet>\"}, \"ipv6\":{\"internalTransitSwitchSubnet\": \"<transit_subnet>\"}}}}}'", "network.operator.openshift.io/cluster patched", "oc get network.operator.openshift.io -o jsonpath=\"{.items[0].spec.defaultNetwork}\"", "{ \"ovnKubernetesConfig\": { \"ipv4\": { \"internalTransitSwitchSubnet\": \"100.88.1.0/16\" }, }, \"type\": \"OVNKubernetes\" }", "kind: Namespace apiVersion: v1 metadata: name: example1 annotations: k8s.ovn.org/acl-logging: |- { \"deny\": \"info\", \"allow\": \"info\" }", "<timestamp>|<message_serial>|acl_log(ovn_pinctrl0)|<severity>|name=\"<acl_name>\", verdict=\"<verdict>\", severity=\"<severity>\", direction=\"<direction>\": <flow>", "<proto>,vlan_tci=0x0000,dl_src=<src_mac>,dl_dst=<source_mac>,nw_src=<source_ip>,nw_dst=<target_ip>,nw_tos=<tos_dscp>,nw_ecn=<tos_ecn>,nw_ttl=<ip_ttl>,nw_frag=<fragment>,tp_src=<tcp_src_port>,tp_dst=<tcp_dst_port>,tcp_flags=<tcp_flags>", "2023-11-02T16:28:54.139Z|00004|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:Ingress\", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn 2023-11-02T16:28:55.187Z|00005|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:Ingress\", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn 2023-11-02T16:28:57.235Z|00006|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:Ingress\", verdict=drop, severity=alert, direction=to-lport: 
tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: policyAuditConfig: destination: \"null\" maxFileSize: 50 rateLimit: 20 syslogFacility: local0", "oc edit network.operator.openshift.io/cluster", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: policyAuditConfig: destination: \"null\" maxFileSize: 50 rateLimit: 20 syslogFacility: local0", "cat <<EOF| oc create -f - kind: Namespace apiVersion: v1 metadata: name: verify-audit-logging annotations: k8s.ovn.org/acl-logging: '{ \"deny\": \"alert\", \"allow\": \"alert\" }' EOF", "namespace/verify-audit-logging created", "cat <<EOF| oc create -n verify-audit-logging -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: deny-all spec: podSelector: matchLabels: policyTypes: - Ingress - Egress --- apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-same-namespace namespace: verify-audit-logging spec: podSelector: {} policyTypes: - Ingress - Egress ingress: - from: - podSelector: {} egress: - to: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: verify-audit-logging EOF", "networkpolicy.networking.k8s.io/deny-all created networkpolicy.networking.k8s.io/allow-from-same-namespace created", "cat <<EOF| oc create -n default -f - apiVersion: v1 kind: Pod metadata: name: client spec: containers: - name: client image: registry.access.redhat.com/rhel7/rhel-tools command: [\"/bin/sh\", \"-c\"] args: [\"sleep inf\"] EOF", "for name in client server; do cat <<EOF| oc create -n verify-audit-logging -f - apiVersion: v1 kind: Pod metadata: name: USD{name} spec: containers: - name: USD{name} image: registry.access.redhat.com/rhel7/rhel-tools command: [\"/bin/sh\", \"-c\"] args: [\"sleep inf\"] EOF done", "pod/client created pod/server created", "POD_IP=USD(oc get pods server -n verify-audit-logging -o jsonpath='{.status.podIP}')", "oc exec -it client -n default -- /bin/ping -c 2 USDPOD_IP", "PING 10.128.2.55 (10.128.2.55) 56(84) bytes of data. --- 10.128.2.55 ping statistics --- 2 packets transmitted, 0 received, 100% packet loss, time 2041ms", "oc exec -it client -n verify-audit-logging -- /bin/ping -c 2 USDPOD_IP", "PING 10.128.0.86 (10.128.0.86) 56(84) bytes of data. 
64 bytes from 10.128.0.86: icmp_seq=1 ttl=64 time=2.21 ms 64 bytes from 10.128.0.86: icmp_seq=2 ttl=64 time=0.440 ms --- 10.128.0.86 ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 1001ms rtt min/avg/max/mdev = 0.440/1.329/2.219/0.890 ms", "for pod in USD(oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-node --no-headers=true | awk '{ print USD1 }') ; do oc exec -it USDpod -n openshift-ovn-kubernetes -- tail -4 /var/log/ovn/acl-audit-log.log done", "2023-11-02T16:28:54.139Z|00004|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:Ingress\", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn 2023-11-02T16:28:55.187Z|00005|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:Ingress\", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn 2023-11-02T16:28:57.235Z|00006|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:Ingress\", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn 2023-11-02T16:49:57.909Z|00028|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:allow-from-same-namespace:Egress:0\", verdict=allow, severity=alert, direction=from-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0 2023-11-02T16:49:57.909Z|00029|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:allow-from-same-namespace:Ingress:0\", verdict=allow, severity=alert, direction=to-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0 2023-11-02T16:49:58.932Z|00030|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:allow-from-same-namespace:Egress:0\", verdict=allow, severity=alert, direction=from-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0 2023-11-02T16:49:58.932Z|00031|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:allow-from-same-namespace:Ingress:0\", verdict=allow, severity=alert, direction=to-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0", "oc annotate namespace <namespace> k8s.ovn.org/acl-logging='{ \"deny\": \"alert\", \"allow\": \"notice\" }'", "kind: Namespace apiVersion: v1 metadata: name: <namespace> annotations: k8s.ovn.org/acl-logging: |- { \"deny\": \"alert\", \"allow\": \"notice\" }", "namespace/verify-audit-logging annotated", "for pod in USD(oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-node --no-headers=true | awk '{ print USD1 }') ; do oc exec -it USDpod -n openshift-ovn-kubernetes -- tail -4 /var/log/ovn/acl-audit-log.log done", "2023-11-02T16:49:57.909Z|00028|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:allow-from-same-namespace:Egress:0\", 
verdict=allow, severity=alert, direction=from-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0 2023-11-02T16:49:57.909Z|00029|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:allow-from-same-namespace:Ingress:0\", verdict=allow, severity=alert, direction=to-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0 2023-11-02T16:49:58.932Z|00030|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:allow-from-same-namespace:Egress:0\", verdict=allow, severity=alert, direction=from-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0 2023-11-02T16:49:58.932Z|00031|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:allow-from-same-namespace:Ingress:0\", verdict=allow, severity=alert, direction=to-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0", "oc annotate --overwrite namespace <namespace> k8s.ovn.org/acl-logging-", "kind: Namespace apiVersion: v1 metadata: name: <namespace> annotations: k8s.ovn.org/acl-logging: null", "namespace/verify-audit-logging annotated", "oc patch networks.operator.openshift.io cluster --type=merge -p '{ \"spec\":{ \"defaultNetwork\":{ \"ovnKubernetesConfig\":{ \"ipsecConfig\":{ \"mode\":<mode> }}}}}'", "oc get pods -n openshift-ovn-kubernetes -l=app=ovnkube-node", "ovnkube-node-5xqbf 8/8 Running 0 28m ovnkube-node-6mwcx 8/8 Running 0 29m ovnkube-node-ck5fr 8/8 Running 0 31m ovnkube-node-fr4ld 8/8 Running 0 26m ovnkube-node-wgs4l 8/8 Running 0 33m ovnkube-node-zfvcl 8/8 Running 0 34m", "oc -n openshift-ovn-kubernetes rsh ovnkube-node-<XXXXX> ovn-nbctl --no-leader-only get nb_global . 
ipsec", "true", "oc get nodes", "apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: ipsec-config spec: nodeSelector: kubernetes.io/hostname: \"<hostname>\" 1 desiredState: interfaces: - name: <interface_name> 2 type: ipsec libreswan: left: <cluster_node> 3 leftid: '%fromcert' leftrsasigkey: '%cert' leftcert: left_server leftmodecfgclient: false right: <external_host> 4 rightid: '%fromcert' rightrsasigkey: '%cert' rightsubnet: <external_address>/32 5 ikev2: insist type: transport", "apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: ipsec-config spec: nodeSelector: kubernetes.io/hostname: \"<hostname>\" 1 desiredState: interfaces: - name: <interface_name> 2 type: ipsec libreswan: left: <cluster_node> 3 leftid: '%fromcert' leftmodecfgclient: false leftrsasigkey: '%cert' leftcert: left_server right: <external_host> 4 rightid: '%fromcert' rightrsasigkey: '%cert' rightsubnet: <external_address>/32 5 ikev2: insist type: tunnel", "oc create -f ipsec-config.yaml", "for role in master worker; do cat >> \"99-ipsec-USD{role}-endpoint-config.bu\" <<-EOF variant: openshift version: 4.15.0 metadata: name: 99-USD{role}-import-certs labels: machineconfiguration.openshift.io/role: USDrole systemd: units: - name: ipsec-import.service enabled: true contents: | [Unit] Description=Import external certs into ipsec NSS Before=ipsec.service [Service] Type=oneshot ExecStart=/usr/local/bin/ipsec-addcert.sh RemainAfterExit=false StandardOutput=journal [Install] WantedBy=multi-user.target storage: files: - path: /etc/pki/certs/ca.pem mode: 0400 overwrite: true contents: local: ca.pem - path: /etc/pki/certs/left_server.p12 mode: 0400 overwrite: true contents: local: left_server.p12 - path: /usr/local/bin/ipsec-addcert.sh mode: 0740 overwrite: true contents: inline: | #!/bin/bash -e echo \"importing cert to NSS\" certutil -A -n \"CA\" -t \"CT,C,C\" -d /var/lib/ipsec/nss/ -i /etc/pki/certs/ca.pem pk12util -W \"\" -i /etc/pki/certs/left_server.p12 -d /var/lib/ipsec/nss/ certutil -M -n \"left_server\" -t \"u,u,u\" -d /var/lib/ipsec/nss/ EOF done", "for role in master worker; do butane -d . 
99-ipsec-USD{role}-endpoint-config.bu -o ./99-ipsec-USDrole-endpoint-config.yaml done", "for role in master worker; do oc apply -f 99-ipsec-USD{role}-endpoint-config.yaml done", "oc get mcp", "oc get mc | grep ipsec", "80-ipsec-master-extensions 3.2.0 6d15h 80-ipsec-worker-extensions 3.2.0 6d15h", "oc get mcp master -o yaml | grep 80-ipsec-master-extensions -c", "2", "oc get mcp worker -o yaml | grep 80-ipsec-worker-extensions -c", "2", "kind: NodeNetworkConfigurationPolicy apiVersion: nmstate.io/v1 metadata: name: <name> spec: nodeSelector: kubernetes.io/hostname: <node_name> desiredState: interfaces: - name: <tunnel_name> type: ipsec state: absent", "oc apply -f remove-ipsec-tunnel.yaml", "oc patch networks.operator.openshift.io cluster --type=merge -p '{ \"spec\":{ \"defaultNetwork\":{ \"ovnKubernetesConfig\":{ \"ipsecConfig\":{ \"mode\":\"Disabled\" }}}}}'", "from: namespaceSelector: matchLabels: kubernetes.io/metadata.name: novxlan-externalgw-ecmp-4059", "apiVersion: k8s.ovn.org/v1 kind: AdminPolicyBasedExternalRoute metadata: name: default-route-policy spec: from: namespaceSelector: matchLabels: kubernetes.io/metadata.name: novxlan-externalgw-ecmp-4059 nextHops: static: - ip: \"172.18.0.8\" - ip: \"172.18.0.9\"", "apiVersion: k8s.ovn.org/v1 kind: AdminPolicyBasedExternalRoute metadata: name: shadow-traffic-policy spec: from: namespaceSelector: matchLabels: externalTraffic: \"\" nextHops: dynamic: - podSelector: matchLabels: gatewayPod: \"\" namespaceSelector: matchLabels: shadowTraffic: \"\" networkAttachmentName: shadow-gateway - podSelector: matchLabels: gigabyteGW: \"\" namespaceSelector: matchLabels: gatewayNamespace: \"\" networkAttachmentName: gateway", "apiVersion: k8s.ovn.org/v1 kind: AdminPolicyBasedExternalRoute metadata: name: multi-hop-policy spec: from: namespaceSelector: matchLabels: trafficType: \"egress\" nextHops: static: - ip: \"172.18.0.8\" - ip: \"172.18.0.9\" dynamic: - podSelector: matchLabels: gatewayPod: \"\" namespaceSelector: matchLabels: egressTraffic: \"\" networkAttachmentName: gigabyte", "oc create -f <file>.yaml", "adminpolicybasedexternalroute.k8s.ovn.org/default-route-policy created", "oc describe apbexternalroute <name> | tail -n 6", "Status: Last Transition Time: 2023-04-24T15:09:01Z Messages: Configured external gateway IPs: 172.18.0.8 Status: Success Events: <none>", "apiVersion: k8s.ovn.org/v1 kind: EgressFirewall metadata: name: default namespace: <namespace> 1 spec: egress: - to: cidrSelector: <api_server_address_range> 2 type: Allow - to: cidrSelector: 0.0.0.0/0 3 type: Deny", "apiVersion: k8s.ovn.org/v1 kind: EgressFirewall metadata: name: <name> 1 spec: egress: 2", "egress: - type: <type> 1 to: 2 cidrSelector: <cidr> 3 dnsName: <dns_name> 4 nodeSelector: <label_name>: <label_value> 5 ports: 6", "ports: - port: <port> 1 protocol: <protocol> 2", "apiVersion: k8s.ovn.org/v1 kind: EgressFirewall metadata: name: default spec: egress: 1 - type: Allow to: cidrSelector: 1.2.3.0/24 - type: Deny to: cidrSelector: 0.0.0.0/0", "apiVersion: k8s.ovn.org/v1 kind: EgressFirewall metadata: name: default spec: egress: - type: Deny to: cidrSelector: 172.16.1.1/32 ports: - port: 80 protocol: TCP - port: 443", "apiVersion: k8s.ovn.org/v1 kind: EgressFirewall metadata: name: default spec: egress: - to: nodeSelector: matchLabels: region: east type: Allow", "oc create -f <policy_name>.yaml -n <project>", "oc create -f default.yaml -n project1", "egressfirewall.k8s.ovn.org/v1 created", "oc get egressfirewall --all-namespaces", "oc describe egressfirewall 
<policy_name>", "Name: default Namespace: project1 Created: 20 minutes ago Labels: <none> Annotations: <none> Rule: Allow to 1.2.3.0/24 Rule: Allow to www.example.com Rule: Deny to 0.0.0.0/0", "oc get -n <project> egressfirewall", "oc get -n <project> egressfirewall <name> -o yaml > <filename>.yaml", "oc replace -f <filename>.yaml", "oc get -n <project> egressfirewall", "oc delete -n <project> egressfirewall <name>", "IP capacity = public cloud default capacity - sum(current IP assignments)", "cloud.network.openshift.io/egress-ipconfig: [ { \"interface\":\"eni-078d267045138e436\", \"ifaddr\":{\"ipv4\":\"10.0.128.0/18\"}, \"capacity\":{\"ipv4\":14,\"ipv6\":15} } ]", "cloud.network.openshift.io/egress-ipconfig: [ { \"interface\":\"nic0\", \"ifaddr\":{\"ipv4\":\"10.0.128.0/18\"}, \"capacity\":{\"ip\":14} } ]", "ip -details link show", "spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 defaultNetwork: ovnKubernetesConfig: gatewayConfig: ipForwarding: Global", "apiVersion: v1 kind: Namespace metadata: name: namespace1 labels: env: prod --- apiVersion: v1 kind: Namespace metadata: name: namespace2 labels: env: prod", "apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: name: egressips-prod spec: egressIPs: - 192.168.126.10 - 192.168.126.102 namespaceSelector: matchLabels: env: prod status: items: - node: node1 egressIP: 192.168.126.10 - node: node3 egressIP: 192.168.126.102", "apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: name: <name> 1 spec: egressIPs: 2 - <ip_address> namespaceSelector: 3 podSelector: 4", "namespaceSelector: 1 matchLabels: <label_name>: <label_value>", "podSelector: 1 matchLabels: <label_name>: <label_value>", "apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: name: egress-group1 spec: egressIPs: - 192.168.126.11 - 192.168.126.102 podSelector: matchLabels: app: web namespaceSelector: matchLabels: env: prod", "apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: name: egress-group2 spec: egressIPs: - 192.168.127.30 - 192.168.127.40 namespaceSelector: matchExpressions: - key: environment operator: NotIn values: - development", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 defaultNetwork: ovnKubernetesConfig: egressIPConfig: 1 reachabilityTotalTimeoutSeconds: 5 2 gatewayConfig: routingViaHost: false genevePort: 6081", "oc label nodes <node_name> k8s.ovn.org/egress-assignable=\"\" 1", "apiVersion: v1 kind: Node metadata: labels: k8s.ovn.org/egress-assignable: \"\" name: <node_name>", "apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: name: egress-project1 spec: egressIPs: - 192.168.127.10 - 192.168.127.11 namespaceSelector: matchLabels: env: qa", "oc apply -f <egressips_name>.yaml 1", "egressips.k8s.ovn.org/<egressips_name> created", "oc label ns <namespace> env=qa 1", "oc get egressip -o yaml", "spec: egressIPs: - 192.168.127.10 - 192.168.127.11", "apiVersion: k8s.ovn.org/v1 kind: EgressService metadata: name: <egress_service_name> 1 namespace: <namespace> 2 spec: sourceIPBy: <egress_traffic_ip> 3 nodeSelector: 4 matchLabels: node-role.kubernetes.io/<role>: \"\" network: <egress_traffic_network> 5", "apiVersion: k8s.ovn.org/v1 kind: EgressService metadata: name: test-egress-service namespace: test-namespace spec: sourceIPBy: \"LoadBalancerIP\" nodeSelector: matchLabels: vrf: \"true\" network: \"2\"", "apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: example-pool namespace: metallb-system spec: addresses: - 172.19.0.100/32", "oc apply -f ip-addr-pool.yaml", 
"apiVersion: v1 kind: Service metadata: name: example-service namespace: example-namespace annotations: metallb.universe.tf/address-pool: example-pool 1 spec: selector: app: example ports: - name: http protocol: TCP port: 8080 targetPort: 8080 type: LoadBalancer --- apiVersion: k8s.ovn.org/v1 kind: EgressService metadata: name: example-service namespace: example-namespace spec: sourceIPBy: \"LoadBalancerIP\" 2 nodeSelector: 3 matchLabels: node-role.kubernetes.io/worker: \"\"", "oc apply -f service-egress-service.yaml", "apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: example-bgp-adv namespace: metallb-system spec: ipAddressPools: - example-pool nodeSelector: - matchLabels: egress-service.k8s.ovn.org/example-namespace-example-service: \"\" 1", "curl <external_ip_address>:<port_number> 1", "curl <router_service_IP> <port>", "openstack port set --allowed-address ip_address=<ip_address>,mac_address=<mac_address> <neutron_port_uuid>", "apiVersion: v1 kind: Service metadata: name: app-egress spec: ports: - name: tcp-8080 protocol: TCP port: 8080 - name: tcp-8443 protocol: TCP port: 8443 - name: udp-80 protocol: UDP port: 80 type: ClusterIP selector: app: egress-router-cni", "apiVersion: network.operator.openshift.io/v1 kind: EgressRouter metadata: name: <egress_router_name> namespace: <namespace> 1 spec: addresses: [ 2 { ip: \"<egress_router>\", 3 gateway: \"<egress_gateway>\" 4 } ] mode: Redirect redirect: { redirectRules: [ 5 { destinationIP: \"<egress_destination>\", port: <egress_router_port>, targetPort: <target_port>, 6 protocol: <network_protocol> 7 }, ], fallbackIP: \"<egress_destination>\" 8 }", "apiVersion: network.operator.openshift.io/v1 kind: EgressRouter metadata: name: egress-router-redirect spec: networkInterface: { macvlan: { mode: \"Bridge\" } } addresses: [ { ip: \"192.168.12.99/24\", gateway: \"192.168.12.1\" } ] mode: Redirect redirect: { redirectRules: [ { destinationIP: \"10.0.0.99\", port: 80, protocol: UDP }, { destinationIP: \"203.0.113.26\", port: 8080, targetPort: 80, protocol: TCP }, { destinationIP: \"203.0.113.27\", port: 8443, targetPort: 443, protocol: TCP } ] }", "apiVersion: v1 kind: Service metadata: name: egress-1 spec: ports: - name: web-app protocol: TCP port: 8080 type: ClusterIP selector: app: egress-router-cni 1", "oc get network-attachment-definition egress-router-cni-nad", "NAME AGE egress-router-cni-nad 18m", "oc get deployment egress-router-cni-deployment", "NAME READY UP-TO-DATE AVAILABLE AGE egress-router-cni-deployment 1/1 1 1 18m", "oc get pods -l app=egress-router-cni", "NAME READY STATUS RESTARTS AGE egress-router-cni-deployment-575465c75c-qkq6m 1/1 Running 0 18m", "POD_NODENAME=USD(oc get pod -l app=egress-router-cni -o jsonpath=\"{.items[0].spec.nodeName}\")", "oc debug node/USDPOD_NODENAME", "chroot /host", "cat /tmp/egress-router-log", "2021-04-26T12:27:20Z [debug] Called CNI ADD 2021-04-26T12:27:20Z [debug] Gateway: 192.168.12.1 2021-04-26T12:27:20Z [debug] IP Source Addresses: [192.168.12.99/24] 2021-04-26T12:27:20Z [debug] IP Destinations: [80 UDP 10.0.0.99/30 8080 TCP 203.0.113.26/30 80 8443 TCP 203.0.113.27/30 443] 2021-04-26T12:27:20Z [debug] Created macvlan interface 2021-04-26T12:27:20Z [debug] Renamed macvlan to \"net1\" 2021-04-26T12:27:20Z [debug] Adding route to gateway 192.168.12.1 on macvlan interface 2021-04-26T12:27:20Z [debug] deleted default route {Ifindex: 3 Dst: <nil> Src: <nil> Gw: 10.128.10.1 Flags: [] Table: 254} 2021-04-26T12:27:20Z [debug] Added new default route with gateway 192.168.12.1 
2021-04-26T12:27:20Z [debug] Added iptables rule: iptables -t nat PREROUTING -i eth0 -p UDP --dport 80 -j DNAT --to-destination 10.0.0.99 2021-04-26T12:27:20Z [debug] Added iptables rule: iptables -t nat PREROUTING -i eth0 -p TCP --dport 8080 -j DNAT --to-destination 203.0.113.26:80 2021-04-26T12:27:20Z [debug] Added iptables rule: iptables -t nat PREROUTING -i eth0 -p TCP --dport 8443 -j DNAT --to-destination 203.0.113.27:443 2021-04-26T12:27:20Z [debug] Added iptables rule: iptables -t nat -o net1 -j SNAT --to-source 192.168.12.99", "crictl ps --name egress-router-cni-pod | awk '{print USD1}'", "CONTAINER bac9fae69ddb6", "crictl inspect -o yaml bac9fae69ddb6 | grep 'pid:' | awk '{print USD2}'", "68857", "nsenter -n -t 68857", "ip route", "default via 192.168.12.1 dev net1 10.128.10.0/23 dev eth0 proto kernel scope link src 10.128.10.18 192.168.12.0/24 dev net1 proto kernel scope link src 192.168.12.99 192.168.12.1 dev net1", "oc annotate namespace <namespace> k8s.ovn.org/multicast-enabled=true", "apiVersion: v1 kind: Namespace metadata: name: <namespace> annotations: k8s.ovn.org/multicast-enabled: \"true\"", "oc project <project>", "cat <<EOF| oc create -f - apiVersion: v1 kind: Pod metadata: name: mlistener labels: app: multicast-verify spec: containers: - name: mlistener image: registry.access.redhat.com/ubi9 command: [\"/bin/sh\", \"-c\"] args: [\"dnf -y install socat hostname && sleep inf\"] ports: - containerPort: 30102 name: mlistener protocol: UDP EOF", "cat <<EOF| oc create -f - apiVersion: v1 kind: Pod metadata: name: msender labels: app: multicast-verify spec: containers: - name: msender image: registry.access.redhat.com/ubi9 command: [\"/bin/sh\", \"-c\"] args: [\"dnf -y install socat && sleep inf\"] EOF", "POD_IP=USD(oc get pods mlistener -o jsonpath='{.status.podIP}')", "oc exec mlistener -i -t -- socat UDP4-RECVFROM:30102,ip-add-membership=224.1.0.1:USDPOD_IP,fork EXEC:hostname", "CIDR=USD(oc get Network.config.openshift.io cluster -o jsonpath='{.status.clusterNetwork[0].cidr}')", "oc exec msender -i -t -- /bin/bash -c \"echo | socat STDIO UDP4-DATAGRAM:224.1.0.1:30102,range=USDCIDR,ip-multicast-ttl=64\"", "mlistener", "oc annotate namespace <namespace> \\ 1 k8s.ovn.org/multicast-enabled-", "apiVersion: v1 kind: Namespace metadata: name: <namespace> annotations: k8s.ovn.org/multicast-enabled: null", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: exportNetworkFlows: netFlow: collectors: - 192.168.1.99:2056", "spec: exportNetworkFlows: netFlow: collectors: - 192.168.1.99:2056", "oc patch network.operator cluster --type merge -p \"USD(cat <file_name>.yaml)\"", "network.operator.openshift.io/cluster patched", "oc get network.operator cluster -o jsonpath=\"{.spec.exportNetworkFlows}\"", "{\"netFlow\":{\"collectors\":[\"192.168.1.99:2056\"]}}", "for pod in USD(oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-node -o jsonpath='{[email protected][*]}{.metadata.name}{\"\\n\"}{end}'); do ; echo; echo USDpod; oc -n openshift-ovn-kubernetes exec -c ovnkube-controller USDpod -- bash -c 'for type in ipfix sflow netflow ; do ovs-vsctl find USDtype ; done'; done", "ovnkube-node-xrn4p _uuid : a4d2aaca-5023-4f3d-9400-7275f92611f9 active_timeout : 60 add_id_to_interface : false engine_id : [] engine_type : [] external_ids : {} targets : [\"192.168.1.99:2056\"] ovnkube-node-z4vq9 _uuid : 61d02fdb-9228-4993-8ff5-b27f01a29bd6 active_timeout : 60 add_id_to_interface : false engine_id : [] engine_type : [] external_ids : {} targets : 
[\"192.168.1.99:2056\"]-", "oc patch network.operator cluster --type='json' -p='[{\"op\":\"remove\", \"path\":\"/spec/exportNetworkFlows\"}]'", "network.operator.openshift.io/cluster patched", "oc patch networks.operator.openshift.io cluster --type=merge -p '{ \"spec\":{ \"defaultNetwork\":{ \"ovnKubernetesConfig\":{ \"hybridOverlayConfig\":{ \"hybridClusterNetwork\":[ { \"cidr\": \"<cidr>\", \"hostPrefix\": <prefix> } ], \"hybridOverlayVXLANPort\": <overlay_port> } } } } }'", "network.operator.openshift.io/cluster patched", "oc get network.operator.openshift.io -o jsonpath=\"{.items[0].spec.defaultNetwork.ovnKubernetesConfig}\"" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/networking/ovn-kubernetes-network-plugin
Chapter 1. Preparing to deploy OpenShift Data Foundation
Chapter 1. Preparing to deploy OpenShift Data Foundation When you deploy OpenShift Data Foundation on OpenShift Container Platform using local storage devices, you can create internal cluster resources. This approach internally provisions base services so that all the applications can access additional storage classes. Before you begin the deployment of Red Hat OpenShift Data Foundation using local storage devices, ensure that you meet the resource requirements. See Requirements for installing OpenShift Data Foundation using local storage devices . Optional: If you want to enable cluster-wide encryption using an external Key Management System (KMS), follow these steps: Ensure that you have a valid Red Hat OpenShift Data Foundation Advanced subscription. To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . When the Token authentication method is selected for encryption, refer to Enabling cluster-wide encryption with the Token authentication using KMS . When the Kubernetes authentication method is selected for encryption, refer to Enabling cluster-wide encryption with KMS using the Kubernetes authentication method . Ensure that you are using signed certificates on your vault servers. After you have addressed the above, perform the following steps: Install the Local Storage Operator . Install the Red Hat OpenShift Data Foundation Operator . Create the OpenShift Data Foundation cluster on bare metal . 1.1. Requirements for installing OpenShift Data Foundation using local storage devices Node requirements The cluster must consist of at least three OpenShift Container Platform worker or infrastructure nodes with locally attached storage devices on each of them. Each of the three selected nodes must have at least one raw block device available. OpenShift Data Foundation uses one or more of the available raw block devices. The devices you use must be empty; that is, there must be no Physical Volumes (PVs), Volume Groups (VGs), or Logical Volumes (LVs) remaining on the disks. For more information, see the Resource requirements section in the Planning guide . Disaster recovery requirements Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites to successfully implement a disaster recovery solution: A valid Red Hat OpenShift Data Foundation Advanced subscription. A valid Red Hat Advanced Cluster Management (RHACM) for Kubernetes subscription. To know in detail how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . For detailed disaster recovery solution requirements, see Configuring OpenShift Data Foundation Disaster Recovery for OpenShift Workloads guide, and Requirements and recommendations section of the Install guide in Red Hat Advanced Cluster Management for Kubernetes documentation. Arbiter stretch cluster requirements In this case, a single cluster is stretched across two zones with a third zone as the location for the arbiter. This solution is currently intended for on-premises deployments of OpenShift Container Platform within the same data center. This solution is not recommended for deployments stretching over multiple data centers. Instead, consider Metro-DR as a first option for a no-data-loss DR solution deployed over multiple data centers with low-latency networks.
To know in detail how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . Note You cannot enable Flexible scaling and Arbiter both at the same time as they have conflicting scaling logic. With Flexible scaling, you can add one node at a time to your OpenShift Data Foundation cluster. Whereas, in an Arbiter cluster, you need to add at least one node in each of the two data zones. Compact mode requirements You can install OpenShift Data Foundation on a three-node OpenShift compact bare-metal cluster, where all the workloads run on three strong master nodes. There are no worker or storage nodes. To configure OpenShift Container Platform in compact mode, see the Configuring a three-node cluster section of the Installing guide in OpenShift Container Platform documentation, and Delivering a Three-node Architecture for Edge Deployments . Minimum starting node requirements An OpenShift Data Foundation cluster is deployed with a minimum configuration when the resource requirement for a standard deployment is not met. For more information, see the Resource requirements section in the Planning guide .
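For example, before you select devices you can confirm from a debug shell on each node that a candidate disk is an empty raw block device. The following commands are a minimal sketch; the node and device names (<node_name> and /dev/sdb) are placeholders for your environment:
oc debug node/<node_name>
chroot /host
lsblk /dev/sdb     # confirm the disk has no partitions or child devices
wipefs /dev/sdb    # empty output means no filesystem, PV, VG, or LV signatures remain
If both checks come back clean, the disk is a suitable candidate for OpenShift Data Foundation.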
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/deploying_openshift_data_foundation_using_bare_metal_infrastructure/preparing_to_deploy_openshift_data_foundation
2.6. Configuring Services on the Real Servers
2.6. Configuring Services on the Real Servers If the real servers are Red Hat Enterprise Linux systems, set the appropriate server daemons to activate at boot time. These daemons can include httpd for Web services or xinetd for FTP or Telnet services. It may also be useful to access the real servers remotely, so the sshd daemon should also be installed and running.
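For example, on a Red Hat Enterprise Linux real server you can enable and start these daemons with the chkconfig and service commands. This is a minimal sketch; substitute the daemons your virtual service actually requires:
chkconfig httpd on      # start the Apache HTTP Server at boot
chkconfig sshd on       # start the OpenSSH daemon at boot for remote administration
service httpd start
service sshd start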
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/virtual_server_administration/s1-lvs-server-daemons-vsa
Chapter 15. Improving cluster stability in high latency environments using worker latency profiles
Chapter 15. Improving cluster stability in high latency environments using worker latency profiles If the cluster administrator has performed latency tests for platform verification, they can discover the need to adjust the operation of the cluster to ensure stability in cases of high latency. The cluster administrator needs to change only one parameter, recorded in a file, which controls four parameters affecting how supervisory processes read status and interpret the health of the cluster. Changing only the one parameter provides cluster tuning in an easy, supportable manner. The Kubelet process provides the starting point for monitoring cluster health. The Kubelet sets status values for all nodes in the OpenShift Container Platform cluster. The Kubernetes Controller Manager ( kube controller ) reads the status values every 10 seconds, by default. If the kube controller cannot read a node status value, it loses contact with that node after a configured period. The default behavior is: The node controller on the control plane updates the node health to Unhealthy and marks the node's Ready condition as Unknown . In response, the scheduler stops scheduling pods to that node. The Node Lifecycle Controller adds a node.kubernetes.io/unreachable taint with a NoExecute effect to the node and schedules any pods on the node for eviction after five minutes, by default. This behavior can cause problems if your network is prone to latency issues, especially if you have nodes at the network edge. In some cases, the Kubernetes Controller Manager might not receive an update from a healthy node due to network latency. Pods are then evicted from the node even though the node is healthy. To avoid this problem, you can use worker latency profiles to adjust the frequency of Kubelet status updates and the length of time that the Kubernetes Controller Manager waits for those updates before taking action. These adjustments help to ensure that your cluster runs properly if network latency between the control plane and the worker nodes is not optimal. These worker latency profiles contain three sets of parameters that are pre-defined with carefully tuned values to control the reaction of the cluster to increased latency, so you do not need to determine the best values experimentally. You can configure worker latency profiles when installing a cluster or at any time you notice increased latency in your cluster network. 15.1. Understanding worker latency profiles Worker latency profiles are built around four carefully tuned parameters: node-status-update-frequency , node-monitor-grace-period , default-not-ready-toleration-seconds and default-unreachable-toleration-seconds . These parameters use pre-defined values that allow you to control the reaction of the cluster to latency issues without needing to determine the best values manually. Important Setting these parameters manually is not supported. Incorrect parameter settings adversely affect cluster stability. All worker latency profiles configure the following parameters: node-status-update-frequency Specifies how often the kubelet posts node status to the API server. node-monitor-grace-period Specifies the amount of time in seconds that the Kubernetes Controller Manager waits for an update from a kubelet before marking the node unhealthy and adding the node.kubernetes.io/not-ready or node.kubernetes.io/unreachable taint to the node.
default-not-ready-toleration-seconds Specifies the amount of time in seconds after marking a node unhealthy that the Kube API Server Operator waits before evicting pods from that node. default-unreachable-toleration-seconds Specifies the amount of time in seconds after marking a node unreachable that the Kube API Server Operator waits before evicting pods from that node. The following Operators monitor the changes to the worker latency profiles and respond accordingly: The Machine Config Operator (MCO) updates the node-status-update-frequency parameter on the worker nodes. The Kubernetes Controller Manager updates the node-monitor-grace-period parameter on the control plane nodes. The Kubernetes API Server Operator updates the default-not-ready-toleration-seconds and default-unreachable-toleration-seconds parameters on the control plane nodes. Although the default configuration works in most cases, OpenShift Container Platform offers two other worker latency profiles for situations where the network is experiencing higher latency than usual. The three worker latency profiles are described in the following sections: Default worker latency profile With the Default profile, each Kubelet updates its status every 10 seconds ( node-status-update-frequency ). The Kube Controller Manager checks the Kubelet statuses every 5 seconds. The Kubernetes Controller Manager waits 40 seconds ( node-monitor-grace-period ) for a status update from the Kubelet before considering the Kubelet unhealthy. If no status is made available to the Kubernetes Controller Manager, it then marks the node with the node.kubernetes.io/not-ready or node.kubernetes.io/unreachable taint and evicts the pods on that node. If a pod on that node has a toleration for the NoExecute taint, the pod continues to run for the period specified by tolerationSeconds before it is evicted. If the pod has no such toleration, it is evicted after 300 seconds ( the default-not-ready-toleration-seconds and default-unreachable-toleration-seconds settings of the Kube API Server ). Profile Component Parameter Value Default kubelet node-status-update-frequency 10s Kubelet Controller Manager node-monitor-grace-period 40s Kubernetes API Server Operator default-not-ready-toleration-seconds 300s Kubernetes API Server Operator default-unreachable-toleration-seconds 300s Medium worker latency profile Use the MediumUpdateAverageReaction profile if the network latency is slightly higher than usual. The MediumUpdateAverageReaction profile reduces the frequency of kubelet updates to 20 seconds and changes the period that the Kubernetes Controller Manager waits for those updates to 2 minutes. The pod eviction period for a pod on that node is reduced to 60 seconds. If the pod has the tolerationSeconds parameter, the eviction waits for the period specified by that parameter. The Kubernetes Controller Manager waits for 2 minutes to consider a node unhealthy. In another minute, the eviction process starts. Profile Component Parameter Value MediumUpdateAverageReaction kubelet node-status-update-frequency 20s Kubelet Controller Manager node-monitor-grace-period 2m Kubernetes API Server Operator default-not-ready-toleration-seconds 60s Kubernetes API Server Operator default-unreachable-toleration-seconds 60s Low worker latency profile Use the LowUpdateSlowReaction profile if the network latency is extremely high. The LowUpdateSlowReaction profile reduces the frequency of kubelet updates to 1 minute and changes the period that the Kubernetes Controller Manager waits for those updates to 5 minutes.
The pod eviction period for a pod on that node is reduced to 60 seconds. If the pod has the tolerationSeconds parameter, the eviction waits for the period specified by that parameter. The Kubernetes Controller Manager waits for 5 minutes to consider a node unhealthy. In another minute, the eviction process starts. Profile Component Parameter Value LowUpdateSlowReaction kubelet node-status-update-frequency 1m Kubelet Controller Manager node-monitor-grace-period 5m Kubernetes API Server Operator default-not-ready-toleration-seconds 60s Kubernetes API Server Operator default-unreachable-toleration-seconds 60s 15.2. Implementing worker latency profiles at cluster creation Important To edit the configuration of the installer, you will first need to use the command openshift-install create manifests to create the default node manifest as well as other manifest YAML files. This file structure must exist before we can add workerLatencyProfile. The platform on which you are installing may have varying requirements. Refer to the Installing section of the documentation for your specific platform. The workerLatencyProfile must be added to the manifest in the following sequence: Create the manifest needed to build the cluster, using a folder name appropriate for your installation. Create a YAML file to define config.node . The file must be in the manifests directory. When defining workerLatencyProfile in the manifest for the first time, specify any of the profiles at cluster creation time: Default , MediumUpdateAverageReaction or LowUpdateSlowReaction . Verification Here is an example manifest creation showing the spec.workerLatencyProfile Default value in the manifest file: USD openshift-install create manifests --dir=<cluster-install-dir> Edit the manifest and add the value. In this example we use vi to show an example manifest file with the "Default" workerLatencyProfile value added: USD vi <cluster-install-dir>/manifests/config-node-default-profile.yaml Example output apiVersion: config.openshift.io/v1 kind: Node metadata: name: cluster spec: workerLatencyProfile: "Default" 15.3. Using and changing worker latency profiles To change a worker latency profile to deal with network latency, edit the node.config object to add the name of the profile. You can change the profile at any time as latency increases or decreases. You must move one worker latency profile at a time. For example, you cannot move directly from the Default profile to the LowUpdateSlowReaction worker latency profile. You must move from the Default worker latency profile to the MediumUpdateAverageReaction profile first, then to LowUpdateSlowReaction . Similarly, when returning to the Default profile, you must move from the low profile to the medium profile first, then to Default . Note You can also configure worker latency profiles upon installing an OpenShift Container Platform cluster. 
Procedure To move from the default worker latency profile: Move to the medium worker latency profile: Edit the node.config object: USD oc edit nodes.config/cluster Add spec.workerLatencyProfile: MediumUpdateAverageReaction : Example node.config object apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: "true" include.release.openshift.io/self-managed-high-availability: "true" include.release.openshift.io/single-node-developer: "true" release.openshift.io/create-only: "true" creationTimestamp: "2022-07-08T16:02:51Z" generation: 1 name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: "1865" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: workerLatencyProfile: MediumUpdateAverageReaction 1 # ... 1 Specifies the medium worker latency policy. Scheduling on each worker node is disabled as the change is being applied. Optional: Move to the low worker latency profile: Edit the node.config object: USD oc edit nodes.config/cluster Change the spec.workerLatencyProfile value to LowUpdateSlowReaction : Example node.config object apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: "true" include.release.openshift.io/self-managed-high-availability: "true" include.release.openshift.io/single-node-developer: "true" release.openshift.io/create-only: "true" creationTimestamp: "2022-07-08T16:02:51Z" generation: 1 name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: "1865" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: workerLatencyProfile: LowUpdateSlowReaction 1 # ... 1 Specifies use of the low worker latency policy. Scheduling on each worker node is disabled as the change is being applied. Verification When all nodes return to the Ready condition, you can use the following command to look in the Kubernetes Controller Manager to ensure it was applied: USD oc get KubeControllerManager -o yaml | grep -i workerlatency -A 5 -B 5 Example output # ... - lastTransitionTime: "2022-07-11T19:47:10Z" reason: ProfileUpdated status: "False" type: WorkerLatencyProfileProgressing - lastTransitionTime: "2022-07-11T19:47:10Z" 1 message: all static pod revision(s) have updated latency profile reason: ProfileUpdated status: "True" type: WorkerLatencyProfileComplete - lastTransitionTime: "2022-07-11T19:20:11Z" reason: AsExpected status: "False" type: WorkerLatencyProfileDegraded - lastTransitionTime: "2022-07-11T19:20:36Z" status: "False" # ... 1 Specifies that the profile is applied and active. To change the medium profile to default or change the default to medium, edit the node.config object and set the spec.workerLatencyProfile parameter to the appropriate value. 15.4. Example steps for displaying resulting values of workerLatencyProfile You can display the values in the workerLatencyProfile with the following commands. 
Verification Check the default-not-ready-toleration-seconds and default-unreachable-toleration-seconds fields output by the Kube API Server: USD oc get KubeAPIServer -o yaml | grep -A 1 default- Example output default-not-ready-toleration-seconds: - "300" default-unreachable-toleration-seconds: - "300" Check the values of the node-monitor-grace-period field from the Kube Controller Manager: USD oc get KubeControllerManager -o yaml | grep -A 1 node-monitor Example output node-monitor-grace-period: - 40s Check the nodeStatusUpdateFrequency value from the Kubelet. Set the directory /host as the root directory within the debug shell. By changing the root directory to /host , you can run binaries contained in the host's executable paths: USD oc debug node/<worker-node-name> USD chroot /host # cat /etc/kubernetes/kubelet.conf|grep nodeStatusUpdateFrequency Example output "nodeStatusUpdateFrequency": "10s" These outputs validate the set of timing variables for the Worker Latency Profile.
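As an additional check after changing a profile, you can confirm that the worker nodes have returned to the Ready condition and that no not-ready or unreachable taints remain. This is a minimal sketch; <worker-node-name> is a placeholder:
oc get nodes
oc describe node/<worker-node-name> | grep -i taints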
[ "openshift-install create manifests --dir=<cluster-install-dir>", "vi <cluster-install-dir>/manifests/config-node-default-profile.yaml", "apiVersion: config.openshift.io/v1 kind: Node metadata: name: cluster spec: workerLatencyProfile: \"Default\"", "oc edit nodes.config/cluster", "apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: \"true\" include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" release.openshift.io/create-only: \"true\" creationTimestamp: \"2022-07-08T16:02:51Z\" generation: 1 name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: \"1865\" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: workerLatencyProfile: MediumUpdateAverageReaction 1", "oc edit nodes.config/cluster", "apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: \"true\" include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" release.openshift.io/create-only: \"true\" creationTimestamp: \"2022-07-08T16:02:51Z\" generation: 1 name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: \"1865\" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: workerLatencyProfile: LowUpdateSlowReaction 1", "oc get KubeControllerManager -o yaml | grep -i workerlatency -A 5 -B 5", "- lastTransitionTime: \"2022-07-11T19:47:10Z\" reason: ProfileUpdated status: \"False\" type: WorkerLatencyProfileProgressing - lastTransitionTime: \"2022-07-11T19:47:10Z\" 1 message: all static pod revision(s) have updated latency profile reason: ProfileUpdated status: \"True\" type: WorkerLatencyProfileComplete - lastTransitionTime: \"2022-07-11T19:20:11Z\" reason: AsExpected status: \"False\" type: WorkerLatencyProfileDegraded - lastTransitionTime: \"2022-07-11T19:20:36Z\" status: \"False\"", "oc get KubeAPIServer -o yaml | grep -A 1 default-", "default-not-ready-toleration-seconds: - \"300\" default-unreachable-toleration-seconds: - \"300\"", "oc get KubeControllerManager -o yaml | grep -A 1 node-monitor", "node-monitor-grace-period: - 40s", "oc debug node/<worker-node-name> chroot /host cat /etc/kubernetes/kubelet.conf|grep nodeStatusUpdateFrequency", "\"nodeStatusUpdateFrequency\": \"10s\"" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/scalability_and_performance/scaling-worker-latency-profiles
Chapter 11. Using Streams for Apache Kafka with MirrorMaker 2
Chapter 11. Using Streams for Apache Kafka with MirrorMaker 2 Use MirrorMaker 2 to replicate data between two or more active Kafka clusters, within or across data centers. To configure MirrorMaker 2, edit the config/connect-mirror-maker.properties configuration file. If required, you can enable distributed tracing for MirrorMaker 2 . Handling high volumes of messages You can tune the configuration to handle high volumes of messages. For more information, see Handling high volumes of messages . Note MirrorMaker 2 has features not supported by the version of MirrorMaker. However, you can configure MirrorMaker 2 to be used in legacy mode . 11.1. Configuring active/active or active/passive modes You can use MirrorMaker 2 in active/passive or active/active cluster configurations. active/active cluster configuration An active/active configuration has two active clusters replicating data bidirectionally. Applications can use either cluster. Each cluster can provide the same data. In this way, you can make the same data available in different geographical locations. As consumer groups are active in both clusters, consumer offsets for replicated topics are not synchronized back to the source cluster. active/passive cluster configuration An active/passive configuration has an active cluster replicating data to a passive cluster. The passive cluster remains on standby. You might use the passive cluster for data recovery in the event of system failure. The expectation is that producers and consumers connect to active clusters only. A MirrorMaker 2 cluster is required at each target destination. 11.1.1. Bidirectional replication (active/active) The MirrorMaker 2 architecture supports bidirectional replication in an active/active cluster configuration. Each cluster replicates the data of the other cluster using the concept of source and remote topics. As the same topics are stored in each cluster, remote topics are automatically renamed by MirrorMaker 2 to represent the source cluster. The name of the originating cluster is prepended to the name of the topic. Figure 11.1. Topic renaming By flagging the originating cluster, topics are not replicated back to that cluster. The concept of replication through remote topics is useful when configuring an architecture that requires data aggregation. Consumers can subscribe to source and remote topics within the same cluster, without the need for a separate aggregation cluster. 11.1.2. Unidirectional replication (active/passive) The MirrorMaker 2 architecture supports unidirectional replication in an active/passive cluster configuration. You can use an active/passive cluster configuration to make backups or migrate data to another cluster. In this situation, you might not want automatic renaming of remote topics. You can override automatic renaming by adding IdentityReplicationPolicy to the source connector configuration. With this configuration applied, topics retain their original names. 11.2. Configuring MirrorMaker 2 connectors Use MirrorMaker 2 connector configuration for the internal connectors that orchestrate the synchronization of data between Kafka clusters. MirrorMaker 2 consists of the following connectors: MirrorSourceConnector The source connector replicates topics from a source cluster to a target cluster. It also replicates ACLs and is necessary for the MirrorCheckpointConnector to run. MirrorCheckpointConnector The checkpoint connector periodically tracks offsets. 
If enabled, it also synchronizes consumer group offsets between the source and target cluster. MirrorHeartbeatConnector The heartbeat connector periodically checks connectivity between the source and target cluster. The following table describes connector properties and the connectors you configure to use them. Table 11.1. MirrorMaker 2 connector configuration properties Property sourceConnector checkpointConnector heartbeatConnector admin.timeout.ms Timeout for admin tasks, such as detecting new topics. Default is 60000 (1 minute). [✓] [✓] [✓] replication.policy.class Policy to define the remote topic naming convention. Default is org.apache.kafka.connect.mirror.DefaultReplicationPolicy . [✓] [✓] [✓] replication.policy.separator The separator used for topic naming in the target cluster. By default, the separator is set to a dot (.). Separator configuration is only applicable to the DefaultReplicationPolicy replication policy class, which defines remote topic names. The IdentityReplicationPolicy class does not use the property as topics retain their original names. [✓] [✓] [✓] consumer.poll.timeout.ms Timeout when polling the source cluster. Default is 1000 (1 second). [✓] [✓] offset-syncs.topic.location The location of the offset-syncs topic, which can be the source (default) or target cluster. [✓] [✓] topic.filter.class Topic filter to select the topics to replicate. Default is org.apache.kafka.connect.mirror.DefaultTopicFilter . [✓] [✓] config.property.filter.class Topic filter to select the topic configuration properties to replicate. Default is org.apache.kafka.connect.mirror.DefaultConfigPropertyFilter . [✓] config.properties.exclude Topic configuration properties that should not be replicated. Supports comma-separated property names and regular expressions. [✓] offset.lag.max Maximum allowable (out-of-sync) offset lag before a remote partition is synchronized. Default is 100 . [✓] offset-syncs.topic.replication.factor Replication factor for the internal offset-syncs topic. Default is 3 . [✓] refresh.topics.enabled Enables check for new topics and partitions. Default is true . [✓] refresh.topics.interval.seconds Frequency of topic refresh. Default is 600 (10 minutes). By default, a check for new topics in the source cluster is made every 10 minutes. You can change the frequency by adding refresh.topics.interval.seconds to the source connector configuration. [✓] replication.factor The replication factor for new topics. Default is 2 . [✓] sync.topic.acls.enabled Enables synchronization of ACLs from the source cluster. Default is true . For more information, see Section 11.5, "ACL rules synchronization" . [✓] sync.topic.acls.interval.seconds Frequency of ACL synchronization. Default is 600 (10 minutes). [✓] sync.topic.configs.enabled Enables synchronization of topic configuration from the source cluster. Default is true . [✓] sync.topic.configs.interval.seconds Frequency of topic configuration synchronization. Default 600 (10 minutes). [✓] checkpoints.topic.replication.factor Replication factor for the internal checkpoints topic. Default is 3 . [✓] emit.checkpoints.enabled Enables synchronization of consumer offsets to the target cluster. Default is true . [✓] emit.checkpoints.interval.seconds Frequency of consumer offset synchronization. Default is 60 (1 minute). [✓] group.filter.class Group filter to select the consumer groups to replicate. Default is org.apache.kafka.connect.mirror.DefaultGroupFilter . [✓] refresh.groups.enabled Enables check for new consumer groups. 
Default is true . [✓] refresh.groups.interval.seconds Frequency of consumer group refresh. Default is 600 (10 minutes). [✓] sync.group.offsets.enabled Enables synchronization of consumer group offsets to the target cluster __consumer_offsets topic. Default is false . [✓] sync.group.offsets.interval.seconds Frequency of consumer group offset synchronization. Default is 60 (1 minute). [✓] emit.heartbeats.enabled Enables connectivity checks on the target cluster. Default is true . [✓] emit.heartbeats.interval.seconds Frequency of connectivity checks. Default is 1 (1 second). [✓] heartbeats.topic.replication.factor Replication factor for the internal heartbeats topic. Default is 3 . [✓] 11.2.1. Changing the location of the consumer group offsets topic MirrorMaker 2 tracks offsets for consumer groups using internal topics. offset-syncs topic The offset-syncs topic maps the source and target offsets for replicated topic partitions from record metadata. checkpoints topic The checkpoints topic maps the last committed offset in the source and target cluster for replicated topic partitions in each consumer group. As they are used internally by MirrorMaker 2, you do not interact directly with these topics. MirrorCheckpointConnector emits checkpoints for offset tracking. Offsets for the checkpoints topic are tracked at predetermined intervals through configuration. Both topics enable replication to be fully restored from the correct offset position on failover. The location of the offset-syncs topic is the source cluster by default. You can use the offset-syncs.topic.location connector configuration to change this to the target cluster. You need read/write access to the cluster that contains the topic. Using the target cluster as the location of the offset-syncs topic allows you to use MirrorMaker 2 even if you have only read access to the source cluster. 11.2.2. Synchronizing consumer group offsets The __consumer_offsets topic stores information on committed offsets for each consumer group. Offset synchronization periodically transfers the consumer offsets for the consumer groups of a source cluster into the consumer offsets topic of a target cluster. Offset synchronization is particularly useful in an active/passive configuration. If the active cluster goes down, consumer applications can switch to the passive (standby) cluster and pick up from the last transferred offset position. To use topic offset synchronization, enable the synchronization by adding sync.group.offsets.enabled to the checkpoint connector configuration, and setting the property to true . Synchronization is disabled by default. When using the IdentityReplicationPolicy in the source connector, it also has to be configured in the checkpoint connector configuration. This ensures that the mirrored consumer offsets will be applied for the correct topics. Consumer offsets are only synchronized for consumer groups that are not active in the target cluster. If the consumer groups are in the target cluster, the synchronization cannot be performed and an UNKNOWN_MEMBER_ID error is returned. If enabled, the synchronization of offsets from the source cluster is made periodically. You can change the frequency by adding sync.group.offsets.interval.seconds and emit.checkpoints.interval.seconds to the checkpoint connector configuration. The properties specify the frequency in seconds that the consumer group offsets are synchronized, and the frequency of checkpoints emitted for offset tracking. The default for both properties is 60 seconds. 
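For example, in a dedicated MirrorMaker 2 deployment these settings can be added to the config/connect-mirror-maker.properties file. The following is a minimal sketch that mirrors the optional step shown later in the dedicated mode procedure; adjust the intervals for your environment:
sync.group.offsets.enabled=true
sync.group.offsets.interval.seconds=60
emit.checkpoints.interval.seconds=60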
You can also change the frequency of checks for new consumer groups using the refresh.groups.interval.seconds property, which is performed every 10 minutes by default. Because the synchronization is time-based, any switchover by consumers to a passive cluster will likely result in some duplication of messages. Note If you have an application written in Java, you can use the RemoteClusterUtils.java utility to synchronize offsets through the application. The utility fetches remote offsets for a consumer group from the checkpoints topic. 11.2.3. Deciding when to use the heartbeat connector The heartbeat connector emits heartbeats to check connectivity between source and target Kafka clusters. An internal heartbeat topic is replicated from the source cluster, which means that the heartbeat connector must be connected to the source cluster. The heartbeat topic is located on the target cluster, which allows it to do the following: Identify all source clusters it is mirroring data from Verify the liveness and latency of the mirroring process This helps to make sure that the process is not stuck or has stopped for any reason. While the heartbeat connector can be a valuable tool for monitoring the mirroring processes between Kafka clusters, it's not always necessary to use it. For example, if your deployment has low network latency or a small number of topics, you might prefer to monitor the mirroring process using log messages or other monitoring tools. If you decide not to use the heartbeat connector, simply omit it from your MirrorMaker 2 configuration. 11.2.4. Aligning the configuration of MirrorMaker 2 connectors To ensure that MirrorMaker 2 connectors work properly, make sure to align certain configuration settings across connectors. Specifically, ensure that the following properties have the same value across all applicable connectors: replication.policy.class replication.policy.separator offset-syncs.topic.location topic.filter.class For example, the value for replication.policy.class must be the same for the source, checkpoint, and heartbeat connectors. Mismatched or missing settings cause issues with data replication or offset syncing, so it's essential to keep all relevant connectors configured with the same settings. 11.2.5. Listing the offsets of MirrorMaker 2 connectors To list the offset positions of the internal MirrorMaker 2 connectors, use the GET /connectors/{connector}/offsets request with the Kafka Connect API. For more information on the Kafka Connect REST API endpoints, see Section 10.3.2, "Configuring connectors" . 11.3. Connector producer and consumer configuration MirrorMaker 2 connectors use internal producers and consumers. If needed, you can configure these producers and consumers to override the default settings. Important Producer and consumer configuration options depend on the MirrorMaker 2 implementation, and may be subject to change. Producer and consumer configuration applies to all connectors. You specify the configuration in the config/connect-mirror-maker.properties file. Use the properties file to override any default configuration for the producers and consumers in the following format: <source_cluster_name> .consumer. <property> <source_cluster_name> .producer. <property> <target_cluster_name> .consumer. <property> <target_cluster_name> .producer. <property> The following example shows how you configure the producers and consumers. Though the properties are set for all connectors, some configuration properties are only relevant to certain connectors. 
Example configuration for connector producers and consumers clusters=cluster-1,cluster-2 # ... cluster-1.consumer.fetch.max.bytes=52428800 cluster-2.producer.batch.size=327680 cluster-2.producer.linger.ms=100 cluster-2.producer.request.timeout.ms=30000 11.4. Specifying a maximum number of tasks Connectors create the tasks that are responsible for moving data in and out of Kafka. Each connector comprises one or more tasks that are distributed across a group of worker pods that run the tasks. Increasing the number of tasks can help with performance issues when replicating a large number of partitions or synchronizing the offsets of a large number of consumer groups. Tasks run in parallel. Workers are assigned one or more tasks. A single task is handled by one worker pod, so you don't need more worker pods than tasks. If there are more tasks than workers, workers handle multiple tasks. You can specify the maximum number of connector tasks in your MirrorMaker configuration using the tasks.max property. Without specifying a maximum number of tasks, the default setting is a single task. The heartbeat connector always uses a single task. The number of tasks that are started for the source and checkpoint connectors is the lower value between the maximum number of possible tasks and the value for tasks.max . For the source connector, the maximum number of tasks possible is one for each partition being replicated from the source cluster. For the checkpoint connector, the maximum number of tasks possible is one for each consumer group being replicated from the source cluster. When setting a maximum number of tasks, consider the number of partitions and the hardware resources that support the process. If the infrastructure supports the processing overhead, increasing the number of tasks can improve throughput and latency. For example, adding more tasks reduces the time taken to poll the source cluster when there is a high number of partitions or consumer groups. tasks.max configuration for MirrorMaker connectors clusters=cluster-1,cluster-2 # ... tasks.max = 10 By default, MirrorMaker 2 checks for new consumer groups every 10 minutes. You can adjust the refresh.groups.interval.seconds configuration to change the frequency. Take care when adjusting lower. More frequent checks can have a negative impact on performance. 11.5. ACL rules synchronization If AclAuthorizer is being used, ACL rules that manage access to brokers also apply to remote topics. Users that can read a source topic can read its remote equivalent. Note OAuth 2.0 authorization does not support access to remote topics in this way. 11.6. Running MirrorMaker 2 in dedicated mode Use MirrorMaker 2 to synchronize data between Kafka clusters through configuration. This procedure shows how to configure and run a dedicated single-node MirrorMaker 2 cluster. Dedicated clusters use Kafka Connect worker nodes to mirror data between Kafka clusters. Note It is also possible to run MirrorMaker 2 in distributed mode. MirrorMaker 2 operates as connectors in both dedicated and distributed modes. When running a dedicated MirrorMaker cluster, connectors are configured in the Kafka Connect cluster. As a consequence, this allows direct access to the Kafka Connect cluster, the running of additional connectors, and use of the REST API. For more information, refer to the Apache Kafka documentation . The version of MirrorMaker continues to be supported, by running MirrorMaker 2 in legacy mode . 
The configuration must specify: Each Kafka cluster Connection information for each cluster, including TLS authentication The replication flow and direction Cluster to cluster Topic to topic Replication rules Committed offset tracking intervals This procedure describes how to implement MirrorMaker 2 by creating the configuration in a properties file, then passing the properties when using the MirrorMaker script file to set up the connections. You can specify the topics and consumer groups you wish to replicate from a source cluster. You specify the names of the source and target clusters, then specify the topics and consumer groups to replicate. In the following example, topics and consumer groups are specified for replication from cluster 1 to 2. Example configuration to replicate specific topics and consumer groups clusters=cluster-1,cluster-2 cluster-1->cluster-2.topics = topic-1, topic-2 cluster-1->cluster-2.groups = group-1, group-2 You can provide a list of names or use a regular expression. By default, all topics and consumer groups are replicated if you do not set these properties. You can also replicate all topics and consumer groups by using .* as a regular expression. However, try to specify only the topics and consumer groups you need to avoid causing any unnecessary extra load on the cluster. Before you begin A sample configuration properties file is provided in ./config/connect-mirror-maker.properties . Prerequisites You need Streams for Apache Kafka installed on the hosts of each Kafka cluster node you are replicating. Procedure Open the sample properties file in a text editor, or create a new one, and edit the file to include connection information and the replication flows for each Kafka cluster. The following example shows a configuration to connect two clusters, cluster-1 and cluster-2 , bidirectionally. Cluster names are configurable through the clusters property. Example MirrorMaker 2 configuration clusters=cluster-1,cluster-2 1 cluster-1.bootstrap.servers=<cluster_name>-kafka-bootstrap-<project_name_one>:443 2 cluster-1.security.protocol=SSL 3 cluster-1.ssl.truststore.password=<truststore_name> cluster-1.ssl.truststore.location=<path_to_truststore>/truststore.cluster-1.jks_ cluster-1.ssl.keystore.password=<keystore_name> cluster-1.ssl.keystore.location=<path_to_keystore>/user.cluster-1.p12 cluster-2.bootstrap.servers=<cluster_name>-kafka-bootstrap-<project_name_two>:443 4 cluster-2.security.protocol=SSL 5 cluster-2.ssl.truststore.password=<truststore_name> cluster-2.ssl.truststore.location=<path_to_truststore>/truststore.cluster-2.jks_ cluster-2.ssl.keystore.password=<keystore_name> cluster-2.ssl.keystore.location=<path_to_keystore>/user.cluster-2.p12 cluster-1->cluster-2.enabled=true 6 cluster-2->cluster-1.enabled=true 7 cluster-1->cluster-2.topics=.* 8 cluster-2->cluster-1.topics=topic-1, topic-2 9 cluster-1->cluster-2.groups=.* 10 cluster-2->cluster-1.groups=group-1, group-2 11 replication.policy.separator=- 12 sync.topic.acls.enabled=false 13 refresh.topics.interval.seconds=60 14 refresh.groups.interval.seconds=60 15 1 Each Kafka cluster is identified with its alias. 2 Connection information for cluster-1 , using the bootstrap address and port 443 . Both clusters use port 443 to connect to Kafka using OpenShift Routes . 3 The ssl. properties define TLS configuration for cluster-1 . 4 Connection information for cluster-2 . 5 The ssl. properties define the TLS configuration for cluster-2 . 6 Replication flow enabled from cluster-1 to cluster-2 . 
7 Replication flow enabled from cluster-2 to cluster-1 . 8 Replication of all topics from cluster-1 to cluster-2 . The source connector replicates the specified topics. The checkpoint connector tracks offsets for the specified topics. 9 Replication of specific topics from cluster-2 to cluster-1 . 10 Replication of all consumer groups from cluster-1 to cluster-2 . The checkpoint connector replicates the specified consumer groups. 11 Replication of specific consumer groups from cluster-2 to cluster-1 . 12 Defines the separator used for the renaming of remote topics. 13 When enabled, ACLs are applied to synchronized topics. The default is false . 14 The period between checks for new topics to synchronize. 15 The period between checks for new consumer groups to synchronize. OPTION: If required, add a policy that overrides the automatic renaming of remote topics. Instead of prepending the name with the name of the source cluster, the topic retains its original name. This optional setting is used for active/passive backups and data migration. replication.policy.class=org.apache.kafka.connect.mirror.IdentityReplicationPolicy OPTION: If you want to synchronize consumer group offsets, add configuration to enable and manage the synchronization: refresh.groups.interval.seconds=60 sync.group.offsets.enabled=true 1 sync.group.offsets.interval.seconds=60 2 emit.checkpoints.interval.seconds=60 3 1 Optional setting to synchronize consumer group offsets, which is useful for recovery in an active/passive configuration. Synchronization is not enabled by default. 2 If the synchronization of consumer group offsets is enabled, you can adjust the frequency of the synchronization. 3 Adjusts the frequency of checks for offset tracking. If you change the frequency of offset synchronization, you might also need to adjust the frequency of these checks. Start ZooKeeper and Kafka in the target clusters: ./bin/zookeeper-server-start.sh -daemon \ ./config/zookeeper.properties ./bin/kafka-server-start.sh -daemon \ ./config/server.properties Start MirrorMaker with the cluster connection configuration and replication policies you defined in your properties file: ./bin/connect-mirror-maker.sh \ ./config/connect-mirror-maker.properties MirrorMaker sets up connections between the clusters. For each target cluster, verify that the topics are being replicated: ./bin/kafka-topics.sh --bootstrap-server <broker_address> --list 11.7. (Deprecated) Using MirrorMaker 2 in legacy mode This procedure describes how to configure MirrorMaker 2 to use it in legacy mode. Legacy mode supports the version of MirrorMaker. The MirrorMaker script ./bin/kafka-mirror-maker.sh can run MirrorMaker 2 in legacy mode. Important Kafka MirrorMaker 1 (referred to as just MirrorMaker in the documentation) has been deprecated in Apache Kafka 3.0.0 and will be removed in Apache Kafka 4.0.0. As a result, Kafka MirrorMaker 1 has been deprecated in Streams for Apache Kafka as well. Kafka MirrorMaker 1 will be removed from Streams for Apache Kafka when we adopt Apache Kafka 4.0.0. As a replacement, use MirrorMaker 2 with the IdentityReplicationPolicy . Prerequisites You need the properties files you currently use with the legacy version of MirrorMaker. ./config/consumer.properties ./config/producer.properties Procedure Edit the MirrorMaker consumer.properties and producer.properties files to turn off MirrorMaker 2 features. 
For example: replication.policy.class=org.apache.kafka.mirror.LegacyReplicationPolicy 1 refresh.topics.enabled=false 2 refresh.groups.enabled=false emit.checkpoints.enabled=false emit.heartbeats.enabled=false sync.topic.configs.enabled=false sync.topic.acls.enabled=false 1 Emulate the previous version of MirrorMaker. 2 MirrorMaker 2 features disabled, including the internal checkpoint and heartbeat topics. Save the changes and restart MirrorMaker with the properties files you used with the previous version of MirrorMaker: ./bin/kafka-mirror-maker.sh \ --consumer.config ./config/consumer.properties \ --producer.config ./config/producer.properties \ --num.streams=2 The consumer properties provide the configuration for the source cluster and the producer properties provide the target cluster configuration. MirrorMaker sets up connections between the clusters. Start ZooKeeper and Kafka in the target cluster: ./bin/zookeeper-server-start.sh -daemon ./config/zookeeper.properties ./bin/kafka-server-start.sh -daemon ./config/server.properties For the target cluster, verify that the topics are being replicated: ./bin/kafka-topics.sh --bootstrap-server <broker_address> --list
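Whether you run MirrorMaker 2 in dedicated mode or in legacy mode, you can also confirm that records are arriving on the target cluster by consuming a few messages from a replicated topic. This is a minimal sketch; the topic name and broker address are placeholders:
./bin/kafka-console-consumer.sh --bootstrap-server <broker_address> --topic topic-1 --from-beginning --max-messages 5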
[ "clusters=cluster-1,cluster-2 cluster-1.consumer.fetch.max.bytes=52428800 cluster-2.producer.batch.size=327680 cluster-2.producer.linger.ms=100 cluster-2.producer.request.timeout.ms=30000", "clusters=cluster-1,cluster-2 tasks.max = 10", "clusters=cluster-1,cluster-2 cluster-1->cluster-2.topics = topic-1, topic-2 cluster-1->cluster-2.groups = group-1, group-2", "clusters=cluster-1,cluster-2 1 cluster-1.bootstrap.servers=<cluster_name>-kafka-bootstrap-<project_name_one>:443 2 cluster-1.security.protocol=SSL 3 cluster-1.ssl.truststore.password=<truststore_name> cluster-1.ssl.truststore.location=<path_to_truststore>/truststore.cluster-1.jks_ cluster-1.ssl.keystore.password=<keystore_name> cluster-1.ssl.keystore.location=<path_to_keystore>/user.cluster-1.p12 cluster-2.bootstrap.servers=<cluster_name>-kafka-bootstrap-<project_name_two>:443 4 cluster-2.security.protocol=SSL 5 cluster-2.ssl.truststore.password=<truststore_name> cluster-2.ssl.truststore.location=<path_to_truststore>/truststore.cluster-2.jks_ cluster-2.ssl.keystore.password=<keystore_name> cluster-2.ssl.keystore.location=<path_to_keystore>/user.cluster-2.p12 cluster-1->cluster-2.enabled=true 6 cluster-2->cluster-1.enabled=true 7 cluster-1->cluster-2.topics=.* 8 cluster-2->cluster-1.topics=topic-1, topic-2 9 cluster-1->cluster-2.groups=.* 10 cluster-2->cluster-1.groups=group-1, group-2 11 replication.policy.separator=- 12 sync.topic.acls.enabled=false 13 refresh.topics.interval.seconds=60 14 refresh.groups.interval.seconds=60 15", "replication.policy.class=org.apache.kafka.connect.mirror.IdentityReplicationPolicy", "refresh.groups.interval.seconds=60 sync.group.offsets.enabled=true 1 sync.group.offsets.interval.seconds=60 2 emit.checkpoints.interval.seconds=60 3", "./bin/zookeeper-server-start.sh -daemon ./config/zookeeper.properties", "./bin/kafka-server-start.sh -daemon ./config/server.properties", "./bin/connect-mirror-maker.sh ./config/connect-mirror-maker.properties", "./bin/kafka-topics.sh --bootstrap-server <broker_address> --list", "replication.policy.class=org.apache.kafka.mirror.LegacyReplicationPolicy 1 refresh.topics.enabled=false 2 refresh.groups.enabled=false emit.checkpoints.enabled=false emit.heartbeats.enabled=false sync.topic.configs.enabled=false sync.topic.acls.enabled=false", "./bin/kafka-mirror-maker.sh --consumer.config ./config/consumer.properties --producer.config ./config/producer.properties --num.streams=2", "./bin/zookeeper-server-start.sh -daemon ./config/zookeeper.properties", "./bin/kafka-server-start.sh -daemon ./config/server.properties", "./bin/kafka-topics.sh --bootstrap-server <broker_address> --list" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/using_streams_for_apache_kafka_on_rhel_with_zookeeper/assembly-mirrormaker-str
Chapter 4. Creating images
Chapter 4. Creating images Learn how to create your own container images, based on pre-built images that are ready to help you. The process includes learning best practices for writing images, defining metadata for images, testing images, and using a custom builder workflow to create images to use with OpenShift Container Platform. After you create an image, you can push it to the OpenShift image registry. 4.1. Learning container best practices When creating container images to run on OpenShift Container Platform there are a number of best practices to consider as an image author to ensure a good experience for consumers of those images. Because images are intended to be immutable and used as-is, the following guidelines help ensure that your images are highly consumable and easy to use on OpenShift Container Platform. 4.1.1. General container image guidelines The following guidelines apply when creating a container image in general, and are independent of whether the images are used on OpenShift Container Platform. Reuse images Wherever possible, base your image on an appropriate upstream image using the FROM statement. This ensures your image can easily pick up security fixes from an upstream image when it is updated, rather than you having to update your dependencies directly. In addition, use tags in the FROM instruction, for example, rhel:rhel7 , to make it clear to users exactly which version of an image your image is based on. Using a tag other than latest ensures your image is not subjected to breaking changes that might go into the latest version of an upstream image. Maintain compatibility within tags When tagging your own images, try to maintain backwards compatibility within a tag. For example, if you provide an image named image and it currently includes version 1.0 , you might provide a tag of image:v1 . When you update the image, as long as it continues to be compatible with the original image, you can continue to tag the new image image:v1 , and downstream consumers of this tag are able to get updates without being broken. If you later release an incompatible update, then switch to a new tag, for example image:v2 . This allows downstream consumers to move up to the new version at will, but not be inadvertently broken by the new incompatible image. Any downstream consumer using image:latest takes on the risk of any incompatible changes being introduced. Avoid multiple processes Do not start multiple services, such as a database and SSHD , inside one container. This is not necessary because containers are lightweight and can be easily linked together for orchestrating multiple processes. OpenShift Container Platform allows you to easily colocate and co-manage related images by grouping them into a single pod. This colocation ensures the containers share a network namespace and storage for communication. Updates are also less disruptive as each image can be updated less frequently and independently. Signal handling flows are also clearer with a single process as you do not have to manage routing signals to spawned processes. Use exec in wrapper scripts Many images use wrapper scripts to do some setup before starting a process for the software being run. If your image uses such a script, that script uses exec so that the script's process is replaced by your software. If you do not use exec , then signals sent by your container runtime go to your wrapper script instead of your software's process. This is not what you want. 
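For example, a minimal wrapper script that performs some setup and then uses exec to hand control to the main process might look like the following sketch. The setup script and server binary (prepare-config.sh and /usr/bin/myserver) are hypothetical placeholders:
#!/bin/bash
# one-time setup before the main process starts
/opt/app/prepare-config.sh > /etc/myapp/config.yaml
# replace the shell with the server process so it runs as PID 1 and receives signals directly
exec /usr/bin/myserver --config /etc/myapp/config.yaml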
For example, suppose you have a wrapper script that starts a process for some server. You start your container, for example, using podman run -i , which runs the wrapper script, which in turn starts your process. Now suppose you want to stop your container with CTRL+C . If your wrapper script used exec to start the server process, podman sends SIGINT to the server process, and everything works as you expect. If you did not use exec in your wrapper script, podman sends SIGINT to the process for the wrapper script and your process keeps running like nothing happened. Also note that your process runs as PID 1 when running in a container. This means that if your main process terminates, the entire container is stopped, canceling any child processes you launched from your PID 1 process. Clean temporary files Remove all temporary files you create during the build process. This also includes any files added with the ADD command. For example, run the yum clean command after performing yum install operations. You can prevent the yum cache from ending up in an image layer by creating your RUN statement as follows: RUN yum -y install mypackage && yum -y install myotherpackage && yum clean all -y Note that if you instead write: RUN yum -y install mypackage RUN yum -y install myotherpackage && yum clean all -y Then the first yum invocation leaves extra files in that layer, and these files cannot be removed when the yum clean operation is run later. The extra files are not visible in the final image, but they are present in the underlying layers. The current container build process does not allow a command run in a later layer to shrink the space used by the image when something was removed in an earlier layer. However, this may change in the future. This means that if you perform an rm command in a later layer, although the files are hidden it does not reduce the overall size of the image to be downloaded. Therefore, as with the yum clean example, it is best to remove files in the same command that created them, where possible, so they do not end up written to a layer. In addition, performing multiple commands in a single RUN statement reduces the number of layers in your image, which improves download and extraction time. Place instructions in the proper order The container builder reads the Dockerfile and runs the instructions from top to bottom. Every instruction that is successfully executed creates a layer which can be reused the next time this or another image is built. It is very important to place instructions that rarely change at the top of your Dockerfile . Doing so ensures the builds of the same image are very fast because the cache is not invalidated by upper layer changes. For example, if you are working on a Dockerfile that contains an ADD command to install a file you are iterating on, and a RUN command to yum install a package, it is best to put the ADD command last: FROM foo RUN yum -y install mypackage && yum clean all -y ADD myfile /test/myfile This way each time you edit myfile and rerun podman build or docker build , the system reuses the cached layer for the yum command and only generates the new layer for the ADD operation. If instead you wrote the Dockerfile as: FROM foo ADD myfile /test/myfile RUN yum -y install mypackage && yum clean all -y Then each time you changed myfile and reran podman build or docker build , the ADD operation would invalidate the RUN layer cache, so the yum operation must be rerun as well.
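As a minimal sketch of the wrapper-script guidance above (the binary path, config file, and setup step are illustrative assumptions, not part of any particular image), a hypothetical entrypoint script that hands control to the server with exec might look like this:

#!/bin/bash
# entrypoint.sh - hypothetical wrapper script for a server image
set -e
# one-time setup performed before the server starts (illustrative)
mkdir -p /opt/app/data
# replace the shell with the server process so it runs as PID 1
# and receives SIGINT/SIGTERM from podman or docker directly
exec /opt/app/server --config /opt/app/config.yaml "$@"

Because exec replaces the shell, signals sent by the container runtime reach the server process itself, which is the behavior described above.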
Mark important ports The EXPOSE instruction makes a port in the container available to the host system and other containers. While it is possible to specify that a port should be exposed with a podman run invocation, using the EXPOSE instruction in a Dockerfile makes it easier for both humans and software to use your image by explicitly declaring the ports your software needs to run: Exposed ports show up under podman ps associated with containers created from your image. Exposed ports are present in the metadata for your image returned by podman inspect . Exposed ports are linked when you link one container to another. Set environment variables It is good practice to set environment variables with the ENV instruction. One example is to set the version of your project. This makes it easy for people to find the version without looking at the Dockerfile . Another example is advertising a path on the system that could be used by another process, such as JAVA_HOME . Avoid default passwords Avoid setting default passwords. Many people extend the image and forget to remove or change the default password. This can lead to security issues if a user in production is assigned a well-known password. Passwords are configurable using an environment variable instead. If you do choose to set a default password, ensure that an appropriate warning message is displayed when the container is started. The message should inform the user of the value of the default password and explain how to change it, such as what environment variable to set. Avoid sshd It is best to avoid running sshd in your image. You can use the podman exec or docker exec command to access containers that are running on the local host. Alternatively, you can use the oc exec command or the oc rsh command to access containers that are running on the OpenShift Container Platform cluster. Installing and running sshd in your image opens up additional vectors for attack and requirements for security patching. Use volumes for persistent data Images use a volume for persistent data. This way OpenShift Container Platform mounts the network storage to the node running the container, and if the container moves to a new node the storage is reattached to that node. By using the volume for all persistent storage needs, the content is preserved even if the container is restarted or moved. If your image writes data to arbitrary locations within the container, that content could not be preserved. All data that needs to be preserved even after the container is destroyed must be written to a volume. Container engines support a readonly flag for containers, which can be used to strictly enforce good practices about not writing data to ephemeral storage in a container. Designing your image around that capability now makes it easier to take advantage of it later. Explicitly defining volumes in your Dockerfile makes it easy for consumers of the image to understand what volumes they must define when running your image. See the Kubernetes documentation for more information on how volumes are used in OpenShift Container Platform. Note Even with persistent volumes, each instance of your image has its own volume, and the filesystem is not shared between instances. This means the volume cannot be used to share state in a cluster. 4.1.2. OpenShift Container Platform-specific guidelines The following are guidelines that apply when creating container images specifically for use on OpenShift Container Platform. 4.1.2.1. 
Enable images for source-to-image (S2I) For images that are intended to run application code provided by a third party, such as a Ruby image designed to run Ruby code provided by a developer, you can enable your image to work with the Source-to-Image (S2I) build tool. S2I is a framework that makes it easy to write images that take application source code as an input and produce a new image that runs the assembled application as output. 4.1.2.2. Support arbitrary user ids By default, OpenShift Container Platform runs containers using an arbitrarily assigned user ID. This provides additional security against processes escaping the container due to a container engine vulnerability and thereby achieving escalated permissions on the host node. For an image to support running as an arbitrary user, directories and files that are written to by processes in the image must be owned by the root group and be read/writable by that group. Files to be executed must also have group execute permissions. Adding the following to your Dockerfile sets the directory and file permissions to allow users in the root group to access them in the built image: RUN chgrp -R 0 /some/directory && \ chmod -R g=u /some/directory Because the container user is always a member of the root group, the container user can read and write these files. Warning Care must be taken when altering the directories and file permissions of the sensitive areas of a container. If applied to sensitive areas, such as the /etc/passwd file, such changes can allow the modification of these files by unintended users, potentially exposing the container or host. CRI-O supports the insertion of arbitrary user IDs into a container's /etc/passwd file. As such, changing permissions is never required. Additionally, the /etc/passwd file should not exist in any container image. If it does, the CRI-O container runtime will fail to inject a random UID into the /etc/passwd file. In such cases, the container might face challenges in resolving the active UID. Failing to meet this requirement could impact the functionality of certain containerized applications. In addition, the processes running in the container must not listen on privileged ports, ports below 1024, since they are not running as a privileged user. Important If your S2I image does not include a USER declaration with a numeric user, your builds fail by default. To allow images that use either named users or the root 0 user to build in OpenShift Container Platform, you can add the project's builder service account, system:serviceaccount:<your-project>:builder , to the anyuid security context constraint (SCC). Alternatively, you can allow all images to run as any user. 4.1.2.3. Use services for inter-image communication For cases where your image needs to communicate with a service provided by another image, such as a web front end image that needs to access a database image to store and retrieve data, your image consumes an OpenShift Container Platform service. Services provide a static endpoint for access which does not change as containers are stopped, started, or moved. In addition, services provide load balancing for requests. 4.1.2.4. Provide common libraries For images that are intended to run application code provided by a third party, ensure that your image contains commonly used libraries for your platform. In particular, provide database drivers for common databases used with your platform. 
For example, provide JDBC drivers for MySQL and PostgreSQL if you are creating a Java framework image. Doing so prevents the need for common dependencies to be downloaded during application assembly time, speeding up application image builds. It also simplifies the work required by application developers to ensure all of their dependencies are met. 4.1.2.5. Use environment variables for configuration Users of your image are able to configure it without having to create a downstream image based on your image. This means that the runtime configuration is handled using environment variables. For a simple configuration, the running process can consume the environment variables directly. For a more complicated configuration or for runtimes which do not support this, configure the runtime by defining a template configuration file that is processed during startup. During this processing, values supplied using environment variables can be substituted into the configuration file or used to make decisions about what options to set in the configuration file. It is also possible and recommended to pass secrets such as certificates and keys into the container using environment variables. This ensures that the secret values do not end up committed in an image and leaked into a container image registry. Providing environment variables allows consumers of your image to customize behavior, such as database settings, passwords, and performance tuning, without having to introduce a new layer on top of your image. Instead, they can simply define environment variable values when defining a pod and change those settings without rebuilding the image. For extremely complex scenarios, configuration can also be supplied using volumes that would be mounted into the container at runtime. However, if you elect to do it this way you must ensure that your image provides clear error messages on startup when the necessary volume or configuration is not present. This topic is related to the Using Services for Inter-image Communication topic in that configuration like datasources are defined in terms of environment variables that provide the service endpoint information. This allows an application to dynamically consume a datasource service that is defined in the OpenShift Container Platform environment without modifying the application image. In addition, tuning is done by inspecting the cgroups settings for the container. This allows the image to tune itself to the available memory, CPU, and other resources. For example, Java-based images tune their heap based on the cgroup maximum memory parameter to ensure they do not exceed the limits and get an out-of-memory error. 4.1.2.6. Set image metadata Defining image metadata helps OpenShift Container Platform better consume your container images, allowing OpenShift Container Platform to create a better experience for developers using your image. For example, you can add metadata to provide helpful descriptions of your image, or offer suggestions on other images that are needed. 4.1.2.7. Clustering You must fully understand what it means to run multiple instances of your image. In the simplest case, the load balancing function of a service handles routing traffic to all instances of your image. However, many frameworks must share information to perform leader election or failover state; for example, in session replication. Consider how your instances accomplish this communication when running in OpenShift Container Platform. 
Although pods can communicate directly with each other, their IP addresses change anytime the pod starts, stops, or is moved. Therefore, it is important for your clustering scheme to be dynamic. 4.1.2.8. Logging It is best to send all logging to standard out. OpenShift Container Platform collects standard out from containers and sends it to the centralized logging service where it can be viewed. If you must separate log content, prefix the output with an appropriate keyword, which makes it possible to filter the messages. If your image logs to a file, users must use manual operations to enter the running container and retrieve or view the log file. 4.1.2.9. Liveness and readiness probes Document example liveness and readiness probes that can be used with your image. These probes allow users to deploy your image with confidence that traffic is not routed to the container until it is prepared to handle it, and that the container is restarted if the process gets into an unhealthy state. 4.1.2.10. Templates Consider providing an example template with your image. A template gives users an easy way to quickly get your image deployed with a working configuration. Your template must include the liveness and readiness probes you documented with the image, for completeness. 4.2. Including metadata in images Defining image metadata helps OpenShift Container Platform better consume your container images, allowing OpenShift Container Platform to create a better experience for developers using your image. For example, you can add metadata to provide helpful descriptions of your image, or offer suggestions on other images that may also be needed. This topic only defines the metadata needed by the current set of use cases. Additional metadata or use cases may be added in the future. 4.2.1. Defining image metadata You can use the LABEL instruction in a Dockerfile to define image metadata. Labels are similar to environment variables in that they are key value pairs attached to an image or a container. Labels are different from environment variables in that they are not visible to the running application and they can also be used for fast look-up of images and containers. See the Docker documentation for more information on the LABEL instruction. The label names are typically namespaced. The namespace is set accordingly to reflect the project that is going to pick up the labels and use them. For OpenShift Container Platform the namespace is set to io.openshift and for Kubernetes the namespace is io.k8s . See the Docker custom metadata documentation for details about the format. Table 4.1. Supported Metadata Variable Description io.openshift.tags This label contains a list of tags represented as a list of comma-separated string values. The tags are the way to categorize the container images into broad areas of functionality. Tags help UI and generation tools to suggest relevant container images during the application creation process. io.openshift.wants Specifies a list of tags that the generation tools and the UI use to provide relevant suggestions if you do not have the container images with specified tags already. For example, if the container image wants mysql and redis and you do not have the container image with the redis tag, then the UI can suggest adding this image to your deployment. io.k8s.description This label can be used to give the container image consumers more detailed information about the service or functionality this image provides. 
The UI can then use this description together with the container image name to provide more human friendly information to end users. io.openshift.non-scalable An image can use this variable to suggest that it does not support scaling. The UI then communicates this to consumers of that image. Being not-scalable means that the value of replicas should initially not be set higher than 1 . io.openshift.min-memory and io.openshift.min-cpu This label suggests how much resources the container image needs to work properly. The UI can warn the user that deploying this container image may exceed their user quota. The values must be compatible with Kubernetes quantity. 4.3. Creating images from source code with source-to-image Source-to-image (S2I) is a framework that makes it easy to write images that take application source code as an input and produce a new image that runs the assembled application as output. The main advantage of using S2I for building reproducible container images is the ease of use for developers. As a builder image author, you must understand two basic concepts in order for your images to provide the best S2I performance, the build process and S2I scripts. 4.3.1. Understanding the source-to-image build process The build process consists of the following three fundamental elements, which are combined into a final container image: Sources Source-to-image (S2I) scripts Builder image S2I generates a Dockerfile with the builder image as the first FROM instruction. The Dockerfile generated by S2I is then passed to Buildah. 4.3.2. How to write source-to-image scripts You can write source-to-image (S2I) scripts in any programming language, as long as the scripts are executable inside the builder image. S2I supports multiple options providing assemble / run / save-artifacts scripts. All of these locations are checked on each build in the following order: A script specified in the build configuration. A script found in the application source .s2i/bin directory. A script found at the default image URL with the io.openshift.s2i.scripts-url label. Both the io.openshift.s2i.scripts-url label specified in the image and the script specified in a build configuration can take one of the following forms: image:///path_to_scripts_dir : absolute path inside the image to a directory where the S2I scripts are located. file:///path_to_scripts_dir : relative or absolute path to a directory on the host where the S2I scripts are located. http(s)://path_to_scripts_dir : URL to a directory where the S2I scripts are located. Table 4.2. S2I scripts Script Description assemble The assemble script builds the application artifacts from a source and places them into appropriate directories inside the image. This script is required. The workflow for this script is: Optional: Restore build artifacts. If you want to support incremental builds, make sure to define save-artifacts as well. Place the application source in the desired location. Build the application artifacts. Install the artifacts into locations appropriate for them to run. run The run script executes your application. This script is required. save-artifacts The save-artifacts script gathers all dependencies that can speed up the build processes that follow. This script is optional. For example: For Ruby, gems installed by Bundler. For Java, .m2 contents. These dependencies are gathered into a tar file and streamed to the standard output. usage The usage script allows you to inform the user how to properly use your image. This script is optional. 
test/run The test/run script allows you to create a process to check if the image is working correctly. This script is optional. The proposed flow of that process is: Build the image. Run the image to verify the usage script. Run s2i build to verify the assemble script. Optional: Run s2i build again to verify the save-artifacts and assemble scripts save and restore artifacts functionality. Run the image to verify the test application is working. Note The suggested location to put the test application built by your test/run script is the test/test-app directory in your image repository. Example S2I scripts The following example S2I scripts are written in Bash. Each example assumes its tar contents are unpacked into the /tmp/s2i directory. assemble script: #!/bin/bash # restore build artifacts if [ "USD(ls /tmp/s2i/artifacts/ 2>/dev/null)" ]; then mv /tmp/s2i/artifacts/* USDHOME/. fi # move the application source mv /tmp/s2i/src USDHOME/src # build application artifacts pushd USD{HOME} make all # install the artifacts make install popd run script: #!/bin/bash # run the application /opt/application/run.sh save-artifacts script: #!/bin/bash pushd USD{HOME} if [ -d deps ]; then # all deps contents to tar stream tar cf - deps fi popd usage script: #!/bin/bash # inform the user how to use the image cat <<EOF This is a S2I sample builder image, to use it, install https://github.com/openshift/source-to-image EOF Additional resources S2I Image Creation Tutorial 4.4. About testing source-to-image images As an Source-to-Image (S2I) builder image author, you can test your S2I image locally and use the OpenShift Container Platform build system for automated testing and continuous integration. S2I requires the assemble and run scripts to be present to successfully run the S2I build. Providing the save-artifacts script reuses the build artifacts, and providing the usage script ensures that usage information is printed to console when someone runs the container image outside of the S2I. The goal of testing an S2I image is to make sure that all of these described commands work properly, even if the base container image has changed or the tooling used by the commands was updated. 4.4.1. Understanding testing requirements The standard location for the test script is test/run . This script is invoked by the OpenShift Container Platform S2I image builder and it could be a simple Bash script or a static Go binary. The test/run script performs the S2I build, so you must have the S2I binary available in your USDPATH . If required, follow the installation instructions in the S2I README . S2I combines the application source code and builder image, so to test it you need a sample application source to verify that the source successfully transforms into a runnable container image. The sample application should be simple, but it should exercise the crucial steps of assemble and run scripts. 4.4.2. Generating scripts and tools The S2I tooling comes with powerful generation tools to speed up the process of creating a new S2I image. The s2i create command produces all the necessary S2I scripts and testing tools along with the Makefile : USD s2i create <image_name> <destination_directory> The generated test/run script must be adjusted to be useful, but it provides a good starting point to begin developing. Note The test/run script produced by the s2i create command requires that the sample application sources are inside the test/test-app directory. 4.4.3. 
Testing locally The easiest way to run the S2I image tests locally is to use the generated Makefile . If you did not use the s2i create command, you can copy the following Makefile template and replace the IMAGE_NAME parameter with your image name. Sample Makefile 4.4.4. Basic testing workflow The test script assumes you have already built the image you want to test. If required, first build the S2I image. Run one of the following commands: If you use Podman, run the following command: USD podman build -t <builder_image_name> If you use Docker, run the following command: USD docker build -t <builder_image_name> The following steps describe the default workflow to test S2I image builders: Verify the usage script is working: If you use Podman, run the following command: USD podman run <builder_image_name> . If you use Docker, run the following command: USD docker run <builder_image_name> . Build the image: USD s2i build file:///path-to-sample-app _<BUILDER_IMAGE_NAME>_ _<OUTPUT_APPLICATION_IMAGE_NAME>_ Optional: if you support save-artifacts , run step 2 once again to verify that saving and restoring artifacts works properly. Run the container: If you use Podman, run the following command: USD podman run <output_application_image_name> If you use Docker, run the following command: USD docker run <output_application_image_name> Verify the container is running and the application is responding. Running these steps is generally enough to tell if the builder image is working as expected. 4.4.5. Using OpenShift Container Platform for building the image Once you have a Dockerfile and the other artifacts that make up your new S2I builder image, you can put them in a git repository and use OpenShift Container Platform to build and push the image. Define a Docker build that points to your repository. If your OpenShift Container Platform instance is hosted on a public IP address, the build can be triggered each time you push into your S2I builder image GitHub repository. You can also use the ImageChangeTrigger to trigger a rebuild of your applications that are based on the S2I builder image you updated.
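As a hedged illustration of the Docker build described above, one possible way to have OpenShift Container Platform build the S2I builder image from its Git repository is the oc new-build command with the Docker strategy; the repository URL and image name here are placeholders:

$ oc new-build https://github.com/example/my-s2i-builder.git --strategy=docker --name=my-s2i-builder
$ oc start-build my-s2i-builder --follow

The resulting builder image is pushed to the internal registry, where an image change trigger can rebuild applications that are based on it.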
[ "RUN yum -y install mypackage && yum -y install myotherpackage && yum clean all -y", "RUN yum -y install mypackage RUN yum -y install myotherpackage && yum clean all -y", "FROM foo RUN yum -y install mypackage && yum clean all -y ADD myfile /test/myfile", "FROM foo ADD myfile /test/myfile RUN yum -y install mypackage && yum clean all -y", "RUN chgrp -R 0 /some/directory && chmod -R g=u /some/directory", "LABEL io.openshift.tags mongodb,mongodb24,nosql", "LABEL io.openshift.wants mongodb,redis", "LABEL io.k8s.description The MySQL 5.5 Server with master-slave replication support", "LABEL io.openshift.non-scalable true", "LABEL io.openshift.min-memory 16Gi LABEL io.openshift.min-cpu 4", "#!/bin/bash restore build artifacts if [ \"USD(ls /tmp/s2i/artifacts/ 2>/dev/null)\" ]; then mv /tmp/s2i/artifacts/* USDHOME/. fi move the application source mv /tmp/s2i/src USDHOME/src build application artifacts pushd USD{HOME} make all install the artifacts make install popd", "#!/bin/bash run the application /opt/application/run.sh", "#!/bin/bash pushd USD{HOME} if [ -d deps ]; then # all deps contents to tar stream tar cf - deps fi popd", "#!/bin/bash inform the user how to use the image cat <<EOF This is a S2I sample builder image, to use it, install https://github.com/openshift/source-to-image EOF", "s2i create <image_name> <destination_directory>", "IMAGE_NAME = openshift/ruby-20-centos7 CONTAINER_ENGINE := USD(shell command -v podman 2> /dev/null | echo docker) build: USD{CONTAINER_ENGINE} build -t USD(IMAGE_NAME) . .PHONY: test test: USD{CONTAINER_ENGINE} build -t USD(IMAGE_NAME)-candidate . IMAGE_NAME=USD(IMAGE_NAME)-candidate test/run", "podman build -t <builder_image_name>", "docker build -t <builder_image_name>", "podman run <builder_image_name> .", "docker run <builder_image_name> .", "s2i build file:///path-to-sample-app _<BUILDER_IMAGE_NAME>_ _<OUTPUT_APPLICATION_IMAGE_NAME>_", "podman run <output_application_image_name>", "docker run <output_application_image_name>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/images/creating-images
Chapter 35. Storage
Chapter 35. Storage mpathpersist no longer fails when opening too many files Previously, the mpathpersist utility sometimes overstepped the limit on open files when scanning a large number of devices. As a consequence, mpathpersist terminated unexpectedly. With this update, mpathpersist now checks the max_fds configuration value and correctly sets the maximum number of open files. As a result, mpathpersist no longer fails when opening too many files. (BZ#1610263) The multipathd readsector0 checker now returns the correct result Previously, in some cases the multipathd daemon was incorrectly calculating the I/O size to use with the readsector0 checker, causing it to do a 0 size read. This could cause the multipathd readsector0 checker to return the wrong result. It is also possible that some SCSI devices do not treat a 0 size read command as valid. With this fix, multipathd now uses the correct size for the readsector0 checker. (BZ# 1584228 ) DM Multipath is much less likely to output an incorrect timeout error Previously, Device Mapper Multipath (DM Multipath) printed an error message after reconfiguring devices for more than 10 seconds. The error was displayed even if the reconfigure was progressing successfully. As a consequence, it sometimes seemed that the reconfigure failed when reconfiguring a large number of devices. With this update, the timeout limit has been increased from 10 to 60 seconds, and DM Multipath is now much less likely to print incorrect timeout errors when reconfiguring a large number of devices. (BZ# 1544958 ) multipath now correctly prints the sysfs state of paths Previously, the multipath -l command did not print the sysfs state of paths because the multipath utility did not correctly set path information. With this update, the problem has been fixed, and multipath now prints the sysfs state of paths correctly. (BZ#1526876) multipathd can now correctly set APTPL when registering keys on path devices Previously, the multipathd service did not track which devices registered their persistent reservation keys with the Activate Persist Through Power Loss (APTPL) option. As a consequence, registrations always lost the APTPL setting. With this update, the problem has been fixed: If you set the reservation_key option to a file in the multipath.conf configuration file, multipathd now keeps the APTPL setting automatically. If you set reservation_key to a specific key, you can now add the :aptpl string at the end of the key in reservation_key , which enables APTPL for it. Set this to match the APTPL setting used when registering the key. (BZ# 1498724 )
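As a hedged sketch of the two reservation_key forms described in the note above (the WWID and key value are placeholders, and the exact layout should be checked against the multipath.conf man page), the variants might look like this in /etc/multipath.conf:

defaults {
    # track registered keys in the prkeys file; multipathd keeps the APTPL setting automatically
    reservation_key file
}

multipaths {
    multipath {
        wwid 3600508b4000156d700012000000b0000
        # explicit key with APTPL enabled by appending :aptpl
        reservation_key 0x123abc:aptpl
    }
}

The :aptpl suffix should match the APTPL setting that was used when the key was registered on the path devices.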
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.6_release_notes/bug_fixes_storage
Chapter 4. Registering hosts and setting up host integration
Chapter 4. Registering hosts and setting up host integration You must register hosts that have not been provisioned through Satellite to be able to manage them with Satellite. You can register hosts through Satellite Server or Capsule Server. You must also install and configure tools on your hosts, depending on which integration features you want to use. Use the following procedures to install and configure host tools: Section 4.5, "Installing Tracer" Section 4.6, "Installing and configuring Puppet agent during host registration" Section 4.7, "Installing and configuring Puppet agent manually" 4.1. Supported clients in registration Satellite supports the following operating systems and architectures for registration. Supported host operating systems The hosts can use the following operating systems: Red Hat Enterprise Linux 9 and 8 Red Hat Enterprise Linux 7 and 6 with the ELS Add-On You can register the following hosts for converting to RHEL: CentOS Linux 7 Oracle Linux 7 and 8 Supported host architectures The hosts can use the following architectures: AMD and Intel 64-bit architectures The 64-bit ARM architecture IBM Power Systems, Little Endian 64-bit IBM Z architectures 4.2. Registration methods You can use the following methods to register hosts to Satellite: Global registration You generate a curl command from Satellite and run this command from an unlimited number of hosts to register them using provisioning templates over the Satellite API. For more information, see Section 4.3, "Registering hosts by using global registration" . By using this method, you can also deploy Satellite SSH keys to hosts during registration to Satellite to enable hosts for remote execution jobs. For more information, see Chapter 13, Configuring and setting up remote jobs . By using this method, you can also configure hosts with Red Hat Insights during registration to Satellite. For more information, see Chapter 10, Monitoring hosts by using Red Hat Insights . (Deprecated) Katello CA Consumer You download and install the consumer RPM from satellite.example.com /pub/katello-ca-consumer-latest.noarch.rpm on the host and then run subscription-manager . (Deprecated) Bootstrap script You download the bootstrap script from satellite.example.com /pub/bootstrap.py on the host and then run the script. For more information, see Section 4.4, "Registering hosts by using the bootstrap script" . 4.3. Registering hosts by using global registration You can register a host to Satellite by generating a curl command on Satellite and running this command on hosts. This method uses two provisioning templates: Global Registration template and Linux host_init_config default template. That gives you complete control over the host registration process. You can also customize the default templates if you need greater flexibility. For more information, see Section 4.3.5, "Customizing the registration templates" . 4.3.1. Global parameters for registration You can configure the following global parameters by navigating to Configure > Global Parameters : The host_registration_insights parameter is used in the insights snippet. If the parameter is set to true , the registration installs and enables the Red Hat Insights client on the host. If the parameter is set to false , it prevents Satellite and the Red Hat Insights client from uploading Inventory reports to your Red Hat Hybrid Cloud Console. The default value is true . When overriding the parameter value, set the parameter type to boolean . 
The host_packages parameter is for installing packages on the host. The host_registration_remote_execution parameter is used in the remote_execution_ssh_keys snippet. If it is set to true , the registration enables remote execution on the host. The default value is true . The remote_execution_ssh_keys , remote_execution_ssh_user , remote_execution_create_user , and remote_execution_effective_user_method parameters are used in the remote_execution_ssh_keys snippet. For more details, see the snippet. You can navigate to snippets in the Satellite web UI through Hosts > Templates > Provisioning Templates . 4.3.2. Configuring a host for registration Configure your host for registration to Satellite Server or Capsule Server. You can use a configuration management tool to configure multiple hosts at once. Prerequisites The host must be using a supported operating system. For more information, see Section 4.1, "Supported clients in registration" . The system clock on your Satellite Server and any Capsule Servers must be synchronized across the network. If the system clock is not synchronized, SSL certificate verification might fail. For example, you can use the Chrony suite for timekeeping. Procedure Enable and start a time-synchronization tool on your host. The host must be synchronized with the same NTP server as Satellite Server and any Capsule Servers. On Red Hat Enterprise Linux 7 and later: On Red Hat Enterprise Linux 6: Deploy the SSL CA file on your host so that the host can make a secured registration call. Find where Satellite stores the SSL CA file by navigating to Administer > Settings > Authentication and locating the value of the SSL CA file setting. Transfer the SSL CA file to your host securely, for example by using scp . Login to your host by using SSH. Copy the certificate to the truststore: Update the truststore: 4.3.3. Registering a host You can register a host by using registration templates and set up various integration features and host tools during the registration process. Prerequisites Your Satellite account has the Register hosts role assigned or a role with equivalent permissions. You must have root privileges on the host that you want to register. You have configured the host for registration. For more information, see Section 4.3.2, "Configuring a host for registration" . An activation key must be available for the host. For more information, see Managing Activation Keys in Managing content . Optional: If you want to register hosts to Red Hat Insights, you must synchronize the rhel-8-for-x86_64-baseos-rpms and rhel-8-for-x86_64-appstream-rpms repositories and make them available in the activation key that you use. This is required to install the insights-client package on hosts. Red Hat Satellite Client 6 repository for the operating system version of the host is synchronized on Satellite Server and enabled in the activation key you use. For more information, see Importing Content in Managing content . This repository is required for the remote execution pull client, Puppet agent, Tracer, and other tools. If you want to use Capsule Servers instead of your Satellite Server, ensure that you have configured your Capsule Servers accordingly. For more information, see Configuring Capsule for Host Registration and Provisioning in Installing Capsule Server . If your Satellite Server or Capsule Server is behind an HTTP proxy, configure the Subscription Manager on your host to use the HTTP proxy for connection. 
For more information, see How to access Red Hat Subscription Manager (RHSM) through a firewall or proxy in the Red Hat Knowledgebase . Procedure In the Satellite web UI, navigate to Hosts > Register Host . Enter the details for how you want the registered hosts to be configured. On the General tab, in the Activation Keys field, enter one or more activation keys to assign to hosts. Click Generate to generate a curl command. Run the curl command as root on the host that you want to register. After registration completes, any Ansible roles assigned to a host group you specified when configuring the registration template will run on the host. The registration details that you can specify include the following: On the General tab, in the Capsule field, you can select the Capsule to register hosts through. A Capsule behind a load balancer takes precedence over a Capsule selected in the Satellite web UI as the content source of the host. On the General tab, you can select the Insecure option to make the first call insecure. During this first call, the host downloads the CA file from Satellite. The host will use this CA file to connect to Satellite with all future calls making them secure. Red Hat recommends that you avoid insecure calls. If an attacker, located in the network between Satellite and a host, fetches the CA file from the first insecure call, the attacker will be able to access the content of the API calls to and from the registered host and the JSON Web Tokens (JWT). Therefore, if you have chosen to deploy SSH keys during registration, the attacker will be able to access the host using the SSH key. On the Advanced tab, in the Repositories field, you can list repositories to be added before the registration is performed. You do not have to specify repositories if you provide them in an activation key. On the Advanced tab, in the Token lifetime (hours) field, you can change the validity duration of the JSON Web Token (JWT) that Satellite uses for authentication. The duration of this token defines how long the generated curl command works. Note that Satellite applies the permissions of the user who generates the curl command to authorization of hosts. If the user loses or gains additional permissions, the permissions of the JWT change too. Therefore, do not delete, block, or change permissions of the user during the token duration. The scope of the JWTs is limited to the registration endpoints only and cannot be used anywhere else. CLI procedure Use the hammer host-registration generate-command to generate the curl command to register the host. On the host that you want to register, run the curl command as root . For more information, see the Hammer CLI help with hammer host-registration generate-command --help . Ansible procedure Use the redhat.satellite.registration_command module. For more information, see the Ansible module documentation with ansible-doc redhat.satellite.registration_command . API procedure Use the POST /api/registration_commands resource. For more information, see the full API reference at https://satellite.example.com/apidoc/v2.html . 4.3.4. Customizing host registration by using snippets You can customize the registration process by creating snippets with pre-defined names. The Global Registration template includes these snippets automatically. Therefore, you do not have to edit the template. 
To add custom steps to registration, create one or both of the following snippets: before_registration This snippet is loaded and executed by the Global Registration template before registering your host to Satellite. after_registration This snippet is loaded and executed by the Global Registration template after registering your host to Satellite. Ensure you name the snippets precisely. Otherwise, the Global Registration template cannot load them. Prerequisites Your Satellite account has a role that grants the permissions view_provisioning_templates , create_provisioning_templates , assign_organizations , and assign_locations . You have selected a particular organization and location context. Procedure In the Satellite web UI, navigate to Hosts > Templates > Provisioning Templates . Click Create Template . In the Name field, enter the name of the required snippet: before_registration or after_registration . In the template editor, create your snippet. On the Type tab, select Snippet . On the Locations tab, assign the snippet to required locations. On the Organizations tab, assign the snippet to required organizations. Click Submit . Additional resources Appendix B, Template writing reference 4.3.5. Customizing the registration templates You can customize the registration process by editing the provisioning templates. Note that all default templates in Satellite are locked. If you want to customize the registration templates, you must clone the default templates and edit the clones. Note Red Hat only provides support for the original unedited templates. Customized templates do not receive updates released by Red Hat. The registration process uses the following provisioning templates: The Global Registration template contains steps for registering hosts to Satellite. This template renders when hosts access the /register Satellite API endpoint. The Linux host_init_config default template contains steps for initial configuration of hosts after they are registered. Procedure Navigate to Hosts > Templates > Provisioning Templates . Search for the template you want to edit. In the row of the required template, click Clone . Edit the template as needed. For more information, see Appendix B, Template writing reference . Click Submit . Navigate to Administer > Settings > Provisioning . Change the following settings as needed: Point the Default Global registration template setting to your custom global registration template, Point the Default 'Host initial configuration' template setting to your custom initial configuration template. 4.4. Registering hosts by using the bootstrap script You can use the bootstrap script to automate content registration and Puppet configuration. Important The bootstrap script is a deprecated feature. Deprecated functionality is still included in Satellite and continues to be supported. However, it will be removed in a future release of this product and is not recommended for new deployments. Use Section 4.3, "Registering hosts by using global registration" instead. For the most recent list of major functionality that has been deprecated or removed within Satellite, refer to the Deprecated features section of the Satellite release notes. You can use the bootstrap script to register new hosts, or to migrate existing hosts from RHN, SAM, RHSM, or another Red Hat Satellite instance. The katello-client-bootstrap package is installed by default on Satellite Server's base operating system. 
The bootstrap.py script is installed in the /var/www/html/pub/ directory to make it available to hosts at satellite.example.com /pub/bootstrap.py . The script includes documentation in the /usr/share/doc/katello-client-bootstrap- version /README.md file. To use the bootstrap script, you must install it on the host. As the script is only required once, and only for the root user, you can place it in /root or /usr/local/sbin and remove it after use. This procedure uses /root . Limitations Reverse proxying on Capsules is disabled by default for security reasons. Therefore, the bootstrap script does not work if you register hosts through Capsule. Red Hat recommends using global registration to register hosts instead. Prerequisites You have a Satellite user with the permissions required to run the bootstrap script. The examples in this procedure specify the admin user. If this is not acceptable to your security policy, create a new role with the minimum permissions required and add it to the user that will run the script. For more information, see Section 4.4.1, "Setting permissions for the bootstrap script" . You have an activation key for your hosts with the Red Hat Satellite Client 6 repository enabled. For information on configuring activation keys, see Managing Activation Keys in Managing content . You have created a host group. For more information about creating host groups, see Section 3.2, "Creating a host group" . Puppet considerations If a host group is associated with a Puppet environment created inside a Production environment, Puppet fails to retrieve the Puppet CA certificate while registering a host from that host group. To create a suitable Puppet environment to be associated with a host group, follow these steps: Manually create a directory: In the Satellite web UI, navigate to Configure > Puppet ENC > Environments . Click Import environment from . The button name includes the FQDN of the internal or external Capsule. Choose the created directory and click Update . Procedure Log in to the host as the root user. Download the script: Make the script executable: Confirm that the script is executable by viewing the help text: On Red Hat Enterprise Linux 8: On other Red Hat Enterprise Linux versions: Enter the bootstrap command with values suitable for your environment. For the --server option, specify the FQDN of Satellite Server or a Capsule Server. For the --location , --organization , and --hostgroup options, use quoted names, not labels, as arguments to the options. For advanced use cases, see Section 4.4.2, "Advanced bootstrap script configuration" . On Red Hat Enterprise Linux 8, enter the following command: On Red Hat Enterprise Linux 6 or 7, enter the following command: Enter the password of the Satellite user you specified with the --login option. The script sends notices of progress to stdout . When prompted by the script, approve the host's Puppet certificate. In the Satellite web UI, navigate to Infrastructure > Capsules and find the Satellite or Capsule Server you specified with the --server option. From the list in the Actions column, select Certificates . In the Actions column, click Sign to approve the host's Puppet certificate. Return to the host to see the remainder of the bootstrap process completing. In the Satellite web UI, navigate to Hosts > All Hosts and ensure that the host is connected to the correct host group. Optional: After the host registration is complete, remove the script: 4.4.1. 
Setting permissions for the bootstrap script Use this procedure to configure a Satellite user with the permissions required to run the bootstrap script. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Administer > Users . Select an existing user by clicking the required Username . A new pane opens with tabs to modify information about the selected user. Alternatively, create a new user specifically for the purpose of running this script. Click the Roles tab. Select Edit hosts and Viewer from the Roles list. Important The Edit hosts role allows the user to edit and delete hosts as well as being able to add hosts. If this is not acceptable to your security policy, create a new role with the following permissions and assign it to the user: view_organizations view_locations view_domains view_hostgroups view_hosts view_architectures view_ptables view_operatingsystems create_hosts Click Submit . CLI procedure Create a role with the minimum permissions required by the bootstrap script. This example creates a role with the name Bootstrap : Assign the new role to an existing user: Alternatively, you can create a new user and assign this new role to them. For more information on creating users with Hammer, see Managing Users and Roles in Administering Red Hat Satellite . 4.4.2. Advanced bootstrap script configuration This section has more examples for using the bootstrap script to register or migrate a host. Warning These examples specify the admin Satellite user. If this is not acceptable to your security policy, create a new role with the minimum permissions required by the bootstrap script. For more information, see Section 4.4.1, "Setting permissions for the bootstrap script" . 4.4.2.1. Migrating a host from one Satellite to another Satellite Use the script with --force to remove the katello-ca-consumer-* packages from the old Satellite and install the katello-ca-consumer-* packages on the new Satellite. Procedure On Red Hat Enterprise Linux 8, enter the following command: On Red Hat Enterprise Linux 6 or 7, enter the following command: 4.4.2.2. Migrating a host from Red Hat Network (RHN) or Satellite 5 to Satellite The bootstrap script detects the presence of /etc/syconfig/rhn/systemid and a valid connection to RHN as an indicator that the system is registered to a legacy platform. The script then calls rhn-classic-migrate-to-rhsm to migrate the system from RHN. By default, the script does not delete the system's legacy profile due to auditing reasons. To remove the legacy profile, use --legacy-purge , and use --legacy-login to supply a user account that has appropriate permissions to remove a profile. Enter the user account password when prompted. Procedure On Red Hat Enterprise Linux 8, enter the following command: On Red Hat Enterprise Linux 6 or 7, enter the following command: 4.4.2.3. Registering a host to Satellite without Puppet By default, the bootstrap script configures the host for content management and configuration management. If you have an existing configuration management system and do not want to install Puppet on the host, use --skip-puppet . Procedure On Red Hat Enterprise Linux 8, enter the following command: On Red Hat Enterprise Linux 6 or 7, enter the following command: 4.4.2.4. Registering a host to Satellite for content management only To register a system as a content host, and omit the provisioning and configuration management functions, use --skip-foreman . 
Procedure On Red Hat Enterprise Linux 8, enter the following command: On Red Hat Enterprise Linux 6 or 7, enter the following command: 4.4.2.5. Changing the method the bootstrap script uses to download the consumer RPM By default, the bootstrap script uses HTTP to download the consumer RPM from http://satellite.example.com/pub/katello-ca-consumer-latest.noarch.rpm . In some environments, you might want to allow HTTPS only between the host and Satellite. Use --download-method to change the download method from HTTP to HTTPS. Procedure On Red Hat Enterprise Linux 8, enter the following command: On Red Hat Enterprise Linux 6 or 7, enter the following command: 4.4.2.6. Providing the host's IP address to Satellite On hosts with multiple interfaces or multiple IP addresses on one interface, you might need to override the auto-detection of the IP address and provide a specific IP address to Satellite. Use --ip . Procedure On Red Hat Enterprise Linux 8, enter the following command: On Red Hat Enterprise Linux 6 or 7, enter the following command: 4.4.2.7. Enabling remote execution on the host Use --rex and --rex-user to enable remote execution and add the required SSH keys for the specified user. Procedure On Red Hat Enterprise Linux 8, enter the following command: On Red Hat Enterprise Linux 6 or 7, enter the following command: 4.4.2.8. Creating a domain for a host during registration To create a host record, the DNS domain of a host needs to exist in Satellite prior to running the script. If the domain does not exist, add it using --add-domain . Procedure On Red Hat Enterprise Linux 8, enter the following command: On Red Hat Enterprise Linux 6 or 7, enter the following command: 4.4.2.9. Providing an alternative FQDN for the host If the host's host name is not an FQDN, or is not RFC-compliant (containing a character such as an underscore), the script will fail at the host name validation stage. If you cannot update the host to use an FQDN that is accepted by Satellite, you can use the bootstrap script to specify an alternative FQDN. Procedure Set create_new_host_when_facts_are_uploaded and create_new_host_when_report_is_uploaded to false using Hammer: Use --fqdn to specify the FQDN that will be reported to Satellite: On Red Hat Enterprise Linux 8, enter the following command: On Red Hat Enterprise Linux 6 or 7, enter the following command: 4.5. Installing Tracer Use this procedure to install Tracer on Red Hat Satellite and access Traces. Tracer displays a list of services and applications that are outdated and need to be restarted. Traces is the output generated by Tracer in the Satellite web UI. Prerequisites The host is registered to Red Hat Satellite. Red Hat Satellite Client 6 repository for the operating system version of the host is synchronized on Satellite Server, available in the content view and the lifecycle environment of the host, and enabled for the host. For more information, see Changing the repository sets status for a host in Satellite in Managing content . Procedure On the content host, install the katello-host-tools-tracer RPM package: Enter the following command: In the Satellite web UI, navigate to Hosts > All Hosts , then click the required host name. Click the Traces tab to view Traces. If it is not installed, an Enable Traces button initiates a remote execution job that installs the package. 4.6. Installing and configuring Puppet agent during host registration You can install and configure the Puppet agent on the host during registration. 
A configured Puppet agent is required on the host for Puppet integration with your Satellite. For more information about Puppet, see Managing configurations by using Puppet integration . Prerequisites Puppet must be enabled in your Satellite. For more information, see Enabling Puppet Integration with Satellite in Managing configurations by using Puppet integration . Red Hat Satellite Client 6 repository for the operating system version of the host is synchronized on Satellite Server and enabled in the activation key you use. For more information, see Importing Content in Managing content . You have an activation key. For more information, see Managing Activation Keys in Managing content . Procedure In the Satellite web UI, navigate to Configure > Global Parameters to add host parameters globally. Alternatively, you can navigate to Configure > Host Groups and edit or create a host group to add host parameters only to a host group. Enable the Puppet agent using a host parameter in global parameters or a host group. Add a host parameter named enable-puppet7 , select the boolean type, and set the value to true . Specify configuration for the Puppet agent using the following host parameters in global parameters or a host group: Add a host parameter named puppet_server , select the string type, and set the value to the hostname of your Puppet server, such as puppet.example.com . Optional: Add a host parameter named puppet_ca_server , select the string type, and set the value to the hostname of your Puppet CA server, such as puppet-ca.example.com . If puppet_ca_server is not set, the Puppet agent will use the same server as puppet_server . Optional: Add a host parameter named puppet_environment , select the string type, and set the value to the Puppet environment you want the host to use. Until the BZ2177730 is resolved, you must use host parameters to specify the Puppet agent configuration even in integrated setups where the Puppet server is a Capsule Server. Navigate to Hosts > Register Host and register your host using an appropriate activation key. For more information, see Registering Hosts in Managing hosts . Navigate to Infrastructure > Capsules . From the list in the Actions column for the required Capsule Server, select Certificates . Click Sign to the right of the required host to sign the SSL certificate for the Puppet agent. 4.7. Installing and configuring Puppet agent manually You can install and configure the Puppet agent on a host manually. A configured Puppet agent is required on the host for Puppet integration with your Satellite. For more information about Puppet, see Managing configurations by using Puppet integration . Prerequisites Puppet must be enabled in your Satellite. For more information, see Enabling Puppet Integration with Satellite in Managing configurations by using Puppet integration . The host must have a Puppet environment assigned to it. Red Hat Satellite Client 6 repository for the operating system version of the host is synchronized on Satellite Server, available in the content view and the lifecycle environment of the host, and enabled for the host. For more information, see Changing the repository sets status for a host in Satellite in Managing content . Procedure Log in to the host as the root user. Install the Puppet agent package. On hosts running Red Hat Enterprise Linux 8 and above: On hosts running Red Hat Enterprise Linux 7 and below: Add the Puppet agent to PATH in your current shell using the following script: Configure the Puppet agent. 
Set the environment parameter to the name of the Puppet environment to which the host belongs: Start the Puppet agent service: Create a certificate for the host: In the Satellite web UI, navigate to Infrastructure > Capsules . From the list in the Actions column for the required Capsule Server, select Certificates . Click Sign to the right of the required host to sign the SSL certificate for the Puppet agent. On the host, run the Puppet agent again: 4.8. Running Ansible roles during host registration You can run Ansible roles when you are registering a host to Satellite. Prerequisites The required Ansible roles have been imported from your Capsule to Satellite. For more information, see Importing Ansible roles and variables in Managing configurations by using Ansible integration . Procedure Create a host group with Ansible roles. For more information, see Section 3.2, "Creating a host group" . Register the host by using the host group with assigned Ansible roles. For more information, see Section 4.3.3, "Registering a host" . 4.9. Using custom SSL certificate for hosts You can use custom SSL certificate on your hosts to enable encrypted communications between Satellite Server, Capsule Server, and hosts. Before deploying it to your hosts, ensure that you have configured the custom SSL certificate to your Satellite Server. 4.9.1. Deploying a custom SSL certificate to hosts After you configure Satellite to use a custom SSL certificate, you must deploy the certificate to hosts registered to Satellite. Procedure Update the SSL certificate on each host: 4.10. Resetting custom SSL certificate to default self-signed certificate on hosts To reset the custom SSL certificate on your hosts to default self-signed certificate, you must re-register your hosts through Global Registration . For more information, see Section 4.3, "Registering hosts by using global registration" . Additional Resources Reset custom SSL certificate to default self-signed certificate on Satellite Server in Installing Satellite Server in a connected network environment . Reset custom SSL certificate to default self-signed certificate on Capsule Server in Installing Capsule Server .
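As a quick reference for the SSL certificate deployment step above, the following sketch shows what the update can look like on a Red Hat Enterprise Linux 8 host. The Satellite Server FQDN is the example value used throughout this chapter, and the verification step is an assumption based on the consumer RPM naming convention; adjust both for your environment.

```bash
# Sketch: deploy the CA bundle that carries the custom SSL certificate to a host
# registered to Satellite (on Red Hat Enterprise Linux 7, use yum instead of dnf)
dnf install http://satellite.example.com/pub/katello-ca-consumer-latest.noarch.rpm

# Assumption: confirm the consumer package (named after the Satellite FQDN) is installed
rpm -qa | grep katello-ca-consumer
```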
[ "systemctl enable --now chronyd", "chkconfig --add ntpd chkconfig ntpd on service ntpd start", "cp My_SSL_CA_file .pem /etc/pki/ca-trust/source/anchors", "update-ca-trust", "mkdir /etc/puppetlabs/code/environments/ example_environment", "curl -O http:// satellite.example.com /pub/bootstrap.py", "chmod +x bootstrap.py", "/usr/libexec/platform-python bootstrap.py -h", "./bootstrap.py -h", "/usr/libexec/platform-python bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \"", "./bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \"", "rm bootstrap.py", "ROLE='Bootstrap' hammer role create --name \"USDROLE\" hammer filter create --role \"USDROLE\" --permissions view_organizations hammer filter create --role \"USDROLE\" --permissions view_locations hammer filter create --role \"USDROLE\" --permissions view_domains hammer filter create --role \"USDROLE\" --permissions view_hostgroups hammer filter create --role \"USDROLE\" --permissions view_hosts hammer filter create --role \"USDROLE\" --permissions view_architectures hammer filter create --role \"USDROLE\" --permissions view_ptables hammer filter create --role \"USDROLE\" --permissions view_operatingsystems hammer filter create --role \"USDROLE\" --permissions create_hosts", "hammer user add-role --id user_id --role Bootstrap", "/usr/libexec/platform-python bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \" --force", "bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \" --force", "/usr/libexec/platform-python bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \" --legacy-purge --legacy-login rhn-user", "bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \" --legacy-purge --legacy-login rhn-user", "/usr/libexec/platform-python bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \" --skip-puppet", "bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \" --skip-puppet", "/usr/libexec/platform-python bootstrap.py --server satellite.example.com --organization=\" My_Organization \" --activationkey=\" My_Activation_Key \" --skip-foreman", "bootstrap.py --server satellite.example.com --organization=\" My_Organization \" --activationkey=\" My_Activation_Key \" --skip-foreman", "/usr/libexec/platform-python bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \" --download-method https", 
"bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \" --download-method https", "/usr/libexec/platform-python bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \" --ip 192.x.x.x", "bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \" --ip 192.x.x.x", "/usr/libexec/platform-python bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \" --rex --rex-user root", "bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \" --rex --rex-user root", "/usr/libexec/platform-python bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \" --add-domain", "bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \" --add-domain", "hammer settings set --name create_new_host_when_facts_are_uploaded --value false hammer settings set --name create_new_host_when_report_is_uploaded --value false", "/usr/libexec/platform-python bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \" --fqdn node100.example.com", "bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \" --fqdn node100.example.com", "yum install katello-host-tools-tracer", "katello-tracer-upload", "dnf install puppet-agent", "yum install puppet-agent", ". /etc/profile.d/puppet-agent.sh", "puppet config set server satellite.example.com --section agent puppet config set environment My_Puppet_Environment --section agent", "puppet resource service puppet ensure=running enable=true", "puppet ssl bootstrap", "puppet ssl bootstrap", "dnf install http:// satellite.example.com /pub/katello-ca-consumer-latest.noarch.rpm" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/managing_hosts/registering_hosts_to_server_managing-hosts
Chapter 5. Exposing the registry
Chapter 5. Exposing the registry By default, the OpenShift image registry is secured during cluster installation so that it serves traffic through TLS. Unlike versions of OpenShift Container Platform, the registry is not exposed outside of the cluster at the time of installation. 5.1. Exposing a default registry manually Instead of logging in to the default OpenShift image registry from within the cluster, you can gain external access to it by exposing it with a route. This external access enables you to log in to the registry from outside the cluster using the route address and to tag and push images to an existing project by using the route host. Prerequisites The following prerequisites are automatically performed: Deploy the Registry Operator. Deploy the Ingress Operator. You have access to the cluster as a user with the cluster-admin role. Procedure You can expose the route by using the defaultRoute parameter in the configs.imageregistry.operator.openshift.io resource. To expose the registry using the defaultRoute : Set defaultRoute to true by running the following command: USD oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{"spec":{"defaultRoute":true}}' --type=merge Get the default registry route by running the following command: USD HOST=USD(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}') Get the certificate of the Ingress Operator by running the following command: USD oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // "router-certs-default"' -r) -n openshift-ingress --confirm Move the extracted certificate to the system's trusted CA directory by running the following command: USD sudo mv tls.crt /etc/pki/ca-trust/source/anchors/ Enable the cluster's default certificate to trust the route by running the following command: USD sudo update-ca-trust enable Log in with podman using the default route by running the following command: USD sudo podman login -u kubeadmin -p USD(oc whoami -t) USDHOST 5.2. Exposing a secure registry manually Instead of logging in to the OpenShift image registry from within the cluster, you can gain external access to it by exposing it with a route. This allows you to log in to the registry from outside the cluster using the route address, and to tag and push images to an existing project by using the route host. Prerequisites The following prerequisites are automatically performed: Deploy the Registry Operator. Deploy the Ingress Operator. You have access to the cluster as a user with the cluster-admin role. Procedure You can expose the route by using DefaultRoute parameter in the configs.imageregistry.operator.openshift.io resource or by using custom routes. To expose the registry using DefaultRoute : Set DefaultRoute to True : USD oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{"spec":{"defaultRoute":true}}' --type=merge Log in with podman : USD HOST=USD(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}') USD podman login -u kubeadmin -p USD(oc whoami -t) --tls-verify=false USDHOST 1 1 --tls-verify=false is needed if the cluster's default certificate for routes is untrusted. You can set a custom, trusted certificate as the default certificate with the Ingress Operator. 
To expose the registry using custom routes: Create a secret with your route's TLS keys: USD oc create secret tls public-route-tls \ -n openshift-image-registry \ --cert=</path/to/tls.crt> \ --key=</path/to/tls.key> This step is optional. If you do not create a secret, the route uses the default TLS configuration from the Ingress Operator. On the Registry Operator: USD oc edit configs.imageregistry.operator.openshift.io/cluster spec: routes: - name: public-routes hostname: myregistry.mycorp.organization secretName: public-route-tls ... Note Only set secretName if you are providing a custom TLS configuration for the registry's route. Troubleshooting Error creating TLS secret
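After either exposure method, you can verify external access by tagging and pushing an image through the route. The following is a minimal sketch that assumes the HOST variable from the steps above, a project named test, and a pullable UBI image; these names are assumptions to replace with values that match your cluster and permissions.

```bash
# Sketch: push a test image to the exposed registry route (assumes a prior podman login)
oc new-project test
podman pull registry.access.redhat.com/ubi8/ubi
podman tag registry.access.redhat.com/ubi8/ubi "$HOST/test/ubi:latest"
podman push "$HOST/test/ubi:latest"
```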
[ "oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{\"spec\":{\"defaultRoute\":true}}' --type=merge", "HOST=USD(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')", "oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // \"router-certs-default\"' -r) -n openshift-ingress --confirm", "sudo mv tls.crt /etc/pki/ca-trust/source/anchors/", "sudo update-ca-trust enable", "sudo podman login -u kubeadmin -p USD(oc whoami -t) USDHOST", "oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{\"spec\":{\"defaultRoute\":true}}' --type=merge", "HOST=USD(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')", "podman login -u kubeadmin -p USD(oc whoami -t) --tls-verify=false USDHOST 1", "oc create secret tls public-route-tls -n openshift-image-registry --cert=</path/to/tls.crt> --key=</path/to/tls.key>", "oc edit configs.imageregistry.operator.openshift.io/cluster", "spec: routes: - name: public-routes hostname: myregistry.mycorp.organization secretName: public-route-tls" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/registry/securing-exposing-registry
28.2. How Password Policies Work in IdM
28.2. How Password Policies Work in IdM All users must have a password that they use to authenticate to the Identity Management (IdM) Kerberos domain. Password policies in IdM define the requirements these user passwords must meet. Note The IdM password policy is set in the underlying LDAP directory, but is also enforced by the Kerberos Key Distribution Center (KDC). 28.2.1. Supported Password Policy Attributes Table 28.1, "Password Policy Attributes" lists the attributes that password policies in IdM can define. Table 28.1. Password Policy Attributes Attribute Explanation Example Max lifetime The maximum amount of time (in days) that a password is valid before a user must reset it. Max lifetime = 90 User passwords are valid only for 90 days. After that, IdM prompts users to change them. Min lifetime The minimum amount of time (in hours) that must pass between two password change operations. Min lifetime = 1 After users change their passwords, they must wait at least 1 hour before changing them again. History size How many previous passwords are stored. A user cannot reuse a password from their password history. History size = 0 Users can reuse any of their previous passwords. Character classes The number of different character classes the user must use in the password. The character classes are: Uppercase characters Lowercase characters Digits Special characters, such as comma (,), period (.), asterisk (*) Other UTF-8 characters Using a character three or more times in a row decreases the character class by one. For example: Secret1 has 3 character classes: uppercase, lowercase, digits Secret111 has 2 character classes: uppercase, lowercase, digits, and a -1 penalty for using 1 repeatedly Character classes = 0 The default number of classes required is 0. To configure the number, run the ipa pwpolicy-mod command with the --minclasses option. This command sets the required number of character classes to 1: See also the Important note below this table. Min length The minimum number of characters in a password. Min length = 8 Users cannot use passwords shorter than 8 characters. Max failures The maximum number of failed login attempts before IdM locks the user account. See also Section 22.1.3, "Unlocking User Accounts After Password Failures". Max failures = 6 IdM locks the user account when the user enters a wrong password 7 times in a row. Failure reset interval The amount of time (in seconds) after which IdM resets the current number of failed login attempts. Failure reset interval = 60 If the user waits for more than 1 minute after the number of failed login attempts defined in Max failures, the user can attempt to log in again without risking a user account lock. Lockout duration The amount of time (in seconds) for which the user account is locked after the number of failed login attempts defined in Max failures. See also Section 22.1.3, "Unlocking User Accounts After Password Failures". Lockout duration = 600 Users with locked accounts are unable to log in for 10 minutes. Important Use the English alphabet and common symbols for the character classes requirement if you have a diverse set of hardware that may not have access to international characters and symbols. For more information on character class policies in passwords, see What characters are valid in a password? in Red Hat Knowledgebase. 28.2.2. Global and Group-specific Password Policies The default password policy is the global password policy. Apart from the global policy, you can create additional group password policies.
Global password policy Installing the initial IdM server automatically creates a global password policy with default settings. The global policy rules apply to all users without a group password policy. Group password policies Group password policies apply to all members of the corresponding user group. Only one password policy can be in effect at a time for any user. If a user has multiple password policies assigned, one of them takes precedence based on priority. See Section 28.2.3, "Password Policy Priorities". 28.2.3. Password Policy Priorities Every group password policy has a priority set. The lower the value, the higher the policy's priority. The lowest supported priority value is 0. If multiple password policies are applicable to a user, the policy with the lowest priority value takes precedence. All rules defined in other policies are ignored. The password policy with the lowest priority value applies to all password policy attributes, even the attributes that are not defined in the policy. The global password policy does not have a priority value set. It serves as a fallback policy when no group policy is set for a user. The global policy can never take precedence over a group policy. Table 28.2, "Example of Applying Password Policy Attributes Based on Priority" demonstrates how password policy priorities work, using the example of a user who belongs to two groups that each have a policy defined. Table 28.2. Example of Applying Password Policy Attributes Based on Priority Max lifetime Min length Policy for group A (priority 0) 60 10 Policy for group B (priority 1) 90 0 (no restriction) User (member of group A and group B) 60 10 Note The ipa pwpolicy-show --user=user_name command shows which policy is currently in effect for a particular user.
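To make the priority behavior concrete, the following sketch creates a group password policy similar to the group A policy in Table 28.2 and then checks which policy is in effect for a user. The group name, user name, and attribute values are illustrative assumptions; only the pwpolicy-show command is taken directly from the text above.

```bash
# Sketch: create a group password policy with the highest priority (0)
# using values from the "group A" example (group and user names are assumptions)
ipa pwpolicy-add group_a --priority=0 --maxlife=60 --minlength=10

# Check which password policy is currently in effect for a particular user
ipa pwpolicy-show --user=example_user
```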
[ "ipa pwpolicy-mod --minclasses= 1" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/pwd-policies-how
Chapter 14. Troubleshooting common installation problems
Chapter 14. Troubleshooting common installation problems If you are experiencing difficulties installing the Red Hat OpenShift Data Science Add-on, read this section to understand what could be causing the problem, and how to resolve the problem. If you cannot see the problem here or in the release notes, contact Red Hat Support. 14.1. The Red Hat OpenShift AI Operator cannot be retrieved from the image registry Problem When attempting to retrieve the Red Hat OpenShift AI Operator from the image registry, an Failure to pull from quay error message appears. The Red Hat OpenShift AI Operator might be unavailable for retrieval in the following circumstances: The image registry is unavailable. There is a problem with your network connection. Your cluster is not operational and is therefore unable to retrieve the image registry. Diagnosis Check the logs in the Events section in OpenShift Dedicated for further information about the Failure to pull from quay error message. Resolution To resolve this issue, contact Red Hat support. 14.2. OpenShift AI cannot be installed due to insufficient cluster resources Problem When attempting to install OpenShift AI, an error message appears stating that installation prerequisites have not been met. Diagnosis Log in to OpenShift Cluster Manager ( https://console.redhat.com/openshift/ ). Click Clusters . The Clusters page opens. Click the name of the cluster you want to install OpenShift AI on. The Details page for the cluster opens. Click the Add-ons tab and locate the Red Hat OpenShift Data Science tile. Click Install . The Configure Red Hat OpenShift Data Science pane appears. If the installation fails, click the Prerequisites tab. Note down the error message. If the error message states that you require a new machine pool, or that more resources are required, take the appropriate action to resolve the problem. Resolution You might need to add more resources to your cluster, or increase the size of your machine pool. To increase your cluster's resources, contact your infrastructure administrator. For more information about increasing the size of your machine pool, see Nodes and Allocating additional resources to OpenShift AI users . 14.3. The dedicated-admins Role-based access control (RBAC) policy cannot be created Problem The Role-based access control (RBAC) policy for the dedicated-admins group in the target project cannot be created. This issue occurs in unknown circumstances. Diagnosis In the OpenShift Dedicated web console, change into the Administrator perspective. Click Workloads Pods . Set the Project to All Projects or redhat-ods-operator . Click the rhods-operator-<random string> pod. The Pod details page appears. Click Logs . Select rhods-deployer from the drop-down list Check the log for the ERROR: Attempt to create the RBAC policy for dedicated admins group in USDtarget_project failed. error message. Resolution Contact Red Hat support. 14.4. OpenShift AI does not install on unsupported infrastructure Problem Customer deploying on an environment not documented as being supported by the RHODS operator. Diagnosis In the OpenShift Dedicated web console, change into the Administrator perspective. Click Workloads Pods . Set the Project to All Projects or redhat-ods-operator . Click the rhods-operator-<random string> pod. The Pod details page appears. Click Logs . Select rhods-deployer from the drop-down list Check the log for the ERROR: Deploying on USDinfrastructure, which is not supported. Failing Installation error message. 
Resolution Before proceeding with a new installation, ensure that you have a fully supported environment on which to install OpenShift AI. For more information, see Requirements for OpenShift AI . 14.5. The creation of the OpenShift AI Custom Resource (CR) fails Problem During the installation process, the OpenShift AI Custom Resource (CR) does not get created. This issue occurs in unknown circumstances. Diagnosis In the OpenShift Dedicated web console, change into the Administrator perspective. Click Workloads Pods . Set the Project to All Projects or redhat-ods-operator . Click the rhods-operator-<random string> pod. The Pod details page appears. Click Logs . Select rhods-deployer from the drop-down list Check the log for the ERROR: Attempt to create the ODH CR failed. error message. Resolution Contact Red Hat support. 14.6. The creation of the OpenShift AI Notebooks Custom Resource (CR) fails Problem During the installation process, the OpenShift AI Notebooks Custom Resource (CR) does not get created. This issue occurs in unknown circumstances. Diagnosis In the OpenShift Dedicated web console, change into the Administrator perspective. Click Workloads Pods . Set the Project to All Projects or redhat-ods-operator . Click the rhods-operator-<random string> pod. The Pod details page appears. Click Logs . Select rhods-deployer from the drop-down list Check the log for the ERROR: Attempt to create the RHODS Notebooks CR failed. error message. Resolution Contact Red Hat support. 14.7. The Dead Man's Snitch operator's secret does not get created Problem An issue with Managed Tenants SRE automation process causes the Dead Man's Snitch operator's secret to not get created. Diagnosis In the OpenShift Dedicated web console, change into the Administrator perspective. Click Workloads Pods . Set the Project to All Projects or redhat-ods-operator . Click the rhods-operator-<random string> pod. The Pod details page appears. Click Logs . Select rhods-deployer from the drop-down list Check the log for the ERROR: Dead Man Snitch secret does not exist. error message. Resolution Contact Red Hat support. 14.8. The PagerDuty secret does not get created Problem An issue with Managed Tenants SRE automation process causes the PagerDuty's secret to not get created. Diagnosis In the OpenShift Dedicated web console, change into the Administrator perspective. Click Workloads Pods . Set the Project to All Projects or redhat-ods-operator . Click the rhods-operator-<random string> pod. The Pod details page appears. Click Logs . Select rhods-deployer from the drop-down list Check the log for the ERROR: Pagerduty secret does not exist error message. Resolution Contact Red Hat support. 14.9. The SMTP secret does not exist Problem An issue with Managed Tenants SRE automation process causes the SMTP secret to not get created. Diagnosis In the OpenShift Dedicated web console, change into the Administrator perspective. Click Workloads Pods . Set the Project to All Projects or redhat-ods-operator . Click the rhods-operator-<random string> pod. The Pod details page appears. Click Logs . Select rhods-deployer from the drop-down list Check the log for the ERROR: SMTP secret does not exist error message. Resolution Contact Red Hat support. 14.10. The ODH parameter secret does not get created Problem An issue with the OpenShift Data Science Add-on's flow could result in the ODH parameter secret to not get created. Diagnosis In the OpenShift Dedicated web console, change into the Administrator perspective. Click Workloads Pods . 
Set the Project to All Projects or redhat-ods-operator . Click the rhods-operator-<random string> pod. The Pod details page appears. Click Logs . Select rhods-deployer from the drop-down list Check the log for the ERROR: Addon managed odh parameter secret does not exist. error message. Resolution Contact Red Hat support.
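Each diagnosis above boils down to reading the rhods-deployer container logs of the operator pod. If you prefer the oc CLI to the web console, a rough equivalent is sketched below; the namespace and container name follow the steps above, while the exact pod lookup is an assumption about your deployment.

```bash
# Sketch: read the deployer logs from the CLI and filter for the ERROR messages
# listed in the sections above
POD=$(oc get pods -n redhat-ods-operator -o name | grep rhods-operator | head -n1)
oc logs -n redhat-ods-operator "$POD" -c rhods-deployer | grep ERROR
```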
null
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/installing_the_openshift_ai_cloud_service/troubleshooting-common-installation-problems_install
Part IV. Publishing Applications to a Container
Part IV. Publishing Applications to a Container To publish Fuse Integration projects to a server container, you must first add the server and its runtime definition to the tooling's Servers list. Then you can assign projects to the server runtime and set the publishing options for it.
null
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/tooling_user_guide/riderdeploypart
Chapter 13. Scoping tokens
Chapter 13. Scoping tokens 13.1. About scoping tokens You can create scoped tokens to delegate some of your permissions to another user or service account. For example, a project administrator might want to delegate the power to create pods. A scoped token is a token that identifies as a given user but is limited to certain actions by its scope. Only a user with the cluster-admin role can create scoped tokens. Scopes are evaluated by converting the set of scopes for a token into a set of PolicyRules . Then, the request is matched against those rules. The request attributes must match at least one of the scope rules to be passed to the "normal" authorizer for further authorization checks. 13.1.1. User scopes User scopes are focused on getting information about a given user. They are intent-based, so the rules are automatically created for you: user:full - Allows full read/write access to the API with all of the user's permissions. user:info - Allows read-only access to information about the user, such as name and groups. user:check-access - Allows access to self-localsubjectaccessreviews and self-subjectaccessreviews . These are the variables where you pass an empty user and groups in your request object. user:list-projects - Allows read-only access to list the projects the user has access to. 13.1.2. Role scope The role scope allows you to have the same level of access as a given role filtered by namespace. role:<cluster-role name>:<namespace or * for all> - Limits the scope to the rules specified by the cluster-role, but only in the specified namespace . Note Caveat: This prevents escalating access. Even if the role allows access to resources like secrets, rolebindings, and roles, this scope will deny access to those resources. This helps prevent unexpected escalations. Many people do not think of a role like edit as being an escalating role, but with access to a secret it is. role:<cluster-role name>:<namespace or * for all>:! - This is similar to the example above, except that including the bang causes this scope to allow escalating access.
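As an illustration of requesting and inspecting scoped tokens, the sketch below asks the cluster OAuth server for a token limited to the user:info scope and then lists existing tokens with their scopes. Treat the OAuth route host, client ID, and query flow as assumptions to verify against your cluster; only the scope names themselves come from the lists above.

```bash
# Sketch (assumptions noted above): request a token limited to the user:info scope.
# Look up the OAuth route host first, for example:
#   oc get route oauth-openshift -n openshift-authentication
curl -sk -o /dev/null -D - -u <user>:<password> -H "X-CSRF-Token: 1" \
  "https://oauth-openshift.apps.example.com/oauth/authorize?response_type=token&client_id=openshift-challenging-client&scope=user:info"
# The scoped token appears in the Location header of the 302 response (access_token=...).

# List your OAuth access tokens and the scopes attached to them
oc get useroauthaccesstokens
```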
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/authentication_and_authorization/tokens-scoping
Chapter 6. Creating the control plane
Chapter 6. Creating the control plane The Red Hat OpenStack Services on OpenShift (RHOSO) control plane contains the RHOSO services that manage the cloud. The RHOSO services run as a Red Hat OpenShift Container Platform (RHOCP) workload. Note Creating the control plane also creates an OpenStackClient pod that you can access through a remote shell ( rsh ) to run OpenStack CLI commands. 6.1. Prerequisites The OpenStack Operator ( openstack-operator ) is installed. For more information, see Installing and preparing the Operators . The RHOCP cluster is prepared for RHOSO networks. For more information, see Preparing RHOCP for RHOSO networks . The RHOCP cluster is not configured with any network policies that prevent communication between the openstack-operators namespace and the control plane namespace (default openstack ). Use the following command to check the existing network policies on the cluster: You are logged on to a workstation that has access to the RHOCP cluster, as a user with cluster-admin privileges. 6.2. Creating the control plane Define an OpenStackControlPlane custom resource (CR) to perform the following tasks: Create the control plane. Enable the Red Hat OpenStack Services on OpenShift (RHOSO) services. The following procedure creates an initial control plane with the recommended configurations for each service. The procedure helps you quickly create an operating control plane environment that you can use to troubleshoot issues and test the environment before adding all the customizations you require. You can add service customizations to a deployed environment. For more information about how to customize your control plane after deployment, see the Customizing the Red Hat OpenStack Services on OpenShift deployment guide. For an example OpenStackControlPlane CR, see Example OpenStackControlPlane CR . Tip Use the following commands to view the OpenStackControlPlane CRD definition and specification schema: Procedure Create a file on your workstation named openstack_control_plane.yaml to define the OpenStackControlPlane CR: Specify the Secret CR you created to provide secure access to the RHOSO service pods in Providing secure access to the Red Hat OpenStack Services on OpenShift services : Specify the storageClass you created for your Red Hat OpenShift Container Platform (RHOCP) cluster storage back end: Replace <RHOCP_storage_class> with the storage class you created for your RHOCP cluster storage back end. For information about storage classes, see Creating a storage class . Add the following service configurations: Note The following service snippets use IP addresses from the default RHOSO MetalLB IPAddressPool range for the loadBalancerIPs field. Update the loadBalancerIPs field with the IP address from the MetalLB IPAddressPool range you created in step 12 of the Preparing RHOCP for RHOSO networks procedure. Block Storage service (cinder): 1 You can deploy the initial control plane without activating the cinderBackup service. To deploy the service, you must set the number of replicas for the service and configure the back end for the service. For information about the recommended replicas for each service and how to configure a back end for the Block Storage service and the backup service, see Configuring the Block Storage backup service in Configuring persistent storage . 2 You can deploy the initial control plane without activating the cinderVolumes service. 
To deploy the service, you must set the number of replicas for the service and configure the back end for the service. For information about the recommended replicas for the cinderVolumes service and how to configure a back end for the service, see Configuring the volume service in Configuring persistent storage . Compute service (nova): Note A full set of Compute services (nova) are deployed by default for each of the default cells, cell0 and cell1 : nova-api , nova-metadata , nova-scheduler , and nova-conductor . The novncproxy service is also enabled for cell1 by default. DNS service for the data plane: 1 Defines the dnsmasq instances required for each DNS server by using key-value pairs. In this example, there are two key-value pairs defined because there are two DNS servers configured to forward requests to. 2 Specifies the dnsmasq parameter to customize for the deployed dnsmasq instance. Set to one of the following valid values: server rev-server srv-host txt-record ptr-record rebind-domain-ok naptr-record cname host-record caa-record dns-rr auth-zone synth-domain no-negcache local 3 Specifies the values for the dnsmasq parameter. You can specify a generic DNS server as the value, for example, 1.1.1.1 , or a DNS server for a specific domain, for example, /google.com/8.8.8.8 . Identity service (keystone) Image service (glance): 1 You can deploy the initial control plane without activating the Image service (glance). To deploy the Image service, you must set the number of replicas for the service and configure the back end for the service. For information about the recommended replicas for the Image service and how to configure a back end for the service, see Configuring the Image service (glance) in Configuring persistent storage . If you do not deploy the Image service, you cannot upload images to the cloud or start an instance. Key Management service (barbican): Networking service (neutron): Object Storage service (swift): OVN: Placement service (placement): Telemetry service (ceilometer, prometheus): 1 You must have the autoscaling field present, even if autoscaling is disabled. Add the following service configurations to implement high availability (HA): A MariaDB Galera cluster for use by all RHOSO services ( openstack ), and a MariaDB Galera cluster for use by the Compute service for cell1 ( openstack-cell1 ): A single memcached cluster that contains three memcached servers: A RabbitMQ cluster for use by all RHOSO services ( rabbitmq ), and a RabbitMQ cluster for use by the Compute service for cell1 ( rabbitmq-cell1 ): Note Multiple RabbitMQ instances cannot share the same VIP as they use the same port. If you need to expose multiple RabbitMQ instances to the same network, then you must use distinct IP addresses. Create the control plane: Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status: The OpenStackControlPlane resources are created when the status is "Setup complete". Tip Append the -w option to the end of the get command to track deployment progress. Note Creating the control plane also creates an OpenStackClient pod that you can access through a remote shell ( rsh ) to run OpenStack CLI commands. Optional: Confirm that the control plane is deployed by reviewing the pods in the openstack namespace: The control plane is deployed when all the pods are either completed or running. 
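If you want to follow the rollout continuously instead of rerunning the status checks, the -w watch flag mentioned in the tip can be applied to both commands, as in this short sketch; the namespace follows the examples in this procedure.

```bash
# Sketch: watch the control plane status until it reports "Setup complete",
# then watch the service pods come up
oc get openstackcontrolplane -n openstack -w
oc get pods -n openstack -w
```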
Verification Open a remote shell connection to the OpenStackClient pod: Confirm that the internal service endpoints are registered with each service: Exit the OpenStackClient pod: 6.3. Example OpenStackControlPlane CR The following example OpenStackControlPlane CR is a complete control plane configuration that includes all the key services that must always be enabled for a successful deployment. 1 The storage class that you created for your Red Hat OpenShift Container Platform (RHOCP) cluster storage back end. 2 Service-specific parameters for the Block Storage service (cinder). 3 The Block Storage service back end. For more information on configuring storage services, see the Configuring persistent storage guide. 4 The Block Storage service configuration. For more information on configuring storage services, see the Configuring persistent storage guide. 5 The list of networks that each service pod is directly attached to, specified by using the NetworkAttachmentDefinition resource names. A NIC is configured for the service for each specified network attachment. Note If you do not configure the isolated networks that each service pod is attached to, then the default pod network is used. For example, the Block Storage service uses the storage network to connect to a storage back end; the Identity service (keystone) uses an LDAP or Active Directory (AD) network; the ovnDBCluster service uses the internalapi network; and the ovnController service uses the tenant network. 6 Service-specific parameters for the Compute service (nova). 7 Service API route definition. You can customize the service route by using route-specific annotations. For more information, see Route-specific annotations in the RHOCP Networking guide. Set route: to {} to apply the default route template. 8 The internal service API endpoint registered as a MetalLB service with the IPAddressPool internalapi . 9 The virtual IP (VIP) address for the service. The IP is shared with other services by default. 10 The RabbitMQ instances exposed to an isolated network with distinct IP addresses defined in the loadBalancerIPs annotation, as indicated in 11 and 12 . Note Multiple RabbitMQ instances cannot share the same VIP as they use the same port. If you need to expose multiple RabbitMQ instances to the same network, then you must use distinct IP addresses. 11 The distinct IP address for a RabbitMQ instance that is exposed to an isolated network. 12 The distinct IP address for a RabbitMQ instance that is exposed to an isolated network. 6.4. Removing a service from the control plane You can completely remove a service and the service database from the control plane after deployment by disabling the service. Many services are enabled by default, which means that the OpenStack Operator creates resources such as the service database and Identity service (keystone) users, even if no service pod is created because replicas is set to 0 . Warning Remove a service with caution. Removing a service is not the same as stopping service pods. Removing a service is irreversible. Disabling a service removes the service database and any resources that referenced the service are no longer tracked. Create a backup of the service database before removing a service. Procedure Open the OpenStackControlPlane CR file on your workstation. Locate the service you want to remove from the control plane and disable it: Update the control plane: Wait until RHOCP removes the resource related to the disabled service. 
Run the following command to check the status: The OpenStackControlPlane resource is updated with the disabled service when the status is "Setup complete". Tip Append the -w option to the end of the get command to track deployment progress. Optional: Confirm that the pods from the disabled service are no longer listed by reviewing the pods in the openstack namespace: Check that the service is removed: This command returns the following message when the service is successfully removed: Check that the API endpoints for the service are removed from the Identity service (keystone): This command returns the following message when the API endpoints for the service are successfully removed: 6.5. Additional resources Kubernetes NMState Operator The Kubernetes NMState project Load balancing with MetalLB MetalLB documentation MetalLB in layer 2 mode Specify network interfaces that LB IP can be announced from Multiple networks Using the Multus CNI in OpenShift macvlan plugin whereabouts IPAM CNI plugin - Extended configuration About advertising for the IP address pools Dynamic provisioning Configuring the Block Storage backup service in Configuring persistent storage . Configuring the Image service (glance) in Configuring persistent storage .
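For reference, the removal checks in the procedure above map to commands like the following, using the Block Storage service (cinder) from the example; swap the resource kind and service type when you remove a different service.

```bash
# Sketch: confirm the cinder resources and API endpoints are gone
oc get cinder -n openstack
oc rsh -n openstack openstackclient openstack endpoint list --service volumev3
```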
[ "oc get networkpolicy -n openstack", "oc describe crd openstackcontrolplane oc explain openstackcontrolplane.spec", "apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack-control-plane namespace: openstack", "apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack-control-plane namespace: openstack spec: secret: osp-secret", "spec: secret: osp-secret storageClass: <RHOCP_storage_class>", "cinder: apiOverride: route: {} template: databaseInstance: openstack secret: osp-secret cinderAPI: replicas: 3 override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer cinderScheduler: replicas: 1 cinderBackup: networkAttachments: - storage replicas: 0 1 cinderVolumes: volume1: networkAttachments: - storage replicas: 0 2", "nova: apiOverride: route: {} template: apiServiceTemplate: replicas: 3 override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer metadataServiceTemplate: replicas: 3 override: service: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer schedulerServiceTemplate: replicas: 3 cellTemplates: cell0: cellDatabaseAccount: nova-cell0 cellDatabaseInstance: openstack cellMessageBusInstance: rabbitmq hasAPIAccess: true cell1: cellDatabaseAccount: nova-cell1 cellDatabaseInstance: openstack-cell1 cellMessageBusInstance: rabbitmq-cell1 noVNCProxyServiceTemplate: enabled: true networkAttachments: - ctlplane hasAPIAccess: true secret: osp-secret", "dns: template: options: 1 - key: server 2 values: 3 - 192.168.122.1 - key: server values: - 192.168.122.2 override: service: metadata: annotations: metallb.universe.tf/address-pool: ctlplane metallb.universe.tf/allow-shared-ip: ctlplane metallb.universe.tf/loadBalancerIPs: 192.168.122.80 spec: type: LoadBalancer replicas: 2", "keystone: apiOverride: route: {} template: override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer databaseInstance: openstack secret: osp-secret replicas: 3", "glance: apiOverrides: default: route: {} template: databaseInstance: openstack storage: storageRequest: 10G secret: osp-secret keystoneEndpoint: default glanceAPIs: default: replicas: 0 # Configure back end; set to 3 when deploying service 1 override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer networkAttachments: - storage", "barbican: apiOverride: route: {} template: databaseInstance: openstack secret: osp-secret barbicanAPI: replicas: 3 override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer barbicanWorker: replicas: 3 barbicanKeystoneListener: replicas: 1", "neutron: apiOverride: route: {} template: replicas: 3 override: service: internal: metadata: 
annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer databaseInstance: openstack secret: osp-secret networkAttachments: - internalapi", "swift: enabled: true proxyOverride: route: {} template: swiftProxy: networkAttachments: - storage override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer replicas: 2 secret: osp-secret swiftRing: ringReplicas: 3 swiftStorage: networkAttachments: - storage replicas: 3 storageRequest: 10Gi", "ovn: template: ovnDBCluster: ovndbcluster-nb: replicas: 3 dbType: NB storageRequest: 10G networkAttachment: internalapi ovndbcluster-sb: replcas: 3 dbType: SB storageRequest: 10G networkAttachment: internalapi ovnNorthd: {}", "placement: apiOverride: route: {} template: override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer databaseInstance: openstack replicas: 3 secret: osp-secret", "telemetry: enabled: true template: metricStorage: enabled: true dashboardsEnabled: true monitoringStack: alertingEnabled: true scrapeInterval: 30s storage: strategy: persistent retention: 24h persistent: pvcStorageRequest: 20G autoscaling: 1 enabled: false aodh: databaseAccount: aodh databaseInstance: openstack passwordSelector: aodhService: AodhPassword rabbitMqClusterName: rabbitmq serviceUser: aodh secret: osp-secret heatInstance: heat ceilometer: enabled: true secret: osp-secret logging: enabled: false ipaddr: 172.17.0.80", "galera: templates: openstack: storageRequest: 5000M secret: osp-secret replicas: 3 openstack-cell1: storageRequest: 5000M secret: osp-secret replicas: 3", "memcached: templates: memcached: replicas: 3", "rabbitmq: templates: rabbitmq: replicas: 3 override: service: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.85 spec: type: LoadBalancer rabbitmq-cell1: replicas: 3 override: service: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.86 spec: type: LoadBalancer", "oc create -f openstack_control_plane.yaml -n openstack", "oc get openstackcontrolplane -n openstack NAME STATUS MESSAGE openstack-control-plane Unknown Setup started", "oc rsh -n openstack openstackclient", "oc get pods -n openstack", "oc rsh -n openstack openstackclient", "openstack endpoint list -c 'Service Name' -c Interface -c URL --service glance +--------------+-----------+---------------------------------------------------------------+ | Service Name | Interface | URL | +--------------+-----------+---------------------------------------------------------------+ | glance | internal | https://glance-internal.openstack.svc | | glance | public | https://glance-default-public-openstack.apps.ostest.test.metalkube.org | +--------------+-----------+---------------------------------------------------------------+", "exit", "apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack-control-plane namespace: openstack spec: secret: osp-secret storageClass: your-RHOCP-storage-class 1 cinder: 2 apiOverride: route: {} template: databaseInstance: openstack secret: osp-secret cinderAPI: replicas: 3 
override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer cinderScheduler: replicas: 1 cinderBackup: 3 networkAttachments: - storage replicas: 0 # backend needs to be configured to activate the service cinderVolumes: 4 volume1: networkAttachments: 5 - storage replicas: 0 # backend needs to be configured to activate the service nova: 6 apiOverride: 7 route: {} template: apiServiceTemplate: replicas: 3 override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi 8 metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 9 spec: type: LoadBalancer metadataServiceTemplate: replicas: 3 override: service: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer schedulerServiceTemplate: replicas: 3 cellTemplates: cell0: cellDatabaseAccount: nova-cell0 cellDatabaseInstance: openstack cellMessageBusInstance: rabbitmq hasAPIAccess: true cell1: cellDatabaseAccount: nova-cell1 cellDatabaseInstance: openstack-cell1 cellMessageBusInstance: rabbitmq-cell1 noVNCProxyServiceTemplate: enabled: true networkAttachments: - ctlplane hasAPIAccess: true secret: osp-secret dns: template: options: - key: server values: - 192.168.122.1 - key: server values: - 192.168.122.2 override: service: metadata: annotations: metallb.universe.tf/address-pool: ctlplane metallb.universe.tf/allow-shared-ip: ctlplane metallb.universe.tf/loadBalancerIPs: 192.168.122.80 spec: type: LoadBalancer replicas: 2 galera: templates: openstack: storageRequest: 5000M secret: osp-secret replicas: 3 openstack-cell1: storageRequest: 5000M secret: osp-secret replicas: 3 keystone: apiOverride: route: {} template: override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer databaseInstance: openstack secret: osp-secret replicas: 3 glance: apiOverrides: default: route: {} template: databaseInstance: openstack storage: storageRequest: 10G secret: osp-secret keystoneEndpoint: default glanceAPIs: default: replicas: 0 # Configure back end; set to 3 when deploying service override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer networkAttachments: - storage barbican: apiOverride: route: {} template: databaseInstance: openstack secret: osp-secret barbicanAPI: replicas: 3 override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer barbicanWorker: replicas: 3 barbicanKeystoneListener: replicas: 1 memcached: templates: memcached: replicas: 3 neutron: apiOverride: route: {} template: replicas: 3 override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer databaseInstance: openstack secret: osp-secret networkAttachments: - internalapi swift: enabled: true 
proxyOverride: route: {} template: swiftProxy: networkAttachments: - storage override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer replicas: 1 swiftRing: ringReplicas: 1 swiftStorage: networkAttachments: - storage replicas: 1 storageRequest: 10Gi ovn: template: ovnDBCluster: ovndbcluster-nb: replicas: 3 dbType: NB storageRequest: 10G networkAttachment: internalapi ovndbcluster-sb: replicas: 3 dbType: SB storageRequest: 10G networkAttachment: internalapi ovnNorthd: {} placement: apiOverride: route: {} template: override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer databaseInstance: openstack replicas: 3 secret: osp-secret rabbitmq: 10 templates: rabbitmq: replicas: 3 override: service: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.85 11 spec: type: LoadBalancer rabbitmq-cell1: replicas: 3 override: service: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.86 12 spec: type: LoadBalancer telemetry: enabled: true template: metricStorage: enabled: true dashboardsEnabled: true monitoringStack: alertingEnabled: true scrapeInterval: 30s storage: strategy: persistent retention: 24h persistent: pvcStorageRequest: 20G autoscaling: enabled: false aodh: databaseAccount: aodh databaseInstance: openstack passwordSelector: aodhService: AodhPassword rabbitMqClusterName: rabbitmq serviceUser: aodh secret: osp-secret heatInstance: heat ceilometer: enabled: true secret: osp-secret logging: enabled: false ipaddr: 172.17.0.80", "cinder: enabled: false apiOverride: route: {}", "oc apply -f openstack_control_plane.yaml -n openstack", "oc get openstackcontrolplane -n openstack NAME STATUS MESSAGE openstack-control-plane Unknown Setup started", "oc get pods -n openstack", "oc get cinder -n openstack", "No resources found in openstack namespace.", "oc rsh -n openstack openstackclient openstack endpoint list --service volumev3", "No service with a type, name or ID of 'volumev3' exists." ]
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/deploying_a_dynamic_routing_environment/assembly_creating-the-control-plane
Chapter 2. Installing a cluster with z/VM on IBM Z and IBM LinuxONE
Chapter 2. Installing a cluster with z/VM on IBM Z and IBM LinuxONE In OpenShift Container Platform version 4.15, you can install a cluster on IBM Z(R) or IBM(R) LinuxONE infrastructure that you provision. Note While this document refers only to IBM Z(R), all information in it also applies to IBM(R) LinuxONE. Important Additional considerations exist for non-bare metal platforms. Review the information in the guidelines for deploying OpenShift Container Platform on non-tested platforms before you install an OpenShift Container Platform cluster. 2.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . Before you begin the installation process, you must clean the installation directory. This ensures that the required installation files are created and updated during the installation process. You provisioned persistent storage using OpenShift Data Foundation or other supported storage protocols for your cluster. To deploy a private image registry, you must set up persistent storage with ReadWriteMany access. If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 2.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.15, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 2.3. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 2.3.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 2.1. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. 
Important To improve high availability of your cluster, distribute the control plane machines over different z/VM instances on at least two physical machines. The bootstrap, control plane, and compute machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 9.2 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 2.3.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 2.2. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) Bootstrap RHCOS 4 16 GB 100 GB N/A Control plane RHCOS 4 16 GB 100 GB N/A Compute RHCOS 2 8 GB 100 GB N/A One physical core (IFL) provides two logical cores (threads) when SMT-2 is enabled. The hypervisor can provide two or more vCPUs. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 2.3.3. Minimum IBM Z system environment You can install OpenShift Container Platform version 4.15 on the following IBM(R) hardware: IBM(R) z16 (all models), IBM(R) z15 (all models), IBM(R) z14 (all models) IBM(R) LinuxONE 4 (all models), IBM(R) LinuxONE III (all models), IBM(R) LinuxONE Emperor II, IBM(R) LinuxONE Rockhopper II Hardware requirements The equivalent of six Integrated Facilities for Linux (IFL), which are SMT2 enabled, for each cluster. At least one network connection to both connect to the LoadBalancer service and to serve data for traffic outside the cluster. Note You can use dedicated or shared IFLs to assign sufficient compute resources. Resource sharing is one of the key strengths of IBM Z(R). However, you must adjust capacity correctly on each hypervisor layer and ensure sufficient resources for every OpenShift Container Platform cluster. Important Since the overall performance of the cluster can be impacted, the LPARs that are used to set up the OpenShift Container Platform clusters must provide sufficient compute capacity. In this context, LPAR weight management, entitlements, and CPU shares on the hypervisor level play an important role. Operating system requirements One instance of z/VM 7.2 or later On your z/VM instance, set up: Three guest virtual machines for OpenShift Container Platform control plane machines Two guest virtual machines for OpenShift Container Platform compute machines One guest virtual machine for the temporary OpenShift Container Platform bootstrap machine IBM Z network connectivity requirements To install on IBM Z(R) under z/VM, you require a single z/VM virtual NIC in layer 2 mode. You also need: A direct-attached OSA or RoCE network adapter A z/VM VSWITCH in layer 2 Ethernet mode set up. Disk storage FICON attached disk storage (DASDs). 
These can be z/VM minidisks, fullpack minidisks, or dedicated DASDs, all of which must be formatted as CDL, which is the default. To reach the minimum required DASD size for Red Hat Enterprise Linux CoreOS (RHCOS) installations, you need extended address volumes (EAV). If available, use HyperPAV to ensure optimal performance. FCP attached disk storage Storage / Main Memory 16 GB for OpenShift Container Platform control plane machines 8 GB for OpenShift Container Platform compute machines 16 GB for the temporary OpenShift Container Platform bootstrap machine 2.3.4. Preferred IBM Z system environment Hardware requirements Three LPARS that each have the equivalent of six IFLs, which are SMT2 enabled, for each cluster. Two network connections to both connect to the LoadBalancer service and to serve data for traffic outside the cluster. HiperSockets that are attached to a node either directly as a device or by bridging with one z/VM VSWITCH to be transparent to the z/VM guest. To directly connect HiperSockets to a node, you must set up a gateway to the external network via a RHEL 8 guest to bridge to the HiperSockets network. Operating system requirements Two or three instances of z/VM 7.2 or later for high availability On your z/VM instances, set up: Three guest virtual machines for OpenShift Container Platform control plane machines, one per z/VM instance. At least six guest virtual machines for OpenShift Container Platform compute machines, distributed across the z/VM instances. One guest virtual machine for the temporary OpenShift Container Platform bootstrap machine. To ensure the availability of integral components in an overcommitted environment, increase the priority of the control plane by using the CP command SET SHARE . Do the same for infrastructure nodes, if they exist. See SET SHARE (IBM(R) Documentation). IBM Z network connectivity requirements To install on IBM Z(R) under z/VM, you require a single z/VM virtual NIC in layer 2 mode. You also need: A direct-attached OSA or RoCE network adapter A z/VM VSWITCH in layer 2 Ethernet mode set up. Disk storage FICON attached disk storage (DASDs). These can be z/VM minidisks, fullpack minidisks, or dedicated DASDs, all of which must be formatted as CDL, which is the default. To reach the minimum required DASD size for Red Hat Enterprise Linux CoreOS (RHCOS) installations, you need extended address volumes (EAV). If available, use HyperPAV to ensure optimal performance. FCP attached disk storage Storage / Main Memory 16 GB for OpenShift Container Platform control plane machines 8 GB for OpenShift Container Platform compute machines 16 GB for the temporary OpenShift Container Platform bootstrap machine 2.3.5. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. Additional resources See Bridging a HiperSockets LAN with a z/VM Virtual Switch in IBM(R) Documentation. 
See Scaling HyperPAV alias devices on Linux guests on z/VM for performance optimization. See Topics in LPAR performance for LPAR weight management and entitlements. Recommended host practices for IBM Z(R) & IBM(R) LinuxONE environments 2.3.6. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. During the initial boot, the machines require an HTTP or HTTPS server to establish a network connection to download their Ignition config files. The machines are configured with static IP addresses. No DHCP server is required. Ensure that the machines have persistent IP addresses and hostnames. The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. 2.3.6.1. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Important In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat. Table 2.3. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If an external NTP time server is configured, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 2.4. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 2.5. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports NTP configuration for user-provisioned infrastructure OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service . Additional resources Configuring chrony time service 2.3.7. User-provisioned DNS requirements In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard The bootstrap, control plane, and compute machines Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. 
DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. The following DNS records are required for a user-provisioned OpenShift Container Platform cluster and they must be in place before installation. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 2.6. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the hostnames that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Bootstrap machine bootstrap.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster. Control plane machines <control_plane><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster. Compute machines <compute><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. Note In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration. Tip You can use the dig command to verify name and reverse name resolution. See the section on Validating DNS resolution for user-provisioned infrastructure for detailed validation steps. 2.3.7.1. Example DNS configuration for user-provisioned clusters This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. The samples are not meant to provide advice for choosing one DNS solution over another. In the examples, the cluster name is ocp4 and the base domain is example.com . 
Example DNS A record configuration for a user-provisioned cluster The following example is a BIND zone file that shows sample A records for name resolution in a user-provisioned cluster. Example 2.1. Sample DNS zone database USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF 1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer. 2 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications. 3 Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 4 Provides name resolution for the bootstrap machine. 5 6 7 Provides name resolution for the control plane machines. 8 9 Provides name resolution for the compute machines. Example DNS PTR record configuration for a user-provisioned cluster The following example BIND zone file shows sample PTR records for reverse name resolution in a user-provisioned cluster. Example 2.2. Sample DNS zone database for reverse records USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF 1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer. 2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications. 3 Provides reverse DNS resolution for the bootstrap machine. 4 5 6 Provides reverse DNS resolution for the control plane machines. 7 8 Provides reverse DNS resolution for the compute machines. 
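If you serve these records with BIND, as the sample zone files assume, you can sanity-check the zone syntax before loading it into the name server. The following commands are a sketch only: the zone names mirror the ocp4.example.com examples above, while the file paths and the named service name are assumptions about your BIND layout.

# Check the forward and reverse zone files for syntax errors (paths are examples):
named-checkzone ocp4.example.com /var/named/ocp4.example.com.zone
named-checkzone 1.168.192.in-addr.arpa /var/named/1.168.192.in-addr.arpa.zone

# If both checks report "OK", reload the name server so the zones take effect:
systemctl reload named

A zone that loads cleanly here does not guarantee that the records are correct for your cluster; the lookups described in the Validating DNS resolution for user-provisioned infrastructure section remain the authoritative check.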
Note A PTR record is not required for the OpenShift Container Platform application wildcard. 2.3.8. Load balancing requirements for user-provisioned infrastructure Before you install OpenShift Container Platform, you must provision the API and application Ingress load balancing infrastructure. In production scenarios, you can deploy the API and application Ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. Configuring session persistence for a Kubernetes API server might cause performance issues from excess application traffic for your OpenShift Container Platform cluster and the Kubernetes API that runs inside the cluster. Configure the following ports on both the front and back of the load balancers: Table 2.7. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Tip If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 2.8. Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. 
X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. 2.3.8.1. Example load balancer configuration for user-provisioned clusters This section provides an example API and application Ingress load balancer configuration that meets the load balancing requirements for user-provisioned clusters. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . Example 2.3. Sample API and application Ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s 1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 2 4 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete. 3 Port 22623 handles the machine config server traffic and points to the control plane machines. 5 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 6 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 
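As a quick way to exercise the health probe that the api-server-6443 listener in the sample relies on, you can query the /readyz endpoint of a control plane machine directly from the load balancer host. This is a sketch: the hostname follows the examples in this section, and it assumes the endpoint answers unauthenticated health checks, which is what the httpchk option in the sample configuration expects.

# Query the Kubernetes API server readiness endpoint (hostname is an example):
curl -k https://master0.ocp4.example.com:6443/readyz
# A ready API server returns an HTTP 200 response with the body "ok".

If the probe fails, check DNS resolution and firewall rules for port 6443 before adjusting the load balancer configuration.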
Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. 2.4. Preparing the user-provisioned infrastructure Before you install OpenShift Container Platform on user-provisioned infrastructure, you must prepare the underlying infrastructure. This section provides details about the high-level steps required to set up your cluster infrastructure in preparation for an OpenShift Container Platform installation. This includes configuring IP networking and network connectivity for your cluster nodes, preparing a web server for the Ignition files, enabling the required ports through your firewall, and setting up the required DNS and load balancing infrastructure. After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements for a cluster with user-provisioned infrastructure section. Prerequisites You have reviewed the OpenShift Container Platform 4.x Tested Integrations page. You have reviewed the infrastructure requirements detailed in the Requirements for a cluster with user-provisioned infrastructure section. Procedure Set up static IP addresses. Set up an HTTP or HTTPS server to provide Ignition files to the cluster nodes. Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the Networking requirements for user-provisioned infrastructure section for details about the requirements. Configure your firewall to enable the ports required for the OpenShift Container Platform cluster components to communicate. See Networking requirements for user-provisioned infrastructure section for details about the ports that are required. Important By default, port 1936 is accessible for an OpenShift Container Platform cluster, because each control plane node needs access to this port. Avoid using the Ingress load balancer to expose this port, because doing so might result in the exposure of sensitive information, such as statistics and metrics, related to Ingress Controllers. Setup the required DNS infrastructure for your cluster. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines. Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. See the User-provisioned DNS requirements section for more information about the OpenShift Container Platform DNS requirements. Validate your DNS configuration. From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses in the responses correspond to the correct components. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names in the responses correspond to the correct components. See the Validating DNS resolution for user-provisioned infrastructure section for detailed DNS validation steps. Provision the required API and application ingress load balancing infrastructure. 
See the Load balancing requirements for user-provisioned infrastructure section for more information about the requirements. Note Some load balancing solutions require the DNS name resolution for the cluster nodes to be in place before the load balancing is initialized. 2.5. Validating DNS resolution for user-provisioned infrastructure You can validate your DNS configuration before installing OpenShift Container Platform on user-provisioned infrastructure. Important The validation steps detailed in this section must succeed before you install your cluster. Prerequisites You have configured the required DNS records for your user-provisioned infrastructure. Procedure From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses contained in the responses correspond to the correct components. Perform a lookup against the Kubernetes API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1 1 Replace <nameserver_ip> with the IP address of the nameserver, <cluster_name> with your cluster name, and <base_domain> with your base domain name. Example output api.ocp4.example.com. 604800 IN A 192.168.1.5 Perform a lookup against the Kubernetes internal API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain> Example output api-int.ocp4.example.com. 604800 IN A 192.168.1.5 Test an example *.apps.<cluster_name>.<base_domain> DNS wildcard lookup. All of the application wildcard lookups must resolve to the IP address of the application ingress load balancer: USD dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain> Example output random.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Note In the example outputs, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. You can replace random with another wildcard value. For example, you can query the route to the OpenShift Container Platform console: USD dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain> Example output console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Run a lookup against the bootstrap DNS record name. Check that the result points to the IP address of the bootstrap node: USD dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain> Example output bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96 Use this method to perform lookups against the DNS record names for the control plane and compute nodes. Check that the results correspond to the IP addresses of each node. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names contained in the responses correspond to the correct components. Perform a reverse lookup against the IP address of the API load balancer. Check that the response includes the record names for the Kubernetes API and the Kubernetes internal API: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.5 Example output 5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 
1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2 1 Provides the record name for the Kubernetes internal API. 2 Provides the record name for the Kubernetes API. Note A PTR record is not required for the OpenShift Container Platform application wildcard. No validation step is needed for reverse DNS resolution against the IP address of the application ingress load balancer. Perform a reverse lookup against the IP address of the bootstrap node. Check that the result points to the DNS record name of the bootstrap node: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.96 Example output 96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com. Use this method to perform reverse lookups against the IP addresses for the control plane and compute nodes. Check that the results correspond to the DNS record names of each node. 2.6. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. 
The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 2.7. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on your provisioning machine. Prerequisites You have a machine that runs Linux, for example Red Hat Enterprise Linux (RHEL) 8, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 2.8. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.15. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . 
To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.15 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 2.9. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. Additional resources Installation configuration parameters for IBM Z(R) 2.9.1. Sample install-config.yaml file for IBM Z You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. 
apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{"auths": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 6 Specifies whether to enable or disable simultaneous multithreading (SMT), or hyperthreading. By default, SMT is enabled to increase the performance of the cores in your machines. You can disable it by setting the parameter value to Disabled . If you disable SMT, you must disable it in all cluster machines; this includes both control plane and compute machines. Note Simultaneous multithreading (SMT) is enabled by default. If SMT is not available on your OpenShift Container Platform nodes, the hyperthreading parameter has no effect. Important If you disable hyperthreading , whether on your OpenShift Container Platform nodes or in the install-config.yaml file, ensure that your capacity planning accounts for the dramatically decreased machine performance. 4 You must set this value to 0 when you install OpenShift Container Platform on user-provisioned infrastructure. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. In user-provisioned installations, you must manually deploy the compute machines before you finish installing the cluster. Note If you are installing a three-node cluster, do not deploy any compute machines when you install the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 7 The number of control plane machines that you add to the cluster. Because the cluster uses these values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 8 The cluster name that you specified in your DNS records. 9 A block of IP addresses from which pod IP addresses are allocated. This block must not overlap with existing physical networks. These IP addresses are used for the pod network. If you need to access the pods from an external network, you must configure load balancers and routers to manage the traffic. Note Class E CIDR range is reserved for a future use. To use the Class E CIDR range, you must ensure your networking environment accepts the IP addresses within the Class E CIDR range. 10 The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 , then each node is assigned a /23 subnet out of the given cidr , which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic. 11 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 12 The IP address pool to use for service IP addresses. You can enter only one IP address pool. 
This block must not overlap with existing physical networks. If you need to access the services from an external network, configure load balancers and routers to manage the traffic. 13 You must set the platform to none . You cannot provide additional platform configuration variables for IBM Z(R) infrastructure. Important Clusters that are installed with the platform type none are unable to use some features, such as managing compute machines with the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that would normally support the feature. This parameter cannot be changed after installation. 14 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 15 The pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 16 The SSH public key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 2.9.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. 
For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 2.9.3. Configuring a three-node cluster Optionally, you can deploy zero compute machines in a minimal three node cluster that consists of three control plane machines only. This provides smaller, more resource efficient clusters for cluster administrators and developers to use for testing, development, and production. In three-node OpenShift Container Platform environments, the three control plane machines are schedulable, which means that your application workloads are scheduled to run on them. Prerequisites You have an existing install-config.yaml file. Procedure Ensure that the number of compute replicas is set to 0 in your install-config.yaml file, as shown in the following compute stanza: compute: - name: worker platform: {} replicas: 0 Note You must set the value of the replicas parameter for the compute machines to 0 when you install OpenShift Container Platform on user-provisioned infrastructure, regardless of the number of compute machines you are deploying. 
In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. This does not apply to user-provisioned installations, where the compute machines are deployed manually. Note The preferred resource for control plane nodes is six vCPUs and 21 GB. For three control plane nodes this is the memory + vCPU equivalent of a minimum five-node cluster. You should back the three nodes, each installed on a 120 GB disk, with three IFLs that are SMT2 enabled. The minimum tested setup is three vCPUs and 10 GB on a 120 GB disk for each control plane node. For three-node cluster installations, follow these steps: If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. See the Load balancing requirements for user-provisioned infrastructure section for more information. When you create the Kubernetes manifest files in the following procedure, ensure that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml file is set to true . This enables your application workloads to run on the control plane nodes. Do not deploy any compute nodes when you create the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 2.10. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network plugin. OVNKubernetes is the only supported plugin during installation. You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 2.10.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 2.9. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 spec.serviceNetwork array A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the network plugin for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. 
If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect. Important For a cluster that needs to deploy objects across multiple networks, ensure that you specify the same value for the clusterNetwork.hostPrefix parameter for each network type that is defined in the install-config.yaml file. Setting a different value for each clusterNetwork.hostPrefix parameter can impact the OVN-Kubernetes network plugin, where the plugin cannot effectively route object traffic among different nodes. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 2.10. defaultNetwork object Field Type Description type string OVNKubernetes . The Red Hat OpenShift Networking network plugin is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OVN-Kubernetes network plugin by default. OpenShift SDN is no longer available as an installation choice for new clusters. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes network plugin. Configuration for the OVN-Kubernetes network plugin The following table describes the configuration fields for the OVN-Kubernetes network plugin: Table 2.11. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify a configuration object for customizing the IPsec configuration. ipv4 object Specifies a configuration object for IPv4 settings. ipv6 object Specifies a configuration object for IPv6 settings. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the defaults audit log settings are used. gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. Table 2.12. ovnKubernetesConfig.ipv4 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the 100.88.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster. The default value is 100.88.0.0/16 . 
internalJoinSubnet string If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. For example, if the clusterNetwork.cidr value is 10.128.0.0/14 and the clusterNetwork.hostPrefix value is /23 , then the maximum number of nodes is 2^(23-14)=512 . The default value is 100.64.0.0/16 . Table 2.13. ovnKubernetesConfig.ipv6 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the fd98::/48 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. This field cannot be changed after installation. The default value is fd98::/48 . internalJoinSubnet string If your existing network infrastructure overlaps with the fd98::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. The default value is fd98::/64 . Table 2.14. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. maxLogFiles integer The maximum number of log files that are retained. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 2.15. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. ipForwarding object You can control IP forwarding for all traffic on OVN-Kubernetes managed interfaces by using the ipForwarding specification in the Network resource. Specify Restricted to only allow IP forwarding for Kubernetes related traffic. 
Specify Global to allow forwarding of all IP traffic. For new installations, the default is Restricted . For updates to OpenShift Container Platform 4.14 or later, the default is Global . ipv4 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv4 addresses. ipv6 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv6 addresses. Table 2.16. gatewayConfig.ipv4 object Field Type Description internalMasqueradeSubnet string The masquerade IPv4 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is 169.254.169.0/29 . Table 2.17. gatewayConfig.ipv6 object Field Type Description internalMasqueradeSubnet string The masquerade IPv6 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is fd69::/125 . Table 2.18. ipsecConfig object Field Type Description mode string Specifies the behavior of the IPsec implementation. Must be one of the following values: Disabled : IPsec is not enabled on cluster nodes. External : IPsec is enabled for network traffic with external hosts. Full : IPsec is enabled for pod traffic and network traffic with external hosts. Example OVN-Kubernetes configuration with IPSec enabled defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full Important Using OVNKubernetes can lead to a stack exhaustion problem on IBM Power(R). kubeProxyConfig object configuration (OpenShiftSDN container network interface only) The values for the kubeProxyConfig object are defined in the following table: Table 2.19. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . The default value is: kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s 2.11. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. 
The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Note The installation program that generates the manifest and Ignition files is architecture specific and can be obtained from the client image mirror . The Linux version of the installation program runs on s390x only. This installer program is also available as a Mac OS version. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Warning If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable. Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become compute nodes. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: 2.12. Configuring NBDE with static IP in an IBM Z or IBM LinuxONE environment Enabling NBDE disk encryption in an IBM Z(R) or IBM(R) LinuxONE environment requires additional steps, which are described in detail in this section. Prerequisites You have set up the External Tang Server. See Network-bound disk encryption for instructions. You have installed the butane utility. You have reviewed the instructions for how to create machine configs with Butane. Procedure Create Butane configuration files for the control plane and compute nodes. 
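Note You later convert each Butane file into a MachineConfig manifest with the butane utility listed in the prerequisites. A minimal sketch of that conversion for the master-storage.bu file created in the following example (the output file name is an illustrative choice, not mandated by this documentation):
USD butane master-storage.bu -o master-storage.yaml
Handle the resulting MachineConfig as described in Creating machine configs with Butane, which is linked under Additional resources at the end of this section.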
The following example of a Butane configuration for a control plane node creates a file named master-storage.bu for disk encryption: variant: openshift version: 4.15.0 metadata: name: master-storage labels: machineconfiguration.openshift.io/role: master storage: luks: - clevis: tang: - thumbprint: QcPr_NHFJammnRCA3fFMVdNBwjs url: http://clevis.example.com:7500 options: 1 - --cipher - aes-cbc-essiv:sha256 device: /dev/disk/by-partlabel/root 2 label: luks-root name: root wipe_volume: true filesystems: - device: /dev/mapper/root format: xfs label: root wipe_filesystem: true openshift: fips: true 3 1 The cipher option is only required if FIPS mode is enabled. Omit the entry if FIPS is disabled. 2 For installations on DASD-type disks, replace with device: /dev/disk/by-label/root . 3 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Create a customized initramfs file to boot the machine, by running the following command: USD coreos-installer pxe customize \ /root/rhcos-bootfiles/rhcos-<release>-live-initramfs.s390x.img \ --dest-device /dev/disk/by-id/scsi-<serial_number> --dest-karg-append \ ip=<ip_address>::<gateway_ip>:<subnet_mask>::<network_device>:none \ --dest-karg-append nameserver=<nameserver_ip> \ --dest-karg-append rd.neednet=1 -o \ /root/rhcos-bootfiles/<node_name>-initramfs.s390x.img Note Before first boot, you must customize the initramfs for each node in the cluster, and add PXE kernel parameters. Create a parameter file that includes ignition.platform.id=metal and ignition.firstboot . Example kernel parameter file for the control plane machine: rd.neednet=1 \ console=ttysclp0 \ coreos.inst.install_dev=/dev/dasda \ 1 ignition.firstboot ignition.platform.id=metal \ coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \ 2 coreos.inst.ignition_url=http://<http_server>/master.ign \ 3 ip=10.19.17.2::10.19.17.1:255.255.255.0::enbdd0:none nameserver=10.19.17.1 \ zfcp.allow_lun_scan=0 \ 4 rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 \ rd.zfcp=0.0.5677,0x600606680g7f0056,0x034F000000000000 \ 5 1 For installations on DASD-type disks, add coreos.inst.install_dev=/dev/dasda . Omit this value for FCP-type disks. 2 Specify the location of the rootfs artifact for the kernel and initramfs you are booting. Only HTTP and HTTPS protocols are supported. 3 Specify the location of the Ignition config file. Use master.ign or worker.ign . Only HTTP and HTTPS protocols are supported. 4 For installations on FCP-type disks, add zfcp.allow_lun_scan=0 . Omit this value for DASD-type disks. 5 For installations on DASD-type disks, replace with rd.dasd=0.0.3490 to specify the DASD device. Note Write all options in the parameter file as a single line and make sure you have no newline characters. Additional resources Creating machine configs with Butane 2.13. Installing RHCOS and starting the OpenShift Container Platform bootstrap process To install OpenShift Container Platform on IBM Z(R) infrastructure that you provision, you must install Red Hat Enterprise Linux CoreOS (RHCOS) on z/VM guest virtual machines. When you install RHCOS, you must provide the Ignition config file that was generated by the OpenShift Container Platform installation program for the type of machine you are installing. 
If you have configured suitable networking, DNS, and load balancing infrastructure, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS z/VM guest virtual machines have rebooted. Complete the following steps to create the machines. Prerequisites An HTTP or HTTPS server running on your provisioning machine that is accessible to the machines you create. Procedure Log in to Linux on your provisioning machine. Obtain the Red Hat Enterprise Linux CoreOS (RHCOS) kernel, initramfs, and rootfs files from the RHCOS image mirror . Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate kernel, initramfs, and rootfs artifacts described in the following procedure. The file names contain the OpenShift Container Platform version number. They resemble the following examples: kernel: rhcos-<version>-live-kernel-<architecture> initramfs: rhcos-<version>-live-initramfs.<architecture>.img rootfs: rhcos-<version>-live-rootfs.<architecture>.img Note The rootfs image is the same for FCP and DASD. Create parameter files. The following parameters are specific for a particular virtual machine: For ip= , specify the following seven entries: The IP address for the machine. An empty string. The gateway. The netmask. The machine host and domain name in the form hostname.domainname . Omit this value to let RHCOS decide. The network interface name. Omit this value to let RHCOS decide. If you use static IP addresses, specify none . For coreos.inst.ignition_url= , specify the Ignition file for the machine role. Use bootstrap.ign , master.ign , or worker.ign . Only HTTP and HTTPS protocols are supported. For coreos.live.rootfs_url= , specify the matching rootfs artifact for the kernel and initramfs you are booting. Only HTTP and HTTPS protocols are supported. For installations on DASD-type disks, complete the following tasks: For coreos.inst.install_dev= , specify /dev/dasda . Use rd.dasd= to specify the DASD where RHCOS is to be installed. Leave all other parameters unchanged. Example parameter file, bootstrap-0.parm , for the bootstrap machine: rd.neednet=1 \ console=ttysclp0 \ coreos.inst.install_dev=/dev/dasda \ coreos.live.rootfs_url=http://cl1.provide.example.com:8080/assets/rhcos-live-rootfs.s390x.img \ coreos.inst.ignition_url=http://cl1.provide.example.com:8080/ignition/bootstrap.ign \ ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 \ rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 \ zfcp.allow_lun_scan=0 \ rd.dasd=0.0.3490 Write all options in the parameter file as a single line and make sure you have no newline characters. For installations on FCP-type disks, complete the following tasks: Use rd.zfcp=<adapter>,<wwpn>,<lun> to specify the FCP disk where RHCOS is to be installed. For multipathing repeat this step for each additional path. Note When you install with multiple paths, you must enable multipathing directly after the installation, not at a later point in time, as this can cause problems. Set the install device as: coreos.inst.install_dev=/dev/disk/by-id/scsi-<serial_number> . Note If additional LUNs are configured with NPIV, FCP requires zfcp.allow_lun_scan=0 . 
If you must enable zfcp.allow_lun_scan=1 because you use a CSI driver, for example, you must configure your NPIV so that each node cannot access the boot partition of another node. Leave all other parameters unchanged. Important Additional postinstallation steps are required to fully enable multipathing. For more information, see "Enabling multipathing with kernel arguments on RHCOS" in Postinstallation machine configuration tasks . The following is an example parameter file worker-1.parm for a worker node with multipathing: rd.neednet=1 \ console=ttysclp0 \ coreos.inst.install_dev=/dev/disk/by-id/scsi-<serial_number> \ coreos.live.rootfs_url=http://cl1.provide.example.com:8080/assets/rhcos-live-rootfs.s390x.img \ coreos.inst.ignition_url=http://cl1.provide.example.com:8080/ignition/worker.ign \ ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 \ rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 \ zfcp.allow_lun_scan=0 \ rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 \ rd.zfcp=0.0.19C7,0x50050763070bc5e3,0x4008400B00000000 \ rd.zfcp=0.0.1987,0x50050763071bc5e3,0x4008400B00000000 \ rd.zfcp=0.0.19C7,0x50050763071bc5e3,0x4008400B00000000 Write all options in the parameter file as a single line and make sure you have no newline characters. Transfer the initramfs, kernel, parameter files, and RHCOS images to z/VM, for example with FTP. For details about how to transfer the files with FTP and boot from the virtual reader, see Installing under Z/VM . Punch the files to the virtual reader of the z/VM guest virtual machine that is to become your bootstrap node. See PUNCH in IBM Documentation. Tip You can use the CP PUNCH command or, if you use Linux, the vmur command to transfer files between two z/VM guest virtual machines. Log in to CMS on the bootstrap machine. IPL the bootstrap machine from the reader: See IPL in IBM Documentation. Repeat this procedure for the other machines in the cluster. 2.13.1. Advanced RHCOS installation reference This section illustrates the networking configuration and other advanced options that allow you to modify the Red Hat Enterprise Linux CoreOS (RHCOS) manual installation process. The following tables describe the kernel arguments and command-line options you can use with the RHCOS live installer and the coreos-installer command. 2.13.1.1. Networking and bonding options for ISO installations If you install RHCOS from an ISO image, you can add kernel arguments manually when you boot the image to configure networking for a node. If no networking arguments are specified, DHCP is activated in the initramfs when RHCOS detects that networking is required to fetch the Ignition config file. Important When adding networking arguments manually, you must also add the rd.neednet=1 kernel argument to bring the network up in the initramfs. The following information provides examples for configuring networking and bonding on your RHCOS nodes for ISO installations. The examples describe how to use the ip= , nameserver= , and bond= kernel arguments. Note Ordering is important when adding the kernel arguments: ip= , nameserver= , and then bond= . The networking options are passed to the dracut tool during system boot. For more information about the networking options supported by dracut , see the dracut.cmdline manual page . The following examples are the networking options for ISO installation. 
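For quick reference, the static ip= entries in these examples follow the colon-separated layout described earlier in this chapter, with the second field left empty (the angle-bracket names are placeholders, not literal values):
ip=<ip_address>::<gateway>:<netmask>:<hostname>:<interface>:none nameserver=<dns_ip>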
Configuring DHCP or static IP addresses To configure an IP address, either use DHCP ( ip=dhcp ) or set an individual static IP address ( ip=<host_ip> ). If setting a static IP, you must then identify the DNS server IP address ( nameserver=<dns_ip> ) on each node. The following example sets: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The hostname to core0.example.com The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41 Note When you use DHCP to configure IP addressing for the RHCOS machines, the machines also obtain the DNS server information through DHCP. For DHCP-based deployments, you can define the DNS server address that is used by the RHCOS nodes through your DHCP server configuration. Configuring an IP address without a static hostname You can configure an IP address without assigning a static hostname. If a static hostname is not set by the user, it will be picked up and automatically set by a reverse DNS lookup. To configure an IP address without a static hostname refer to the following example: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41 Specifying multiple network interfaces You can specify multiple network interfaces by setting multiple ip= entries. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring default gateway and route Optional: You can configure routes to additional networks by setting an rd.route= value. Note When you configure one or multiple networks, one default gateway is required. If the additional network gateway is different from the primary network gateway, the default gateway must be the primary network gateway. Run the following command to configure the default gateway: ip=::10.10.10.254:::: Enter the following command to configure the route for the additional network: rd.route=20.20.20.0/24:20.20.20.254:enp2s0 Disabling DHCP on a single interface You can disable DHCP on a single interface, such as when there are two or more network interfaces and only one interface is being used. In the example, the enp1s0 interface has a static networking configuration and DHCP is disabled for enp2s0 , which is not used: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none Combining DHCP and static IP configurations You can combine DHCP and static IP configurations on systems with multiple network interfaces, for example: ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring VLANs on individual interfaces Optional: You can configure VLANs on individual interfaces by using the vlan= parameter. 
To configure a VLAN on a network interface and use a static IP address, run the following command: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0 To configure a VLAN on a network interface and to use DHCP, run the following command: ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0 Providing multiple DNS servers You can provide multiple DNS servers by adding a nameserver= entry for each server, for example: nameserver=1.1.1.1 nameserver=8.8.8.8 Bonding multiple network interfaces to a single interface Optional: You can bond multiple network interfaces to a single interface by using the bond= option. Refer to the following examples: The syntax for configuring a bonded interface is: bond=<name>[:<network_interfaces>][:options] <name> is the bonding device name ( bond0 ), <network_interfaces> represents a comma-separated list of physical (ethernet) interfaces ( em1,em2 ), and options is a comma-separated list of bonding options. Enter modinfo bonding to see available options. When you create a bonded interface using bond= , you must specify how the IP address is assigned and other information for the bonded interface. To configure the bonded interface to use DHCP, set the bond's IP address to dhcp . For example: bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp To configure the bonded interface to use a static IP address, enter the specific IP address you want and related information. For example: bond=bond0:em1,em2:mode=active-backup,fail_over_mac=1 ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none Always set the fail_over_mac=1 option in active-backup mode to avoid problems when shared OSA/RoCE cards are used. Configuring VLANs on bonded interfaces Optional: You can configure VLANs on bonded interfaces by using the vlan= parameter. Use the following example to configure the bonded interface with a VLAN and to use DHCP: ip=bond0.100:dhcp bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0 Use the following example to configure the bonded interface with a VLAN and to use a static IP address: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0 Using network teaming Optional: You can use network teaming as an alternative to bonding by using the team= parameter: The syntax for configuring a team interface is: team=name[:network_interfaces] name is the team device name ( team0 ) and network_interfaces represents a comma-separated list of physical (ethernet) interfaces ( em1, em2 ). Note Teaming is planned to be deprecated when RHCOS switches to an upcoming version of RHEL. For more information, see this Red Hat Knowledgebase Article . Use the following example to configure a network team: team=team0:em1,em2 ip=team0:dhcp 2.14. Waiting for the bootstrap process to complete The OpenShift Container Platform bootstrap process begins after the cluster nodes first boot into the persistent RHCOS environment that has been installed to disk. The configuration information provided through the Ignition config files is used to initialize the bootstrap process and install OpenShift Container Platform on the machines. You must wait for the bootstrap process to complete. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS, and load balancing infrastructure. You have obtained the installation program and generated the Ignition config files for your cluster.
You installed RHCOS on your cluster machines and provided the Ignition config files that the OpenShift Container Platform installation program generated. Your machines have direct internet access or have an HTTP or HTTPS proxy available. Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... INFO API v1.28.5 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. After the bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the bootstrap machine itself. 2.15. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 2.16. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.28.5 master-1 Ready master 63m v1.28.5 master-2 Ready master 64m v1.28.5 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-mddf5 20m system:node:master-01.example.com Approved,Issued csr-z5rln 16m system:node:worker-21.example.com Approved,Issued If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. 
If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.28.5 master-1 Ready master 73m v1.28.5 master-2 Ready master 74m v1.28.5 worker-0 Ready worker 11m v1.28.5 worker-1 Ready worker 11m v1.28.5 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 2.17. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. 
Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m Configure the Operators that are not available. 2.17.1. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 2.17.1.1. Configuring registry storage for IBM Z As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have a cluster on IBM Z(R). You have provisioned persistent storage for your cluster, such as Red Hat OpenShift Data Foundation. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have 100Gi capacity. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. 
Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.15 True False False 6h50m Ensure that your registry is set to managed to enable building and pushing of images. Run: USD oc edit configs.imageregistry/cluster Then, change the line managementState: Removed to managementState: Managed 2.17.1.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 2.18. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration. Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m Alternatively, the following command notifies you when the cluster is available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in.
Example output INFO Waiting up to 30m0s for the cluster to initialize... The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from the Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods. To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. For an installation with Fibre Channel Protocol (FCP), additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Postinstallation machine configuration tasks documentation for more information. 2.19. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.15, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service How to generate SOSREPORT within OpenShift4 nodes without SSH . 2.20. Next steps Enabling multipathing with kernel arguments on RHCOS . Customize your cluster . If necessary, you can opt out of remote health reporting .
[ "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF", "global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s", "dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1", "api.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>", "api-int.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>", "random.apps.ocp4.example.com. 
604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>", "console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>", "bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96", "dig +noall +answer @<nameserver_ip> -x 192.168.1.5", "5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2", "dig +noall +answer @<nameserver_ip> -x 192.168.1.96", "96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "compute: - name: worker platform: {} replicas: 0", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full", "kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s", "./openshift-install create manifests --dir <installation_directory> 1", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "variant: openshift version: 4.15.0 metadata: name: master-storage labels: machineconfiguration.openshift.io/role: master storage: luks: - clevis: tang: - thumbprint: QcPr_NHFJammnRCA3fFMVdNBwjs url: http://clevis.example.com:7500 options: 1 - --cipher - aes-cbc-essiv:sha256 device: /dev/disk/by-partlabel/root 2 label: luks-root name: root wipe_volume: true filesystems: - device: /dev/mapper/root format: xfs label: root wipe_filesystem: true openshift: fips: true 3", "coreos-installer pxe customize /root/rhcos-bootfiles/rhcos-<release>-live-initramfs.s390x.img --dest-device /dev/disk/by-id/scsi-<serial_number> --dest-karg-append ip=<ip_address>::<gateway_ip>:<subnet_mask>::<network_device>:none --dest-karg-append nameserver=<nameserver_ip> --dest-karg-append rd.neednet=1 -o /root/rhcos-bootfiles/<node_name>-initramfs.s390x.img", "rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/dasda \\ 1 ignition.firstboot ignition.platform.id=metal coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \\ 2 coreos.inst.ignition_url=http://<http_server>/master.ign \\ 3 ip=10.19.17.2::10.19.17.1:255.255.255.0::enbdd0:none nameserver=10.19.17.1 zfcp.allow_lun_scan=0 \\ 4 rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 rd.zfcp=0.0.5677,0x600606680g7f0056,0x034F000000000000 \\ 5", "rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/dasda coreos.live.rootfs_url=http://cl1.provide.example.com:8080/assets/rhcos-live-rootfs.s390x.img coreos.inst.ignition_url=http://cl1.provide.example.com:8080/ignition/bootstrap.ign ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 zfcp.allow_lun_scan=0 rd.dasd=0.0.3490", "rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/disk/by-id/scsi-<serial_number> coreos.live.rootfs_url=http://cl1.provide.example.com:8080/assets/rhcos-live-rootfs.s390x.img coreos.inst.ignition_url=http://cl1.provide.example.com:8080/ignition/worker.ign ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 zfcp.allow_lun_scan=0 rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.1987,0x50050763071bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763071bc5e3,0x4008400B00000000", "ipl c", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=::10.10.10.254::::", "rd.route=20.20.20.0/24:20.20.20.254:enp2s0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none", "ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0", "ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0", "nameserver=1.1.1.1 nameserver=8.8.8.8", "bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp", "bond=bond0:em1,em2:mode=active-backup,fail_over_mac=1 ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none", "ip=bond0.100:dhcp bond=bond0:em1,em2:mode=active-backup 
vlan=bond0.100:bond0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0", "team=team0:em1,em2 ip=team0:dhcp", "./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2", "INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.28.5 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.28.5 master-1 Ready master 63m v1.28.5 master-2 Ready master 64m v1.28.5", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-mddf5 20m system:node:master-01.example.com Approved,Issued csr-z5rln 16m system:node:worker-21.example.com Approved,Issued", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.28.5 master-1 Ready master 73m v1.28.5 master-2 Ready master 74m v1.28.5 worker-0 Ready worker 11m v1.28.5 worker-1 Ready worker 11m v1.28.5", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resources found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim:", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.15 True 
False False 6h50m", "oc edit configs.imageregistry/cluster", "managementState: Removed", "managementState: Managed", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m", "oc logs <pod_name> -n <namespace> 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_on_ibm_z_and_ibm_linuxone/installing-ibm-z
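A minimal consolidated sketch of the certificate-approval step from the command list above, assuming the same <installation_directory> placeholder and an oc client pointed at the new cluster (both appear as separate entries in that list): approve any pending node CSRs in bulk, then confirm the nodes have joined.
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig
$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve
$ oc get nodes
New nodes typically request certificates more than once, so the approval command may need to be rerun until oc get nodes reports every machine as Ready.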
Appendix A. Changes to packages
Appendix A. Changes to packages The following chapters contain changes to packages between RHEL 8 and RHEL 9, as well as changes between minor releases of RHEL 9. A.1. New packages The following packages were added in RHEL 9: Package Repository New in 389-ds-base-devel rhel9-CRB RHEL 9.3 389-ds-base-snmp rhel9-AppStream RHEL 9.5 a52dec rhel9-AppStream RHEL 9.0 adobe-source-code-pro-fonts rhel9-AppStream RHEL 9.1 afterburn rhel9-AppStream RHEL 9.0 afterburn-dracut rhel9-AppStream RHEL 9.0 alsa-plugins-pulseaudio rhel9-AppStream RHEL 9.1 alternatives rhel9-BaseOS RHEL 9.0 anaconda-install-img-deps rhel9-AppStream RHEL 9.0 anaconda-widgets-devel rhel9-CRB RHEL 9.1 ansible-freeipa-collection rhel9-AppStream RHEL 9.5 ant-junit5 rhel9-AppStream RHEL 9.0 anthy-unicode rhel9-AppStream RHEL 9.0 anthy-unicode-devel rhel9-CRB RHEL 9.1 appstream rhel9-AppStream RHEL 9.0 appstream-compose rhel9-CRB RHEL 9.0 appstream-compose-devel rhel9-CRB RHEL 9.0 appstream-devel rhel9-CRB RHEL 9.0 appstream-qt rhel9-CRB RHEL 9.0 appstream-qt-devel rhel9-CRB RHEL 9.0 aspnetcore-runtime-7.0 rhel9-AppStream RHEL 9.1 aspnetcore-runtime-8.0 rhel9-AppStream RHEL 9.4 aspnetcore-runtime-9.0 rhel9-AppStream RHEL 9.5 aspnetcore-runtime-dbg-8.0 rhel9-AppStream RHEL 9.5 aspnetcore-runtime-dbg-9.0 rhel9-AppStream RHEL 9.5 aspnetcore-targeting-pack-7.0 rhel9-AppStream RHEL 9.1 aspnetcore-targeting-pack-8.0 rhel9-AppStream RHEL 9.4 aspnetcore-targeting-pack-9.0 rhel9-AppStream RHEL 9.5 autoconf-latest rhel9-AppStream RHEL 9.4 autoconf271 rhel9-AppStream RHEL 9.4 autocorr-dsb rhel9-AppStream RHEL 9.0 autocorr-el rhel9-AppStream RHEL 9.0 autocorr-hsb rhel9-AppStream RHEL 9.0 autocorr-vro rhel9-AppStream RHEL 9.0 avahi-autoipd rhel9-AppStream RHEL 9.5 avahi-glib-devel rhel9-CRB RHEL 9.3 avahi-gobject rhel9-CRB RHEL 9.5 avahi-gobject-devel rhel9-CRB RHEL 9.5 avahi-tools rhel9-AppStream RHEL 9.3 awscli2 rhel9-AppStream RHEL 9.5 babel-doc rhel9-CRB RHEL 9.0 bind9.18 rhel9-AppStream RHEL 9.5 bind9.18-chroot rhel9-AppStream RHEL 9.5 bind9.18-devel rhel9-CRB RHEL 9.5 bind9.18-dnssec-utils rhel9-AppStream RHEL 9.5 bind9.18-doc rhel9-CRB RHEL 9.5 bind9.18-libs rhel9-AppStream RHEL 9.5 bind9.18-utils rhel9-AppStream RHEL 9.5 bind-dnssec-doc rhel9-AppStream RHEL 9.0 bind-dnssec-utils rhel9-AppStream RHEL 9.0 bind-doc rhel9-CRB RHEL 9.1 binutils-gold rhel9-BaseOS RHEL 9.0 blas64 rhel9-CRB RHEL 9.3 blas64_ rhel9-CRB RHEL 9.0 bmc-snmp-proxy rhel9-AppStream RHEL 9.0 boost-b2 rhel9-CRB RHEL 9.0 boost-contract rhel9-AppStream RHEL 9.0 boost-doctools rhel9-CRB RHEL 9.0 boost-json rhel9-AppStream RHEL 9.0 boost-nowide rhel9-AppStream RHEL 9.0 bootc rhel9-AppStream RHEL 9.4 bootupd rhel9-AppStream RHEL 9.0 Box2D rhel9-AppStream RHEL 9.0 butane rhel9-AppStream RHEL 9.0 byte-buddy rhel9-AppStream RHEL 9.0 byte-buddy-agent rhel9-CRB RHEL 9.0 byteman-bmunit rhel9-AppStream RHEL 9.0 catatonit rhel9-CRB RHEL 9.1 capstone rhel9-AppStream RHEL 9.2 capstone-devel rhel9-CRB RHEL 9.2 capstone-java rhel9-CRB RHEL 9.2 cdrskin rhel9-AppStream RHEL 9.0 cepces rhel9-AppStream RHEL 9.4 cepces-certmonger rhel9-AppStream RHEL 9.4 cepces-selinux rhel9-AppStream RHEL 9.4 cifs-utils-devel rhel9-CRB RHEL 9.2 cjose-devel rhel9-CRB RHEL 9.5 cldr-emoji-annotation-dtd rhel9-AppStream RHEL 9.0 clevis-pin-tpm2 rhel9-AppStream RHEL 9.0 cockpit-files rhel9-AppStream RHEL 9.5 cockpit-ostree rhel9-AppStream RHEL 9.3 compat-hesiod rhel9-AppStream RHEL 9.0 compat-openssl11 rhel9-AppStream RHEL 9.0 compat-paratype-pt-sans-fonts-f33-f34 rhel9-AppStream RHEL 9.0 compat-sap-c++-12 rhel9-SAP 
RHEL 9.2 compat-sap-c++-13 rhel9-SAP RHEL 9.5 composefs rhel9-AppStream RHEL 9.4 composefs-libs rhel9-AppStream RHEL 9.4 console-login-helper-messages rhel9-AppStream RHEL 9.0 console-login-helper-messages-issuegen rhel9-AppStream RHEL 9.0 console-login-helper-messages-motdgen rhel9-AppStream RHEL 9.0 console-login-helper-messages-profile rhel9-AppStream RHEL 9.0 console-setup rhel9-AppStream RHEL 9.0 container-tools rhel9-AppStream RHEL 9.0 cups-printerapp rhel9-AppStream RHEL 9.0 curl-minimal rhel9-BaseOS RHEL 9.0 cxl-cli rhel9-AppStream RHEL 9.2 cxl-devel rhel9-CRB RHEL 9.2 cxl-libs rhel9-AppStream RHEL 9.2 cyrus-imapd-libs rhel9-AppStream RHEL 9.0 dbus-broker rhel9-BaseOS RHEL 9.0 dbus-python-devel rhel9-CRB RHEL 9.4 ddiskit rhel9-AppStream RHEL 9.0 debugedit rhel9-AppStream RHEL 9.0 dejavu-lgc-sans-mono-fonts rhel9-AppStream RHEL 9.0 dejavu-lgc-serif-fonts rhel9-AppStream RHEL 9.0 docbook5-style-xsl rhel9-AppStream RHEL 9.0 docbook5-style-xsl-extensions rhel9-AppStream RHEL 9.0 dotnet-apphost-pack-7.0 rhel9-AppStream RHEL 9.1 dotnet-apphost-pack-8.0 rhel9-AppStream RHEL 9.4 dotnet-apphost-pack-9.0 rhel9-AppStream RHEL 9.5 dotnet-hostfxr-7.0 rhel9-AppStream RHEL 9.1 dotnet-hostfxr-8.0 rhel9-AppStream RHEL 9.4 dotnet-hostfxr-9.0 rhel9-AppStream RHEL 9.5 dotnet-runtime-7.0 rhel9-AppStream RHEL 9.1 dotnet-runtime-8.0 rhel9-AppStream RHEL 9.4 dotnet-runtime-9.0 rhel9-AppStream RHEL 9.5 dotnet-runtime-dbg-8.0 rhel9-AppStream RHEL 9.5 dotnet-runtime-dbg-9.0 rhel9-AppStream RHEL 9.5 dotnet-sdk-7.0 rhel9-AppStream RHEL 9.1 dotnet-sdk-7.0-source-built-artifacts rhel9-CRB RHEL 9.1 dotnet-sdk-8.0 rhel9-AppStream RHEL 9.4 dotnet-sdk-8.0-source-built-artifacts rhel9-CRB RHEL 9.4 dotnet-sdk-9.0 rhel9-AppStream RHEL 9.5 dotnet-sdk-9.0-source-built-artifacts rhel9-CRB RHEL 9.5 dotnet-sdk-aot-9.0 rhel9-AppStream RHEL 9.5 dotnet-sdk-dbg-8.0 rhel9-AppStream RHEL 9.5 dotnet-sdk-dbg-9.0 rhel9-AppStream RHEL 9.5 dotnet-targeting-pack-7.0 rhel9-AppStream RHEL 9.1 dotnet-targeting-pack-8.0 rhel9-AppStream RHEL 9.4 dotnet-targeting-pack-9.0 rhel9-AppStream RHEL 9.5 dotnet-templates-7.0 rhel9-AppStream RHEL 9.1 dotnet-templates-8.0 rhel9-AppStream RHEL 9.4 dotnet-templates-9.0 rhel9-AppStream RHEL 9.5 double-conversion rhel9-AppStream RHEL 9.0 double-conversion-devel rhel9-CRB RHEL 9.1 drgn rhel9-AppStream RHEL 9.4 ecj rhel9-AppStream RHEL 9.2 edk2-tools rhel9-CRB RHEL 9.2 edk2-tools-doc rhel9-CRB RHEL 9.2 efs-utils rhel9-AppStream RHEL 9.4 efs-utils-selinux rhel9-AppStream RHEL 9.4 egl-utils rhel9-AppStream RHEL 9.1 egl-wayland-devel rhel9-CRB RHEL 9.5 emacs-auctex rhel9-AppStream RHEL 9.0 emacs-cython-mode rhel9-CRB RHEL 9.0 espeak-ng-devel rhel9-CRB RHEL 9.3 evince-previewer rhel9-AppStream RHEL 9.0 evince-thumbnailer rhel9-AppStream RHEL 9.0 evolution-data-server-ui rhel9-AppStream RHEL 9.4 evolution-data-server-ui-devel rhel9-AppStream RHEL 9.4 exfatprogs rhel9-BaseOS RHEL 9.0 expect-devel rhel9-CRB RHEL 9.4 fapolicyd-dnf-plugin rhel9-AppStream RHEL 9.0 fdk-aac-free rhel9-AppStream RHEL 9.0 fdk-aac-free-devel rhel9-CRB RHEL 9.1 fence-agents-openstack rhel9-HighAvailability RHEL 9.0 festival rhel9-AppStream RHEL 9.0 festival-data rhel9-AppStream RHEL 9.0 festvox-slt-arctic-hts rhel9-AppStream RHEL 9.0 fido2-tools rhel9-AppStream RHEL 9.4 fio-engine-dev-dax rhel9-AppStream RHEL 9.0 fio-engine-http rhel9-AppStream RHEL 9.0 fio-engine-libaio rhel9-AppStream RHEL 9.0 fio-engine-libpmem rhel9-AppStream RHEL 9.0 fio-engine-nbd rhel9-AppStream RHEL 9.0 fio-engine-pmemblk rhel9-AppStream RHEL 9.0 fio-engine-rados 
rhel9-AppStream RHEL 9.0 fio-engine-rbd rhel9-AppStream RHEL 9.0 fio-engine-rdma rhel9-AppStream RHEL 9.0 firefox-x11 rhel9-AppStream RHEL 9.2 flashrom rhel9-AppStream RHEL 9.0 flexiblas rhel9-AppStream RHEL 9.0 flexiblas-devel rhel9-CRB RHEL 9.0 flexiblas-netlib rhel9-AppStream RHEL 9.0 flexiblas-netlib64 rhel9-CRB RHEL 9.0 flexiblas-openblas-openmp rhel9-AppStream RHEL 9.0 flexiblas-openblas-openmp64 rhel9-CRB RHEL 9.0 flexiblas-openblas-serial rhel9-AppStream RHEL 9.5 fonts-filesystem rhel9-BaseOS RHEL 9.0 fonts-rpm-macros rhel9-CRB RHEL 9.0 fonts-srpm-macros rhel9-AppStream RHEL 9.0 freeglut-devel rhel9-AppStream RHEL 9.1 freeradius-mysql rhel9-CRB RHEL 9.2 freeradius-perl rhel9-CRB RHEL 9.2 freeradius-postgresql rhel9-CRB RHEL 9.2 freeradius-rest rhel9-CRB RHEL 9.2 freeradius-sqlite rhel9-CRB RHEL 9.2 freeradius-unixODBC rhel9-CRB RHEL 9.2 frr-selinux rhel9-AppStream RHEL 9.2 fstrm-utils rhel9-CRB RHEL 9.0 fwupd-plugin-flashrom rhel9-AppStream RHEL 9.0 gawk-all-langpacks rhel9-AppStream RHEL 9.0 gcc-plugin-annobin rhel9-AppStream RHEL 9.0 gcc-toolset-12 rhel9-AppStream RHEL 9.1 gcc-toolset-12-annobin-annocheck rhel9-AppStream RHEL 9.1 gcc-toolset-12-annobin-docs rhel9-AppStream RHEL 9.1 gcc-toolset-12-annobin-plugin-gcc rhel9-AppStream RHEL 9.1 gcc-toolset-12-binutils rhel9-AppStream RHEL 9.1 gcc-toolset-12-binutils-devel rhel9-AppStream RHEL 9.1 gcc-toolset-12-binutils-gold rhel9-AppStream RHEL 9.1 gcc-toolset-12-build rhel9-AppStream RHEL 9.1 gcc-toolset-12-dwz rhel9-AppStream RHEL 9.1 gcc-toolset-12-gcc rhel9-AppStream RHEL 9.1 gcc-toolset-12-gcc-c++ rhel9-AppStream RHEL 9.1 gcc-toolset-12-gcc-gfortran rhel9-AppStream RHEL 9.1 gcc-toolset-12-gcc-plugin-annobin rhel9-AppStream RHEL 9.2 gcc-toolset-12-gcc-plugin-devel rhel9-AppStream RHEL 9.1 gcc-toolset-12-gdb rhel9-AppStream RHEL 9.1 gcc-toolset-12-libasan-devel rhel9-AppStream RHEL 9.1 gcc-toolset-12-libatomic-devel rhel9-AppStream RHEL 9.1 gcc-toolset-12-libgccjit rhel9-AppStream RHEL 9.1 gcc-toolset-12-libgccjit-devel rhel9-AppStream RHEL 9.1 gcc-toolset-12-libgccjit-docs rhel9-AppStream RHEL 9.1 gcc-toolset-12-libitm-devel rhel9-AppStream RHEL 9.1 gcc-toolset-12-liblsan-devel rhel9-AppStream RHEL 9.1 gcc-toolset-12-libquadmath-devel rhel9-AppStream RHEL 9.1 gcc-toolset-12-libstdc++-devel rhel9-AppStream RHEL 9.1 gcc-toolset-12-libstdc++-docs rhel9-AppStream RHEL 9.1 gcc-toolset-12-libtsan-devel rhel9-AppStream RHEL 9.1 gcc-toolset-12-libubsan-devel rhel9-AppStream RHEL 9.1 gcc-toolset-12-offload-nvptx rhel9-AppStream RHEL 9.1 gcc-toolset-12-runtime rhel9-AppStream RHEL 9.1 gcc-toolset-13 rhel9-AppStream RHEL 9.3 gcc-toolset-13-annobin-annocheck rhel9-AppStream RHEL 9.3 gcc-toolset-13-annobin-docs rhel9-AppStream RHEL 9.3 gcc-toolset-13-annobin-plugin-gcc rhel9-AppStream RHEL 9.3 gcc-toolset-13-binutils rhel9-AppStream RHEL 9.3 gcc-toolset-13-binutils-devel rhel9-AppStream RHEL 9.3 gcc-toolset-13-binutils-gold rhel9-AppStream RHEL 9.3 gcc-toolset-13-dwz rhel9-AppStream RHEL 9.3 gcc-toolset-13-gcc rhel9-AppStream RHEL 9.3 gcc-toolset-13-gcc-c++ rhel9-AppStream RHEL 9.3 gcc-toolset-13-gcc-gfortran rhel9-AppStream RHEL 9.3 gcc-toolset-13-gcc-plugin-annobin rhel9-AppStream RHEL 9.3 gcc-toolset-13-gcc-plugin-devel rhel9-AppStream RHEL 9.3 gcc-toolset-13-gdb rhel9-AppStream RHEL 9.3 gcc-toolset-13-libasan-devel rhel9-AppStream RHEL 9.3 gcc-toolset-13-libatomic-devel rhel9-AppStream RHEL 9.3 gcc-toolset-13-libgccjit rhel9-AppStream RHEL 9.3 gcc-toolset-13-libgccjit-devel rhel9-AppStream RHEL 9.3 gcc-toolset-13-libitm-devel 
rhel9-AppStream RHEL 9.3 gcc-toolset-13-liblsan-devel rhel9-AppStream RHEL 9.3 gcc-toolset-13-libquadmath-devel rhel9-AppStream RHEL 9.3 gcc-toolset-13-libstdc++-devel rhel9-AppStream RHEL 9.3 gcc-toolset-13-libstdc++-docs rhel9-AppStream RHEL 9.3 gcc-toolset-13-libtsan-devel rhel9-AppStream RHEL 9.3 gcc-toolset-13-libubsan-devel rhel9-AppStream RHEL 9.3 gcc-toolset-13-offload-nvptx rhel9-AppStream RHEL 9.3 gcc-toolset-13-runtime rhel9-AppStream RHEL 9.3 gcc-toolset-14 rhel9-AppStream RHEL 9.5 gcc-toolset-14-annobin-annocheck rhel9-AppStream RHEL 9.5 gcc-toolset-14-annobin-docs rhel9-AppStream RHEL 9.5 gcc-toolset-14-annobin-plugin-gcc rhel9-AppStream RHEL 9.5 gcc-toolset-14-binutils rhel9-AppStream RHEL 9.5 gcc-toolset-14-binutils-devel rhel9-AppStream RHEL 9.5 gcc-toolset-14-binutils-gold rhel9-AppStream RHEL 9.5 gcc-toolset-14-binutils-gprofng rhel9-AppStream RHEL 9.5 gcc-toolset-14-dwz rhel9-AppStream RHEL 9.5 gcc-toolset-14-gcc rhel9-AppStream RHEL 9.5 gcc-toolset-14-gcc-c++ rhel9-AppStream RHEL 9.5 gcc-toolset-14-gcc-gfortran rhel9-AppStream RHEL 9.5 gcc-toolset-14-gcc-plugin-annobin rhel9-AppStream RHEL 9.5 gcc-toolset-14-gcc-plugin-devel rhel9-AppStream RHEL 9.5 gcc-toolset-14-libasan-devel rhel9-AppStream RHEL 9.5 gcc-toolset-14-libatomic-devel rhel9-AppStream RHEL 9.5 gcc-toolset-14-libgccjit rhel9-AppStream RHEL 9.5 gcc-toolset-14-libgccjit-devel rhel9-AppStream RHEL 9.5 gcc-toolset-14-libitm-devel rhel9-AppStream RHEL 9.5 gcc-toolset-14-liblsan-devel rhel9-AppStream RHEL 9.5 gcc-toolset-14-libquadmath-devel rhel9-AppStream RHEL 9.5 gcc-toolset-14-libstdc++-devel rhel9-AppStream RHEL 9.5 gcc-toolset-14-libstdc++-docs rhel9-AppStream RHEL 9.5 gcc-toolset-14-libtsan-devel rhel9-AppStream RHEL 9.5 gcc-toolset-14-libubsan-devel rhel9-AppStream RHEL 9.5 gcc-toolset-14-offload-nvptx rhel9-AppStream RHEL 9.5 gcc-toolset-14-runtime rhel9-AppStream RHEL 9.5 gcr-base rhel9-AppStream RHEL 9.0 gdb-minimal rhel9-AppStream RHEL 9.0 gedit-plugin-sessionsaver rhel9-AppStream RHEL 9.0 gedit-plugin-synctex rhel9-AppStream RHEL 9.0 gegl04-devel-docs rhel9-AppStream RHEL 9.0 gegl04-tools rhel9-AppStream RHEL 9.0 glade rhel9-AppStream RHEL 9.0 glibc-doc rhel9-AppStream RHEL 9.0 glibc-langpack-ckb rhel9-BaseOS RHEL 9.0 glibc-langpack-mnw rhel9-BaseOS RHEL 9.0 glslang rhel9-AppStream RHEL 9.0 glslang-devel rhel9-CRB RHEL 9.1 glslc rhel9-AppStream RHEL 9.0 glusterfs-cloudsync-plugins rhel9-AppStream RHEL 9.0 gnome-connections rhel9-AppStream RHEL 9.0 gnome-devel-docs rhel9-AppStream RHEL 9.0 gnome-extensions-app rhel9-AppStream RHEL 9.0 gnome-kiosk rhel9-AppStream RHEL 9.0 gnome-kiosk-script-session rhel9-AppStream RHEL 9.1 gnome-kiosk-search-appliance rhel9-AppStream RHEL 9.1 gnome-shell-extension-background-logo rhel9-AppStream RHEL 9.0 gnome-shell-extension-custom-menu rhel9-AppStream RHEL 9.3 gnome-shell-extension-dash-to-panel rhel9-AppStream RHEL 9.4 gnome-software-devel rhel9-CRB RHEL 9.3 gnome-themes-extra rhel9-AppStream RHEL 9.0 gnome-tour rhel9-AppStream RHEL 9.0 gnu-efi-compat rhel9-CRB RHEL 9.0 go-filesystem rhel9-AppStream RHEL 9.0 go-rpm-macros rhel9-AppStream RHEL 9.0 go-rpm-templates rhel9-AppStream RHEL 9.0 golang-github-cpuguy83-md2man rhel9-CRB RHEL 9.2 golang-race rhel9-AppStream RHEL 9.5 google-carlito-fonts rhel9-AppStream RHEL 9.0 google-crosextra-caladea-fonts rhel9-AppStream RHEL 9.3 google-noto-kufi-arabic-fonts rhel9-CRB RHEL 9.5 google-noto-kufi-arabic-vf-fonts rhel9-CRB RHEL 9.5 google-noto-music-fonts rhel9-CRB RHEL 9.5 google-noto-naskh-arabic-fonts rhel9-CRB RHEL 9.5 
google-noto-naskh-arabic-ui-fonts rhel9-CRB RHEL 9.5 google-noto-naskh-arabic-ui-vf-fonts rhel9-CRB RHEL 9.5 google-noto-naskh-arabic-vf-fonts rhel9-CRB RHEL 9.5 google-noto-nastaliq-urdu-fonts rhel9-CRB RHEL 9.5 google-noto-rashi-hebrew-fonts rhel9-CRB RHEL 9.5 google-noto-rashi-hebrew-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-adlam-fonts rhel9-CRB RHEL 9.5 google-noto-sans-adlam-unjoined-fonts rhel9-CRB RHEL 9.5 google-noto-sans-adlam-unjoined-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-adlam-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-anatolian-hieroglyphs-fonts rhel9-CRB RHEL 9.5 google-noto-sans-anatolian-hieroglyphs-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-arabic-fonts rhel9-CRB RHEL 9.5 google-noto-sans-arabic-ui-fonts rhel9-CRB RHEL 9.5 google-noto-sans-arabic-ui-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-arabic-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-armenian-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-avestan-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-balinese-fonts rhel9-CRB RHEL 9.5 google-noto-sans-balinese-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-bamum-fonts rhel9-CRB RHEL 9.5 google-noto-sans-bamum-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-bassa-vah-fonts rhel9-CRB RHEL 9.5 google-noto-sans-bassa-vah-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-batak-fonts rhel9-CRB RHEL 9.5 google-noto-sans-bengali-ui-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-bengali-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-bhaiksuki-fonts rhel9-CRB RHEL 9.5 google-noto-sans-buginese-fonts rhel9-CRB RHEL 9.5 google-noto-sans-buginese-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-buhid-fonts rhel9-CRB RHEL 9.5 google-noto-sans-buhid-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-canadian-aboriginal-fonts rhel9-CRB RHEL 9.5 google-noto-sans-canadian-aboriginal-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-carian-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-caucasian-albanian-fonts rhel9-CRB RHEL 9.5 google-noto-sans-chakma-fonts rhel9-CRB RHEL 9.5 google-noto-sans-cham-fonts rhel9-CRB RHEL 9.5 google-noto-sans-cham-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-cherokee-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-cuneiform-fonts rhel9-CRB RHEL 9.5 google-noto-sans-cuneiform-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-cypriot-fonts rhel9-CRB RHEL 9.5 google-noto-sans-cypriot-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-deseret-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-devanagari-ui-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-devanagari-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-display-fonts rhel9-CRB RHEL 9.5 google-noto-sans-display-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-duployan-fonts rhel9-CRB RHEL 9.5 google-noto-sans-egyptian-hieroglyphs-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-elbasan-fonts rhel9-CRB RHEL 9.5 google-noto-sans-elymaic-fonts rhel9-CRB RHEL 9.5 google-noto-sans-elymaic-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-ethiopic-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-georgian-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-gothic-fonts rhel9-CRB RHEL 9.5 google-noto-sans-gothic-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-grantha-fonts rhel9-CRB RHEL 9.5 google-noto-sans-gunjala-gondi-fonts rhel9-CRB RHEL 9.5 google-noto-sans-gurmukhi-ui-fonts rhel9-CRB RHEL 9.5 google-noto-sans-gurmukhi-ui-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-gurmukhi-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-hanifi-rohingya-fonts rhel9-CRB RHEL 9.5 google-noto-sans-hanifi-rohingya-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-hanunoo-fonts rhel9-CRB RHEL 9.5 
google-noto-sans-hatran-fonts rhel9-CRB RHEL 9.5 google-noto-sans-hatran-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-hebrew-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-imperial-aramaic-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-indic-siyaq-numbers-fonts rhel9-CRB RHEL 9.5 google-noto-sans-inscriptional-pahlavi-fonts rhel9-CRB RHEL 9.5 google-noto-sans-inscriptional-parthian-fonts rhel9-CRB RHEL 9.5 google-noto-sans-javanese-fonts rhel9-CRB RHEL 9.5 google-noto-sans-kannada-ui-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-kannada-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-kayah-li-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-khmer-ui-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-khmer-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-khojki-fonts rhel9-CRB RHEL 9.5 google-noto-sans-khudawadi-fonts rhel9-CRB RHEL 9.5 google-noto-sans-lao-looped-fonts rhel9-CRB RHEL 9.5 google-noto-sans-lao-looped-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-lao-ui-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-lao-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-lepcha-fonts rhel9-CRB RHEL 9.5 google-noto-sans-limbu-fonts rhel9-CRB RHEL 9.5 google-noto-sans-linear-a-fonts rhel9-CRB RHEL 9.5 google-noto-sans-linear-a-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-linear-b-fonts rhel9-CRB RHEL 9.5 google-noto-sans-linear-b-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-lisu-fonts rhel9-CRB RHEL 9.5 google-noto-sans-lisu-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-lycian-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-lydian-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-mahajani-fonts rhel9-CRB RHEL 9.5 google-noto-sans-malayalam-ui-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-malayalam-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-mandaic-fonts rhel9-CRB RHEL 9.5 google-noto-sans-mandaic-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-manichaean-fonts rhel9-CRB RHEL 9.5 google-noto-sans-marchen-fonts rhel9-CRB RHEL 9.5 google-noto-sans-marchen-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-masaram-gondi-fonts rhel9-CRB RHEL 9.5 google-noto-sans-math-fonts rhel9-CRB RHEL 9.5 google-noto-sans-math-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-mayan-numerals-fonts rhel9-CRB RHEL 9.5 google-noto-sans-mayan-numerals-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-medefaidrin-fonts rhel9-CRB RHEL 9.5 google-noto-sans-medefaidrin-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-meetei-mayek-fonts rhel9-CRB RHEL 9.5 google-noto-sans-meeteimayek-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-mende-kikakui-fonts rhel9-CRB RHEL 9.5 google-noto-sans-meroitic-fonts rhel9-CRB RHEL 9.5 google-noto-sans-miao-fonts rhel9-CRB RHEL 9.5 google-noto-sans-modi-fonts rhel9-CRB RHEL 9.5 google-noto-sans-mongolian-fonts rhel9-CRB RHEL 9.5 google-noto-sans-mono-fonts rhel9-AppStream RHEL 9.0 google-noto-sans-mono-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-mro-fonts rhel9-CRB RHEL 9.5 google-noto-sans-mro-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-multani-fonts rhel9-CRB RHEL 9.5 google-noto-sans-multani-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-myanmar-fonts rhel9-CRB RHEL 9.5 google-noto-sans-myanmar-ui-fonts rhel9-CRB RHEL 9.5 google-noto-sans-myanmar-ui-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-myanmar-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-nabataean-fonts rhel9-CRB RHEL 9.5 google-noto-sans-nabataean-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-new-tai-lue-fonts rhel9-CRB RHEL 9.5 google-noto-sans-new-tai-lue-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-newa-fonts rhel9-CRB RHEL 9.5 google-noto-sans-nushu-fonts rhel9-CRB RHEL 9.5 
google-noto-sans-ogham-fonts rhel9-CRB RHEL 9.5 google-noto-sans-ogham-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-ol-chiki-fonts rhel9-CRB RHEL 9.5 google-noto-sans-ol-chiki-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-old-hungarian-fonts rhel9-CRB RHEL 9.5 google-noto-sans-old-italic-fonts rhel9-CRB RHEL 9.5 google-noto-sans-old-north-arabian-fonts rhel9-CRB RHEL 9.5 google-noto-sans-old-permic-fonts rhel9-CRB RHEL 9.5 google-noto-sans-old-persian-fonts rhel9-CRB RHEL 9.5 google-noto-sans-old-sogdian-fonts rhel9-CRB RHEL 9.5 google-noto-sans-oriya-fonts rhel9-CRB RHEL 9.5 google-noto-sans-oriya-ui-fonts rhel9-CRB RHEL 9.5 google-noto-sans-osage-fonts rhel9-CRB RHEL 9.5 google-noto-sans-osmanya-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-pahawh-hmong-fonts rhel9-CRB RHEL 9.5 google-noto-sans-palmyrene-fonts rhel9-CRB RHEL 9.5 google-noto-sans-pau-cin-hau-fonts rhel9-CRB RHEL 9.5 google-noto-sans-phags-pa-fonts rhel9-CRB RHEL 9.5 google-noto-sans-phoenician-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-psalter-pahlavi-fonts rhel9-CRB RHEL 9.5 google-noto-sans-rejang-fonts rhel9-CRB RHEL 9.5 google-noto-sans-runic-fonts rhel9-CRB RHEL 9.5 google-noto-sans-runic-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-samaritan-fonts rhel9-CRB RHEL 9.5 google-noto-sans-saurashtra-fonts rhel9-CRB RHEL 9.5 google-noto-sans-sharada-fonts rhel9-CRB RHEL 9.5 google-noto-sans-shavian-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-siddham-fonts rhel9-CRB RHEL 9.5 google-noto-sans-signwriting-fonts rhel9-CRB RHEL 9.5 google-noto-sans-sinhala-ui-fonts rhel9-CRB RHEL 9.5 google-noto-sans-sinhala-ui-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-sinhala-vf-fonts rhel9-AppStream RHEL 9.0 google-noto-sans-sogdian-fonts rhel9-CRB RHEL 9.5 google-noto-sans-sora-sompeng-fonts rhel9-CRB RHEL 9.5 google-noto-sans-sora-sompeng-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-soyombo-fonts rhel9-CRB RHEL 9.5 google-noto-sans-soyombo-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-sundanese-fonts rhel9-CRB RHEL 9.5 google-noto-sans-sundanese-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-syloti-nagri-fonts rhel9-CRB RHEL 9.5 google-noto-sans-symbols-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-symbols2-fonts rhel9-CRB RHEL 9.1 google-noto-sans-syriac-fonts rhel9-CRB RHEL 9.5 google-noto-sans-tagalog-fonts rhel9-CRB RHEL 9.5 google-noto-sans-tagbanwa-fonts rhel9-CRB RHEL 9.5 google-noto-sans-tagbanwa-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-tai-le-fonts rhel9-CRB RHEL 9.5 google-noto-sans-tai-tham-fonts rhel9-CRB RHEL 9.5 google-noto-sans-tai-tham-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-tai-viet-fonts rhel9-CRB RHEL 9.5 google-noto-sans-takri-fonts rhel9-CRB RHEL 9.5 google-noto-sans-takri-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-tamil-supplement-fonts rhel9-CRB RHEL 9.5 google-noto-sans-tamil-supplement-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-tamil-ui-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-tamil-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-telugu-ui-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-telugu-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-thaana-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-thai-looped-fonts rhel9-CRB RHEL 9.5 google-noto-sans-thai-ui-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-thai-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-tifinagh-adrar-fonts rhel9-CRB RHEL 9.5 google-noto-sans-tifinagh-agraw-imazighen-fonts rhel9-CRB RHEL 9.5 google-noto-sans-tifinagh-ahaggar-fonts rhel9-CRB RHEL 9.5 google-noto-sans-tifinagh-air-fonts rhel9-CRB RHEL 9.5 google-noto-sans-tifinagh-apt-fonts rhel9-CRB 
RHEL 9.5 google-noto-sans-tifinagh-azawagh-fonts rhel9-CRB RHEL 9.5 google-noto-sans-tifinagh-fonts rhel9-CRB RHEL 9.5 google-noto-sans-tifinagh-ghat-fonts rhel9-CRB RHEL 9.5 google-noto-sans-tifinagh-hawad-fonts rhel9-CRB RHEL 9.5 google-noto-sans-tifinagh-rhissa-ixa-fonts rhel9-CRB RHEL 9.5 google-noto-sans-tifinagh-sil-fonts rhel9-CRB RHEL 9.5 google-noto-sans-tifinagh-tawellemmet-fonts rhel9-CRB RHEL 9.5 google-noto-sans-tirhuta-fonts rhel9-CRB RHEL 9.5 google-noto-sans-ugaritic-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-vai-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-wancho-fonts rhel9-CRB RHEL 9.5 google-noto-sans-wancho-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-warang-citi-fonts rhel9-CRB RHEL 9.5 google-noto-sans-warang-citi-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-yi-fonts rhel9-CRB RHEL 9.5 google-noto-sans-yi-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sans-zanabazar-square-fonts rhel9-CRB RHEL 9.5 google-noto-sans-zanabazar-square-vf-fonts rhel9-CRB RHEL 9.5 google-noto-sansthai-looped-vf-fonts rhel9-CRB RHEL 9.5 google-noto-serif-ahom-fonts rhel9-CRB RHEL 9.5 google-noto-serif-armenian-vf-fonts rhel9-CRB RHEL 9.5 google-noto-serif-balinese-fonts rhel9-CRB RHEL 9.5 google-noto-serif-bengali-fonts rhel9-CRB RHEL 9.5 google-noto-serif-bengali-vf-fonts rhel9-CRB RHEL 9.5 google-noto-serif-devanagari-fonts rhel9-CRB RHEL 9.5 google-noto-serif-devanagari-vf-fonts rhel9-CRB RHEL 9.5 google-noto-serif-display-fonts rhel9-CRB RHEL 9.5 google-noto-serif-display-vf-fonts rhel9-CRB RHEL 9.5 google-noto-serif-dogra-fonts rhel9-CRB RHEL 9.5 google-noto-serif-ethiopic-fonts rhel9-CRB RHEL 9.5 google-noto-serif-ethiopic-vf-fonts rhel9-CRB RHEL 9.5 google-noto-serif-georgian-vf-fonts rhel9-CRB RHEL 9.5 google-noto-serif-grantha-fonts rhel9-CRB RHEL 9.5 google-noto-serif-gujarati-fonts rhel9-CRB RHEL 9.5 google-noto-serif-gujarati-vf-fonts rhel9-CRB RHEL 9.5 google-noto-serif-gurmukhi-fonts rhel9-CRB RHEL 9.5 google-noto-serif-gurmukhi-vf-fonts rhel9-AppStream RHEL 9.0 google-noto-serif-hebrew-fonts rhel9-CRB RHEL 9.5 google-noto-serif-hebrew-vf-fonts rhel9-CRB RHEL 9.5 google-noto-serif-kannada-fonts rhel9-CRB RHEL 9.5 google-noto-serif-kannada-vf-fonts rhel9-CRB RHEL 9.5 google-noto-serif-khmer-vf-fonts rhel9-CRB RHEL 9.5 google-noto-serif-khojki-fonts rhel9-CRB RHEL 9.5 google-noto-serif-khojki-vf-fonts rhel9-CRB RHEL 9.5 google-noto-serif-lao-vf-fonts rhel9-CRB RHEL 9.5 google-noto-serif-malayalam-fonts rhel9-CRB RHEL 9.5 google-noto-serif-malayalam-vf-fonts rhel9-CRB RHEL 9.5 google-noto-serif-myanmar-fonts rhel9-CRB RHEL 9.5 google-noto-serif-nyiakeng-puachue-hmong-fonts rhel9-CRB RHEL 9.5 google-noto-serif-nyiakeng-puachue-hmong-vf-fonts rhel9-CRB RHEL 9.5 google-noto-serif-sinhala-fonts rhel9-CRB RHEL 9.5 google-noto-serif-sinhala-vf-fonts rhel9-AppStream RHEL 9.0 google-noto-serif-tamil-fonts rhel9-CRB RHEL 9.5 google-noto-serif-tamil-slanted-fonts rhel9-CRB RHEL 9.5 google-noto-serif-tamil-slanted-vf-fonts rhel9-CRB RHEL 9.5 google-noto-serif-tamil-vf-fonts rhel9-CRB RHEL 9.5 google-noto-serif-tangut-fonts rhel9-CRB RHEL 9.5 google-noto-serif-tangut-vf-fonts rhel9-CRB RHEL 9.5 google-noto-serif-telugu-fonts rhel9-CRB RHEL 9.5 google-noto-serif-telugu-vf-fonts rhel9-CRB RHEL 9.5 google-noto-serif-thai-vf-fonts rhel9-CRB RHEL 9.5 google-noto-serif-tibetan-fonts rhel9-CRB RHEL 9.5 google-noto-serif-tibetan-vf-fonts rhel9-CRB RHEL 9.5 google-noto-serif-vf-fonts rhel9-CRB RHEL 9.5 google-noto-serif-yezidi-fonts rhel9-CRB RHEL 9.5 
google-noto-serif-yezidi-vf-fonts rhel9-CRB RHEL 9.5 google-noto-traditional-nushu-fonts rhel9-CRB RHEL 9.5 gpsd-minimal rhel9-AppStream RHEL 9.3 gpsd-minimal-clients rhel9-AppStream RHEL 9.3 grafana-selinux rhel9-AppStream RHEL 9.4 graphene rhel9-AppStream RHEL 9.0 graphene-devel rhel9-AppStream RHEL 9.0 graphviz-ruby rhel9-AppStream RHEL 9.4 gstreamer1-plugins-base-tools rhel9-AppStream RHEL 9.2 gstreamer1-rtsp-server rhel9-AppStream RHEL 9.3 gtk-vnc2-devel rhel9-CRB RHEL 9.4 gtk3-devel-docs rhel9-CRB RHEL 9.1 gtk4 rhel9-AppStream RHEL 9.0 gtk4-devel rhel9-AppStream RHEL 9.0 gtksourceview4 rhel9-AppStream RHEL 9.0 gtksourceview4-devel rhel9-CRB RHEL 9.1 guestfs-tools rhel9-AppStream RHEL 9.0 gvisor-tap-vsock rhel9-AppStream RHEL 9.4 gvnc-devel rhel9-CRB RHEL 9.4 ha-cloud-support rhel9-HighAvailability RHEL 9.0 ha-openstack-support rhel9-AppStream RHEL 9.0 highcontrast-icon-theme rhel9-AppStream RHEL 9.0 hivex-libs rhel9-AppStream RHEL 9.0 ht-caladea-fonts rhel9-AppStream RHEL 9.0 httpd-core rhel9-AppStream RHEL 9.1 hunspell-filesystem rhel9-AppStream RHEL 9.0 hwdata-devel rhel9-CRB RHEL 9.3 hyphen-eo rhel9-AppStream RHEL 9.0 ibus-anthy rhel9-AppStream RHEL 9.0 ibus-anthy-python rhel9-AppStream RHEL 9.0 ibus-gtk4 rhel9-CRB RHEL 9.5 idm-jss rhel9-AppStream RHEL 9.1 idm-jss-tomcat rhel9-AppStream RHEL 9.4 idm-ldapjdk rhel9-AppStream RHEL 9.1 idm-pki-acme rhel9-AppStream RHEL 9.1 idm-pki-base rhel9-AppStream RHEL 9.1 idm-pki-ca rhel9-AppStream RHEL 9.1 idm-pki-est rhel9-AppStream RHEL 9.2 idm-pki-java rhel9-AppStream RHEL 9.1 idm-pki-kra rhel9-AppStream RHEL 9.1 idm-pki-server rhel9-AppStream RHEL 9.1 idm-pki-tools rhel9-AppStream RHEL 9.1 idm-tomcatjss rhel9-AppStream RHEL 9.1 ignition rhel9-AppStream RHEL 9.0 ignition-edge rhel9-AppStream RHEL 9.2 ignition-validate rhel9-AppStream RHEL 9.2 imath rhel9-AppStream RHEL 9.0 imath-devel rhel9-CRB RHEL 9.0 inih rhel9-BaseOS RHEL 9.0 inih-devel rhel9-CRB RHEL 9.1 initscripts-rename-device rhel9-BaseOS RHEL 9.0 initscripts-service rhel9-BaseOS RHEL 9.0 insights-client-ros rhel9-AppStream RHEL 9.5 intel-lpmd rhel9-AppStream RHEL 9.5 ipa-selinux-luna rhel9-AppStream RHEL 9.5 ipa-selinux-nfast rhel9-AppStream RHEL 9.5 iptables-nft rhel9-BaseOS RHEL 9.0 iptables-nft-services rhel9-AppStream RHEL 9.0 jakarta-activation rhel9-AppStream RHEL 9.0 jakarta-activation2 rhel9-AppStream RHEL 9.2 jakarta-annotations rhel9-AppStream RHEL 9.0 jakarta-mail rhel9-AppStream RHEL 9.0 jakarta-servlet rhel9-CRB RHEL 9.0 jasper rhel9-AppStream RHEL 9.0 jasper-utils rhel9-AppStream RHEL 9.0 java-21-openjdk rhel9-AppStream RHEL 9.3 java-21-openjdk-demo rhel9-AppStream RHEL 9.3 java-21-openjdk-demo-fastdebug rhel9-CRB RHEL 9.3 java-21-openjdk-demo-slowdebug rhel9-CRB RHEL 9.3 java-21-openjdk-devel rhel9-AppStream RHEL 9.3 java-21-openjdk-devel-fastdebug rhel9-CRB RHEL 9.3 java-21-openjdk-devel-slowdebug rhel9-CRB RHEL 9.3 java-21-openjdk-fastdebug rhel9-CRB RHEL 9.3 java-21-openjdk-headless rhel9-AppStream RHEL 9.3 java-21-openjdk-headless-fastdebug rhel9-CRB RHEL 9.3 java-21-openjdk-headless-slowdebug rhel9-CRB RHEL 9.3 java-21-openjdk-javadoc rhel9-AppStream RHEL 9.3 java-21-openjdk-javadoc-zip rhel9-AppStream RHEL 9.3 java-21-openjdk-jmods rhel9-AppStream RHEL 9.3 java-21-openjdk-jmods-fastdebug rhel9-CRB RHEL 9.3 java-21-openjdk-jmods-slowdebug rhel9-CRB RHEL 9.3 java-21-openjdk-slowdebug rhel9-CRB RHEL 9.3 java-21-openjdk-src rhel9-AppStream RHEL 9.3 java-21-openjdk-src-fastdebug rhel9-CRB RHEL 9.3 java-21-openjdk-src-slowdebug rhel9-CRB RHEL 9.3 
java-21-openjdk-static-libs rhel9-AppStream RHEL 9.3 java-21-openjdk-static-libs-fastdebug rhel9-CRB RHEL 9.3 java-21-openjdk-static-libs-slowdebug rhel9-CRB RHEL 9.3 javapackages-generators rhel9-CRB RHEL 9.0 jaxb-api rhel9-AppStream RHEL 9.0 jaxb-api4 rhel9-AppStream RHEL 9.2 jaxb-codemodel rhel9-AppStream RHEL 9.2 jaxb-core rhel9-AppStream RHEL 9.2 jaxb-dtd-parser rhel9-AppStream RHEL 9.2 jaxb-istack-commons-runtime rhel9-AppStream RHEL 9.2 jaxb-istack-commons-tools rhel9-AppStream RHEL 9.2 jaxb-relaxng-datatype rhel9-AppStream RHEL 9.2 jaxb-rngom rhel9-AppStream RHEL 9.2 jaxb-runtime rhel9-AppStream RHEL 9.2 jaxb-txw2 rhel9-AppStream RHEL 9.2 jaxb-xjc rhel9-AppStream RHEL 9.2 jaxb-xsom rhel9-AppStream RHEL 9.2 jbigkit rhel9-AppStream RHEL 9.0 jbig2dec-devel rhel9-CRB RHEL 9.2 jigawatts-javadoc rhel9-AppStream RHEL 9.0 jitterentropy rhel9-BaseOS RHEL 9.0 jitterentropy-devel rhel9-CRB RHEL 9.0 jmc rhel9-CRB RHEL 9.2 jna-contrib rhel9-AppStream RHEL 9.0 kasumi-common rhel9-AppStream RHEL 9.0 kasumi-unicode rhel9-AppStream RHEL 9.0 kernel-debug-devel-matched rhel9-AppStream RHEL 9.0 kernel-devel-matched rhel9-AppStream RHEL 9.0 kernel-debug-modules-core rhel9-BaseOS RHEL 9.2 kernel-debug-uki-virt rhel9-BaseOS RHEL 9.2 kernel-modules-core rhel9-BaseOS RHEL 9.2 kernel-rt-debug-modules-core rhel9-NFV RHEL 9.2 kernel-rt-modules-core rhel9-NFV RHEL 9.2 kernel-srpm-macros rhel9-AppStream RHEL 9.0 kernel-uki-virt rhel9-BaseOS RHEL 9.2 kernel-uki-virt-addons rhel9-BaseOS RHEL 9.5 keylime rhel9-AppStream RHEL 9.1 keylime-agent-rust rhel9-AppStream RHEL 9.1 keylime-base rhel9-AppStream RHEL 9.1 keylime-registrar rhel9-AppStream RHEL 9.1 keylime-selinux rhel9-AppStream RHEL 9.1 keylime-tenant rhel9-AppStream RHEL 9.1 keylime-verifier rhel9-AppStream RHEL 9.1 khmer-os-battambang-fonts rhel9-AppStream RHEL 9.0 khmer-os-bokor-fonts rhel9-AppStream RHEL 9.0 khmer-os-content-fonts rhel9-AppStream RHEL 9.0 khmer-os-fasthand-fonts rhel9-AppStream RHEL 9.0 khmer-os-freehand-fonts rhel9-AppStream RHEL 9.0 khmer-os-handwritten-fonts rhel9-AppStream RHEL 9.0 khmer-os-metal-chrieng-fonts rhel9-AppStream RHEL 9.0 khmer-os-muol-fonts rhel9-AppStream RHEL 9.0 khmer-os-muol-fonts-all rhel9-AppStream RHEL 9.0 khmer-os-muol-pali-fonts rhel9-AppStream RHEL 9.0 khmer-os-siemreap-fonts rhel9-AppStream RHEL 9.0 khmer-os-system-fonts rhel9-AppStream RHEL 9.0 ksmtuned rhel9-AppStream RHEL 9.0 ktls-utils rhel9-BaseOS RHEL 9.5 lame rhel9-AppStream RHEL 9.0 langpacks-bo rhel9-AppStream RHEL 9.0 langpacks-core-af rhel9-AppStream RHEL 9.0 langpacks-core-am rhel9-AppStream RHEL 9.0 langpacks-core-ar rhel9-AppStream RHEL 9.0 langpacks-core-as rhel9-AppStream RHEL 9.0 langpacks-core-ast rhel9-AppStream RHEL 9.0 langpacks-core-be rhel9-AppStream RHEL 9.0 langpacks-core-bg rhel9-AppStream RHEL 9.0 langpacks-core-bn rhel9-AppStream RHEL 9.0 langpacks-core-bo rhel9-AppStream RHEL 9.0 langpacks-core-br rhel9-AppStream RHEL 9.0 langpacks-core-bs rhel9-AppStream RHEL 9.0 langpacks-core-ca rhel9-AppStream RHEL 9.0 langpacks-core-cs rhel9-AppStream RHEL 9.0 langpacks-core-cy rhel9-AppStream RHEL 9.0 langpacks-core-da rhel9-AppStream RHEL 9.0 langpacks-core-de rhel9-AppStream RHEL 9.0 langpacks-core-dz rhel9-AppStream RHEL 9.0 langpacks-core-el rhel9-AppStream RHEL 9.0 langpacks-core-en rhel9-AppStream RHEL 9.0 langpacks-core-en_GB rhel9-AppStream RHEL 9.0 langpacks-core-eo rhel9-AppStream RHEL 9.0 langpacks-core-es rhel9-AppStream RHEL 9.0 langpacks-core-et rhel9-AppStream RHEL 9.0 langpacks-core-eu rhel9-AppStream RHEL 9.0 
langpacks-core-fa rhel9-AppStream RHEL 9.0 langpacks-core-fi rhel9-AppStream RHEL 9.0 langpacks-core-font-af rhel9-AppStream RHEL 9.0 langpacks-core-font-am rhel9-AppStream RHEL 9.0 langpacks-core-font-ar rhel9-AppStream RHEL 9.0 langpacks-core-font-as rhel9-AppStream RHEL 9.0 langpacks-core-font-ast rhel9-AppStream RHEL 9.0 langpacks-core-font-be rhel9-AppStream RHEL 9.0 langpacks-core-font-bg rhel9-AppStream RHEL 9.0 langpacks-core-font-bn rhel9-AppStream RHEL 9.0 langpacks-core-font-bo rhel9-AppStream RHEL 9.0 langpacks-core-font-br rhel9-AppStream RHEL 9.0 langpacks-core-font-bs rhel9-AppStream RHEL 9.0 langpacks-core-font-ca rhel9-AppStream RHEL 9.0 langpacks-core-font-cs rhel9-AppStream RHEL 9.0 langpacks-core-font-cy rhel9-AppStream RHEL 9.0 langpacks-core-font-da rhel9-AppStream RHEL 9.0 langpacks-core-font-de rhel9-AppStream RHEL 9.0 langpacks-core-font-dz rhel9-AppStream RHEL 9.0 langpacks-core-font-el rhel9-AppStream RHEL 9.0 langpacks-core-font-en rhel9-AppStream RHEL 9.0 langpacks-core-font-eo rhel9-AppStream RHEL 9.0 langpacks-core-font-es rhel9-AppStream RHEL 9.0 langpacks-core-font-et rhel9-AppStream RHEL 9.0 langpacks-core-font-eu rhel9-AppStream RHEL 9.0 langpacks-core-font-fa rhel9-AppStream RHEL 9.0 langpacks-core-font-fi rhel9-AppStream RHEL 9.0 langpacks-core-font-fr rhel9-AppStream RHEL 9.0 langpacks-core-font-ga rhel9-AppStream RHEL 9.0 langpacks-core-font-gl rhel9-AppStream RHEL 9.0 langpacks-core-font-gu rhel9-AppStream RHEL 9.0 langpacks-core-font-he rhel9-AppStream RHEL 9.0 langpacks-core-font-hi rhel9-AppStream RHEL 9.0 langpacks-core-font-hr rhel9-AppStream RHEL 9.0 langpacks-core-font-hu rhel9-AppStream RHEL 9.0 langpacks-core-font-ia rhel9-AppStream RHEL 9.0 langpacks-core-font-id rhel9-AppStream RHEL 9.0 langpacks-core-font-is rhel9-AppStream RHEL 9.0 langpacks-core-font-it rhel9-AppStream RHEL 9.0 langpacks-core-font-ja rhel9-AppStream RHEL 9.0 langpacks-core-font-ka rhel9-AppStream RHEL 9.0 langpacks-core-font-kk rhel9-AppStream RHEL 9.0 langpacks-core-font-km rhel9-AppStream RHEL 9.0 langpacks-core-font-kn rhel9-AppStream RHEL 9.0 langpacks-core-font-ko rhel9-AppStream RHEL 9.0 langpacks-core-font-ku rhel9-AppStream RHEL 9.0 langpacks-core-font-lt rhel9-AppStream RHEL 9.0 langpacks-core-font-lv rhel9-AppStream RHEL 9.0 langpacks-core-font-mai rhel9-AppStream RHEL 9.0 langpacks-core-font-mk rhel9-AppStream RHEL 9.0 langpacks-core-font-ml rhel9-AppStream RHEL 9.0 langpacks-core-font-mr rhel9-AppStream RHEL 9.0 langpacks-core-font-ms rhel9-AppStream RHEL 9.0 langpacks-core-font-my rhel9-AppStream RHEL 9.0 langpacks-core-font-nb rhel9-AppStream RHEL 9.0 langpacks-core-font-ne rhel9-AppStream RHEL 9.0 langpacks-core-font-nl rhel9-AppStream RHEL 9.0 langpacks-core-font-nn rhel9-AppStream RHEL 9.0 langpacks-core-font-nr rhel9-AppStream RHEL 9.0 langpacks-core-font-nso rhel9-AppStream RHEL 9.0 langpacks-core-font-or rhel9-AppStream RHEL 9.0 langpacks-core-font-pa rhel9-AppStream RHEL 9.0 langpacks-core-font-pl rhel9-AppStream RHEL 9.0 langpacks-core-font-pt rhel9-AppStream RHEL 9.0 langpacks-core-font-ro rhel9-AppStream RHEL 9.0 langpacks-core-font-ru rhel9-AppStream RHEL 9.0 langpacks-core-font-si rhel9-AppStream RHEL 9.0 langpacks-core-font-sk rhel9-AppStream RHEL 9.0 langpacks-core-font-sl rhel9-AppStream RHEL 9.0 langpacks-core-font-sq rhel9-AppStream RHEL 9.0 langpacks-core-font-sr rhel9-AppStream RHEL 9.0 langpacks-core-font-ss rhel9-AppStream RHEL 9.0 langpacks-core-font-sv rhel9-AppStream RHEL 9.0 langpacks-core-font-ta rhel9-AppStream RHEL 9.0 
langpacks-core-font-te rhel9-AppStream RHEL 9.0 langpacks-core-font-th rhel9-AppStream RHEL 9.0 langpacks-core-font-tn rhel9-AppStream RHEL 9.0 langpacks-core-font-tr rhel9-AppStream RHEL 9.0 langpacks-core-font-ts rhel9-AppStream RHEL 9.0 langpacks-core-font-uk rhel9-AppStream RHEL 9.0 langpacks-core-font-ur rhel9-AppStream RHEL 9.0 langpacks-core-font-ve rhel9-AppStream RHEL 9.0 langpacks-core-font-vi rhel9-AppStream RHEL 9.0 langpacks-core-font-xh rhel9-AppStream RHEL 9.0 langpacks-core-font-yi rhel9-AppStream RHEL 9.0 langpacks-core-font-zh_CN rhel9-AppStream RHEL 9.0 langpacks-core-font-zh_HK rhel9-AppStream RHEL 9.0 langpacks-core-font-zh_TW rhel9-AppStream RHEL 9.0 langpacks-core-font-zu rhel9-AppStream RHEL 9.0 langpacks-core-fr rhel9-AppStream RHEL 9.0 langpacks-core-ga rhel9-AppStream RHEL 9.0 langpacks-core-gl rhel9-AppStream RHEL 9.0 langpacks-core-gu rhel9-AppStream RHEL 9.0 langpacks-core-he rhel9-AppStream RHEL 9.0 langpacks-core-hi rhel9-AppStream RHEL 9.0 langpacks-core-hr rhel9-AppStream RHEL 9.0 langpacks-core-hu rhel9-AppStream RHEL 9.0 langpacks-core-ia rhel9-AppStream RHEL 9.0 langpacks-core-id rhel9-AppStream RHEL 9.0 langpacks-core-is rhel9-AppStream RHEL 9.0 langpacks-core-it rhel9-AppStream RHEL 9.0 langpacks-core-ja rhel9-AppStream RHEL 9.0 langpacks-core-ka rhel9-AppStream RHEL 9.0 langpacks-core-kk rhel9-AppStream RHEL 9.0 langpacks-core-km rhel9-AppStream RHEL 9.0 langpacks-core-kn rhel9-AppStream RHEL 9.0 langpacks-core-ko rhel9-AppStream RHEL 9.0 langpacks-core-ku rhel9-AppStream RHEL 9.0 langpacks-core-lt rhel9-AppStream RHEL 9.0 langpacks-core-lv rhel9-AppStream RHEL 9.0 langpacks-core-mai rhel9-AppStream RHEL 9.0 langpacks-core-mk rhel9-AppStream RHEL 9.0 langpacks-core-ml rhel9-AppStream RHEL 9.0 langpacks-core-mr rhel9-AppStream RHEL 9.0 langpacks-core-ms rhel9-AppStream RHEL 9.0 langpacks-core-my rhel9-AppStream RHEL 9.0 langpacks-core-nb rhel9-AppStream RHEL 9.0 langpacks-core-ne rhel9-AppStream RHEL 9.0 langpacks-core-nl rhel9-AppStream RHEL 9.0 langpacks-core-nn rhel9-AppStream RHEL 9.0 langpacks-core-nr rhel9-AppStream RHEL 9.0 langpacks-core-nso rhel9-AppStream RHEL 9.0 langpacks-core-or rhel9-AppStream RHEL 9.0 langpacks-core-pa rhel9-AppStream RHEL 9.0 langpacks-core-pl rhel9-AppStream RHEL 9.0 langpacks-core-pt rhel9-AppStream RHEL 9.0 langpacks-core-pt_BR rhel9-AppStream RHEL 9.0 langpacks-core-ro rhel9-AppStream RHEL 9.0 langpacks-core-ru rhel9-AppStream RHEL 9.0 langpacks-core-si rhel9-AppStream RHEL 9.0 langpacks-core-sk rhel9-AppStream RHEL 9.0 langpacks-core-sl rhel9-AppStream RHEL 9.0 langpacks-core-sq rhel9-AppStream RHEL 9.0 langpacks-core-sr rhel9-AppStream RHEL 9.0 langpacks-core-ss rhel9-AppStream RHEL 9.0 langpacks-core-sv rhel9-AppStream RHEL 9.0 langpacks-core-ta rhel9-AppStream RHEL 9.0 langpacks-core-te rhel9-AppStream RHEL 9.0 langpacks-core-th rhel9-AppStream RHEL 9.0 langpacks-core-tn rhel9-AppStream RHEL 9.0 langpacks-core-tr rhel9-AppStream RHEL 9.0 langpacks-core-ts rhel9-AppStream RHEL 9.0 langpacks-core-uk rhel9-AppStream RHEL 9.0 langpacks-core-ur rhel9-AppStream RHEL 9.0 langpacks-core-ve rhel9-AppStream RHEL 9.0 langpacks-core-vi rhel9-AppStream RHEL 9.0 langpacks-core-xh rhel9-AppStream RHEL 9.0 langpacks-core-yi rhel9-AppStream RHEL 9.0 langpacks-core-zh_CN rhel9-AppStream RHEL 9.0 langpacks-core-zh_HK rhel9-AppStream RHEL 9.0 langpacks-core-zh_TW rhel9-AppStream RHEL 9.0 langpacks-core-zu rhel9-AppStream RHEL 9.0 langpacks-dz rhel9-AppStream RHEL 9.0 langpacks-eo rhel9-AppStream RHEL 9.0 langpacks-ka 
rhel9-AppStream RHEL 9.0 langpacks-km rhel9-AppStream RHEL 9.0 langpacks-ku rhel9-AppStream RHEL 9.0 langpacks-my rhel9-AppStream RHEL 9.0 langpacks-yi rhel9-AppStream RHEL 9.0 langpacks-zh_HK rhel9-AppStream RHEL 9.0 lapack64 rhel9-CRB RHEL 9.3 lapack64_ rhel9-CRB RHEL 9.0 ldns-doc rhel9-CRB RHEL 9.1 ldns-utils rhel9-CRB RHEL 9.1 ledmon-devel rhel9-CRB RHEL 9.5 ledmon-libs rhel9-BaseOS RHEL 9.5 liba52-devel rhel9-CRB RHEL 9.0 libabigail rhel9-CRB RHEL 9.2 libadwaita rhel9-AppStream RHEL 9.4 libadwaita-devel rhel9-CRB RHEL 9.4 libasan8 rhel9-AppStream RHEL 9.1 libblkio rhel9-AppStream RHEL 9.3 libblkio-devel rhel9-CRB RHEL 9.3 libblockdev-nvme rhel9-AppStream RHEL 9.2 libblockdev-tools rhel9-AppStream RHEL 9.0 libbpf-tools rhel9-AppStream RHEL 9.0 libbrotli rhel9-BaseOS RHEL 9.0 libburn-doc rhel9-AppStream RHEL 9.0 libbytesize-devel rhel9-CRB RHEL 9.5 libcbor rhel9-BaseOS RHEL 9.0 libcdr-devel rhel9-CRB RHEL 9.2 libdecor rhel9-AppStream RHEL 9.0 libdecor-devel rhel9-CRB RHEL 9.0 libdhash-devel rhel9-CRB RHEL 9.1 libdnf-plugin-subscription-manager rhel9-BaseOS RHEL 9.0 libdvdnav-devel rhel9-CRB RHEL 9.2 libeconf rhel9-BaseOS RHEL 9.0 libell rhel9-AppStream RHEL 9.0 libestr-devel rhel9-CRB RHEL 9.1 libev-devel rhel9-CRB RHEL 9.5 libfastjson-devel rhel9-CRB RHEL 9.3 libfdt-static rhel9-CRB RHEL 9.1 libfido2 rhel9-BaseOS RHEL 9.0 libfido2-devel rhel9-CRB RHEL 9.0 libfl-static rhel9-CRB RHEL 9.0 libfprint-devel rhel9-CRB RHEL 9.5 libfreehand-devel rhel9-CRB RHEL 9.2 libgccjit rhel9-AppStream RHEL 9.0 libgccjit-devel rhel9-AppStream RHEL 9.0 libgpiod rhel9-AppStream RHEL 9.1 libgpiod-c++ rhel9-AppStream RHEL 9.1 libgpiod-devel rhel9-AppStream RHEL 9.1 libgpiod-utils rhel9-AppStream RHEL 9.1 libhandy rhel9-AppStream RHEL 9.0 libi2c-devel rhel9-CRB RHEL 9.1 libi2cd rhel9-AppStream RHEL 9.1 libi2cd-devel rhel9-AppStream RHEL 9.1 libical-glib rhel9-AppStream RHEL 9.0 libical-glib-devel rhel9-AppStream RHEL 9.0 libiptcdata-devel rhel9-CRB RHEL 9.5 libisoburn-doc rhel9-AppStream RHEL 9.0 libisofs-doc rhel9-AppStream RHEL 9.0 libjcat rhel9-BaseOS RHEL 9.0 libjcat-devel rhel9-CRB RHEL 9.0 libkdumpfile rhel9-AppStream RHEL 9.4 libkdumpfile-devel rhel9-AppStream RHEL 9.5 libknet1-compress-zstd-plugin rhel9-HighAvailability RHEL 9.0 libldac rhel9-AppStream RHEL 9.0 liblognorm-devel rhel9-CRB RHEL 9.3 libmemcached-awesome rhel9-CRB RHEL 9.0 libmemcached-awesome-devel rhel9-CRB RHEL 9.0 libmemcached-awesome-tools rhel9-CRB RHEL 9.0 libmpeg2 rhel9-AppStream RHEL 9.0 libmpeg2-devel rhel9-CRB RHEL 9.2 libmspub-devel rhel9-CRB RHEL 9.2 libmypaint rhel9-AppStream RHEL 9.0 libnetapi rhel9-BaseOS RHEL 9.2 libnetapi-devel rhel9-CRB RHEL 9.2 libnvme rhel9-BaseOS RHEL 9.1 libnvme-devel rhel9-CRB RHEL 9.1 libotr rhel9-AppStream RHEL 9.0 libotr-devel rhel9-CRB RHEL 9.0 libpagemaker-devel rhel9-CRB RHEL 9.2 libperf rhel9-CRB RHEL 9.3 libpmem2 rhel9-AppStream RHEL 9.0 libpmem2-debug rhel9-AppStream RHEL 9.0 libpmem2-devel rhel9-AppStream RHEL 9.0 libqrtr-glib rhel9-BaseOS RHEL 9.0 libqxp-devel rhel9-CRB RHEL 9.2 librabbitmq-tools rhel9-AppStream RHEL 9.0 libradospp-devel rhel9-CRB RHEL 9.0 librelp-devel rhel9-CRB RHEL 9.3 libreoffice rhel9-AppStream RHEL 9.2 libreoffice-help-eo rhel9-AppStream RHEL 9.0 libreoffice-langpack-eo rhel9-AppStream RHEL 9.0 libreoffice-langpack-fy rhel9-AppStream RHEL 9.0 libsane-airscan rhel9-AppStream RHEL 9.0 libsbc rhel9-AppStream RHEL 9.0 libsepol-utils rhel9-AppStream RHEL 9.1 libshaderc rhel9-AppStream RHEL 9.0 libshaderc-devel rhel9-CRB RHEL 9.1 libsmartcols-devel rhel9-CRB RHEL 9.2 
libsndfile-utils rhel9-AppStream RHEL 9.0 libss-devel rhel9-CRB RHEL 9.4 libstoragemgmt-devel rhel9-CRB RHEL 9.1 libstoragemgmt-nfs-plugin rhel9-AppStream RHEL 9.0 libstoragemgmt-targetd-plugin rhel9-AppStream RHEL 9.0 libtimezonemap-devel rhel9-CRB RHEL 9.4 libtracecmd rhel9-BaseOS RHEL 9.0 libtracecmd-devel rhel9-CRB RHEL 9.0 libtraceevent rhel9-BaseOS RHEL 9.0 libtraceevent-devel rhel9-CRB RHEL 9.0 libtracefs rhel9-BaseOS RHEL 9.0 libtracefs-devel rhel9-CRB RHEL 9.0 libtracker-sparql rhel9-AppStream RHEL 9.0 libtsan2 rhel9-AppStream RHEL 9.1 liburing-devel rhel9-CRB RHEL 9.3 libvala rhel9-CRB RHEL 9.0 libvala-devel rhel9-CRB RHEL 9.0 libvdpau-trace rhel9-AppStream RHEL 9.0 libverto-libev rhel9-BaseOS RHEL 9.0 libvirt-client-qemu rhel9-CRB RHEL 9.2 libvirt-daemon-common rhel9-AppStream RHEL 9.3 libvirt-daemon-lock rhel9-AppStream RHEL 9.3 libvirt-daemon-log rhel9-AppStream RHEL 9.3 libvirt-daemon-plugin-lockd rhel9-AppStream RHEL 9.3 libvirt-daemon-plugin-sanlock rhel9-CRB RHEL 9.3 libvirt-daemon-proxy rhel9-AppStream RHEL 9.3 libvirt-ssh-proxy rhel9-AppStream RHEL 9.5 libvma-utils rhel9-AppStream RHEL 9.0 libwebp-tools rhel9-CRB RHEL 9.2 libwmf-devel rhel9-CRB RHEL 9.1 libwpe rhel9-AppStream RHEL 9.0 libwpe-devel rhel9-CRB RHEL 9.1 libxcrypt-compat rhel9-AppStream RHEL 9.0 libxcvt rhel9-AppStream RHEL 9.2 libxcvt-devel rhel9-CRB RHEL 9.2 libxdp-devel rhel9-CRB RHEL 9.1 libxdp-static rhel9-CRB RHEL 9.1 libzip-tools rhel9-AppStream RHEL 9.4 libzmf-devel rhel9-CRB RHEL 9.2 linux-firmware-whence rhel9-BaseOS RHEL 9.0 lld-test rhel9-AppStream RHEL 9.0 lldpd-devel rhel9-AppStream RHEL 9.5 lmdb rhel9-CRB RHEL 9.0 lorax-docs rhel9-AppStream RHEL 9.0 low-memory-monitor rhel9-AppStream RHEL 9.0 lua-rpm-macros rhel9-AppStream RHEL 9.0 lua-srpm-macros rhel9-AppStream RHEL 9.0 make-latest rhel9-AppStream RHEL 9.5 make441 rhel9-AppStream RHEL 9.5 man-db-cron rhel9-AppStream RHEL 9.2 mariadb-connector-c-doc rhel9-CRB RHEL 9.0 mariadb-connector-c-test rhel9-CRB RHEL 9.0 marshalparser rhel9-CRB RHEL 9.1 maven-openjdk21 rhel9-AppStream RHEL 9.4 maven-surefire-provider-junit5 rhel9-CRB RHEL 9.0 mecab-devel rhel9-CRB RHEL 9.3 memcached-selinux rhel9-AppStream RHEL 9.0 mesa-demos rhel9-AppStream RHEL 9.0 mingw-qemu-ga-win rhel9-AppStream RHEL 9.3 mingw-w64-tools rhel9-CRB RHEL 9.2 mingw32-libgcc rhel9-CRB RHEL 9.1 mingw32-libstdc++ rhel9-CRB RHEL 9.3 mingw32-pcre2 rhel9-CRB RHEL 9.4 mingw32-pcre2-static rhel9-CRB RHEL 9.4 mingw32-srvany rhel9-AppStream RHEL 9.0 mingw64-libgcc rhel9-CRB RHEL 9.1 mingw64-libstdc++ rhel9-CRB RHEL 9.3 mingw64-pcre2 rhel9-CRB RHEL 9.4 mingw64-pcre2-static rhel9-CRB RHEL 9.4 mkfontscale rhel9-AppStream RHEL 9.0 mkpasswd rhel9-AppStream RHEL 9.1 mod_jk rhel9-AppStream RHEL 9.0 mod_lua rhel9-AppStream RHEL 9.0 mod_proxy_cluster rhel9-AppStream RHEL 9.0 mpdecimal rhel9-AppStream RHEL 9.2 mpdecimal++ rhel9-CRB RHEL 9.2 mpdecimal-devel rhel9-CRB RHEL 9.2 mpdecimal-doc rhel9-CRB RHEL 9.2 mpich-autoload rhel9-AppStream RHEL 9.0 mptcpd rhel9-AppStream RHEL 9.0 mypaint-brushes rhel9-AppStream RHEL 9.0 mythes-eo rhel9-AppStream RHEL 9.0 nbdkit-selinux rhel9-AppStream RHEL 9.5 nbdkit-srpm-macros rhel9-CRB RHEL 9.1 netronome-firmware rhel9-BaseOS RHEL 9.0 nfs-utils-coreos rhel9-AppStream RHEL 9.0 nfsv4-client-utils rhel9-AppStream RHEL 9.1 nginx-core rhel9-AppStream RHEL 9.1 nmstate-devel rhel9-CRB RHEL 9.1 nmstate-static rhel9-CRB RHEL 9.1 nodejs-devel rhel9-AppStream RHEL 9.1 nodejs-libs rhel9-AppStream RHEL 9.0 nodejs-packaging rhel9-AppStream RHEL 9.1 nodejs-packaging-bundler 
rhel9-AppStream RHEL 9.1 npth-devel rhel9-CRB RHEL 9.0 nss_wrapper-libs rhel9-AppStream RHEL 9.1 nvme-stas rhel9-AppStream RHEL 9.1 ocaml-augeas rhel9-CRB RHEL 9.5 ocaml-augeas-devel rhel9-CRB RHEL 9.5 ocaml-brlapi rhel9-CRB RHEL 9.1 ocaml-calendar rhel9-CRB RHEL 9.1 ocaml-calendar-devel rhel9-CRB RHEL 9.1 ocaml-camomile rhel9-CRB RHEL 9.1 ocaml-camomile-data rhel9-CRB RHEL 9.1 ocaml-camomile-devel rhel9-CRB RHEL 9.1 ocaml-csexp rhel9-CRB RHEL 9.1 ocaml-csexp-devel rhel9-CRB RHEL 9.1 ocaml-csv rhel9-CRB RHEL 9.1 ocaml-csv-devel rhel9-CRB RHEL 9.1 ocaml-curses rhel9-CRB RHEL 9.1 ocaml-curses-devel rhel9-CRB RHEL 9.1 ocaml-docs rhel9-CRB RHEL 9.1 ocaml-dune rhel9-CRB RHEL 9.1 ocaml-dune-devel rhel9-CRB RHEL 9.1 ocaml-dune-doc rhel9-CRB RHEL 9.1 ocaml-dune-emacs rhel9-CRB RHEL 9.1 ocaml-fileutils rhel9-CRB RHEL 9.1 ocaml-fileutils-devel rhel9-CRB RHEL 9.1 ocaml-gettext rhel9-CRB RHEL 9.1 ocaml-gettext-devel rhel9-CRB RHEL 9.1 ocaml-libvirt rhel9-CRB RHEL 9.1 ocaml-libvirt-devel rhel9-CRB RHEL 9.1 ocaml-ocamlbuild-doc rhel9-CRB RHEL 9.1 ocaml-source rhel9-CRB RHEL 9.1 ocaml-xml-light rhel9-CRB RHEL 9.1 ocaml-xml-light-devel rhel9-CRB RHEL 9.1 open-vm-tools-salt-minion rhel9-AppStream RHEL 9.1 open-vm-tools-test rhel9-AppStream RHEL 9.0 openblas-serial rhel9-AppStream RHEL 9.0 openexr rhel9-AppStream RHEL 9.0 openexr-devel rhel9-CRB RHEL 9.0 openexr-libs rhel9-AppStream RHEL 9.0 openldap-compat rhel9-BaseOS RHEL 9.0 openmpi-java rhel9-AppStream RHEL 9.0 openslp-devel rhel9-CRB RHEL 9.0 openslp-server rhel9-AppStream RHEL 9.0 openssl-fips-provider rhel9-BaseOS RHEL 9.4 openssl-fips-provider-so rhel9-BaseOS RHEL 9.5 opentelemetry-collector rhel9-AppStream RHEL 9.5 osbuild-depsolve-dnf rhel9-AppStream RHEL 9.4 pam-docs rhel9-AppStream RHEL 9.0 pam_wrapper rhel9-CRB RHEL 9.1 passt rhel9-AppStream RHEL 9.2 passt-selinux rhel9-AppStream RHEL 9.2 pbzip2 rhel9-AppStream RHEL 9.0 pcp-export-pcp2openmetrics rhel9-AppStream RHEL 9.5 pcp-geolocate rhel9-AppStream RHEL 9.4 pcp-pmda-bpf rhel9-AppStream RHEL 9.0 pcp-pmda-farm rhel9-AppStream RHEL 9.4 pcp-pmda-resctrl rhel9-AppStream RHEL 9.4 pcp-pmda-uwsgi rhel9-AppStream RHEL 9.5 pcre2-syntax rhel9-BaseOS RHEL 9.0 pcre2-tools rhel9-CRB RHEL 9.4 perl-BSD-Resource rhel9-AppStream RHEL 9.0 perl-Cyrus rhel9-AppStream RHEL 9.0 perl-DBD-MariaDB rhel9-AppStream RHEL 9.0 perl-ldns rhel9-CRB RHEL 9.1 perl-libxml-perl rhel9-CRB RHEL 9.5 perl-Mail-AuthenticationResults rhel9-AppStream RHEL 9.0 perl-Module-Signature rhel9-AppStream RHEL 9.0 perl-Net-CIDR-Lite rhel9-AppStream RHEL 9.0 perl-Net-DNS-Nameserver rhel9-CRB RHEL 9.2 perl-XString rhel9-CRB RHEL 9.0 pf-bb-config rhel9-AppStream RHEL 9.2 pgvector rhel9-AppStream RHEL 9.5 php-libguestfs rhel9-CRB RHEL 9.1 pinentry-tty rhel9-AppStream RHEL 9.0 pipewire-alsa rhel9-AppStream RHEL 9.0 pipewire-gstreamer rhel9-AppStream RHEL 9.0 pipewire-jack-audio-connection-kit rhel9-AppStream RHEL 9.0 pipewire-jack-audio-connection-kit-devel rhel9-AppStream RHEL 9.0 pipewire-jack-audio-connection-kit-libs rhel9-AppStream RHEL 9.4 pipewire-module-x11 rhel9-AppStream RHEL 9.3 pipewire-pulseaudio rhel9-AppStream RHEL 9.0 pki-jackson-annotations rhel9-AppStream RHEL 9.0 pki-jackson-core rhel9-AppStream RHEL 9.0 pki-jackson-databind rhel9-AppStream RHEL 9.0 pki-jackson-jaxrs-json-provider rhel9-AppStream RHEL 9.0 pki-jackson-jaxrs-providers rhel9-AppStream RHEL 9.0 pki-jackson-module-jaxb-annotations rhel9-AppStream RHEL 9.0 pki-resteasy rhel9-AppStream RHEL 9.3 pki-resteasy-client rhel9-AppStream RHEL 9.0 pki-resteasy-core 
rhel9-AppStream RHEL 9.0 pki-resteasy-jackson2-provider rhel9-AppStream RHEL 9.0 pki-resteasy-servlet-initializer rhel9-AppStream RHEL 9.4 plotnetcfg rhel9-CRB RHEL 9.0 pmix-pmi rhel9-AppStream RHEL 9.0 pmix-pmi-devel rhel9-CRB RHEL 9.0 pmix-tools rhel9-AppStream RHEL 9.0 poppler-data-devel rhel9-CRB RHEL 9.2 poppler-glib-doc rhel9-CRB RHEL 9.4 postfix-lmdb rhel9-AppStream RHEL 9.3 postgresql-docs rhel9-CRB RHEL 9.1 postgresql-private-devel rhel9-CRB RHEL 9.0 postgresql-private-libs rhel9-AppStream RHEL 9.0 postgresql-static rhel9-CRB RHEL 9.1 postgresql-test-rpm-macros rhel9-AppStream RHEL 9.2 postgresql-upgrade-devel rhel9-CRB RHEL 9.1 power-profiles-daemon rhel9-AppStream RHEL 9.0 procps-ng-devel rhel9-CRB RHEL 9.2 pt-sans-fonts rhel9-AppStream RHEL 9.0 pybind11-devel rhel9-CRB RHEL 9.0 pyparsing-doc rhel9-CRB RHEL 9.0 pyproject-rpm-macros rhel9-CRB RHEL 9.0 pyproject-srpm-macros rhel9-AppStream RHEL 9.2 python-dateutil-doc rhel9-CRB RHEL 9.0 python-packaging-doc rhel9-CRB RHEL 9.0 python-sphinx-doc rhel9-CRB RHEL 9.0 python-sphinx_rtd_theme-doc rhel9-CRB RHEL 9.0 python-unversioned-command rhel9-AppStream RHEL 9.0 python3 rhel9-BaseOS RHEL 9.0 python3-alembic rhel9-AppStream RHEL 9.1 python3-appdirs rhel9-AppStream RHEL 9.0 python3-awscrt rhel9-AppStream RHEL 9.4 python3-babeltrace rhel9-CRB RHEL 9.1 python3-botocore rhel9-AppStream RHEL 9.4 python3-cairo-devel rhel9-CRB RHEL 9.1 python3-capstone rhel9-CRB RHEL 9.2 python3-cepces rhel9-AppStream RHEL 9.4 python3-colorama rhel9-AppStream RHEL 9.5 python3-debug rhel9-CRB RHEL 9.0 python3-devel rhel9-AppStream RHEL 9.0 python3-dnf-plugin-leaves rhel9-AppStream RHEL 9.3 python3-dnf-plugin-modulesync rhel9-AppStream RHEL 9.1 python3-dnf-plugin-show-leaves rhel9-AppStream RHEL 9.3 python3-file-magic rhel9-AppStream RHEL 9.0 python3-flit-core rhel9-CRB RHEL 9.4 python3-gluster rhel9-AppStream RHEL 9.0 python3-gobject-base-noarch rhel9-BaseOS RHEL 9.1 python3-gobject-devel rhel9-CRB RHEL 9.0 python3-greenlet rhel9-AppStream RHEL 9.1 python3-greenlet-devel rhel9-CRB RHEL 9.3 python3-i2c-tools rhel9-AppStream RHEL 9.1 python3-idm-pki rhel9-AppStream RHEL 9.1 python3-imath rhel9-AppStream RHEL 9.0 python3-iniconfig rhel9-CRB RHEL 9.0 python3-keylime rhel9-AppStream RHEL 9.1 python3-lark-parser rhel9-AppStream RHEL 9.1 python3-lasso rhel9-AppStream RHEL 9.2 python3-ldns rhel9-CRB RHEL 9.1 python3-libevdev rhel9-AppStream RHEL 9.0 python3-libfdt rhel9-CRB RHEL 9.1 python3-libgpiod rhel9-AppStream RHEL 9.1 python3-libnvme rhel9-AppStream RHEL 9.1 python3-net-snmp rhel9-AppStream RHEL 9.0 python3-pacemaker rhel9-HighAvailability RHEL 9.3 python3-pefile rhel9-AppStream RHEL 9.3 python3-prompt-toolkit rhel9-AppStream RHEL 9.4 python3-psutil-tests rhel9-CRB RHEL 9.0 python3-pybind11 rhel9-CRB RHEL 9.0 python3-pycdlib rhel9-AppStream RHEL 9.0 python3-pyelftools rhel9-AppStream RHEL 9.0 python3-pyrsistent rhel9-AppStream RHEL 9.0 python3-pytest-subtests rhel9-CRB RHEL 9.0 python3-pytest-timeout rhel9-CRB RHEL 9.0 python3-readthedocs-sphinx-ext rhel9-CRB RHEL 9.0 python3-requests+security rhel9-AppStream RHEL 9.0 python3-requests+socks rhel9-AppStream RHEL 9.0 python3-requests-gssapi rhel9-AppStream RHEL 9.0 python3-resolvelib rhel9-AppStream RHEL 9.0 python3-ruamel-yaml rhel9-CRB RHEL 9.0 python3-ruamel-yaml-clib rhel9-CRB RHEL 9.0 python3-samba-dc rhel9-BaseOS RHEL 9.2 python3-samba-devel rhel9-CRB RHEL 9.2 python3-samba-test rhel9-CRB RHEL 9.2 python3-scapy rhel9-AppStream RHEL 9.0 python3-scour rhel9-AppStream RHEL 9.0 python3-setuptools_scm+toml 
rhel9-CRB RHEL 9.0 python3-sphinx-latex rhel9-CRB RHEL 9.0 python3-sphinxcontrib-applehelp rhel9-CRB RHEL 9.0 python3-sphinxcontrib-devhelp rhel9-CRB RHEL 9.0 python3-sphinxcontrib-htmlhelp rhel9-CRB RHEL 9.0 python3-sphinxcontrib-httpdomain rhel9-CRB RHEL 9.0 python3-sphinxcontrib-jsmath rhel9-CRB RHEL 9.0 python3-sphinxcontrib-qthelp rhel9-CRB RHEL 9.0 python3-sphinxcontrib-serializinghtml rhel9-CRB RHEL 9.0 python3-sqlalchemy rhel9-AppStream RHEL 9.1 python3-toml rhel9-AppStream RHEL 9.0 python3-tomli rhel9-AppStream RHEL 9.3 python3-tornado rhel9-AppStream RHEL 9.1 python3-urllib-gssapi rhel9-AppStream RHEL 9.0 python3-virt-firmware rhel9-AppStream RHEL 9.2 python3-volume_key rhel9-AppStream RHEL 9.0 python3-wcwidth rhel9-CRB RHEL 9.0 python3-websockets rhel9-AppStream RHEL 9.4 python3.11 rhel9-AppStream RHEL 9.2 python3.11-attrs rhel9-CRB RHEL 9.2 python3.11-cffi rhel9-AppStream RHEL 9.2 python3.11-charset-normalizer rhel9-AppStream RHEL 9.2 python3.11-cryptography rhel9-AppStream RHEL 9.2 python3.11-Cython rhel9-CRB RHEL 9.2 python3.11-debug rhel9-CRB RHEL 9.2 python3.11-devel rhel9-AppStream RHEL 9.2 python3.11-idle rhel9-CRB RHEL 9.2 python3.11-idna rhel9-AppStream RHEL 9.2 python3.11-iniconfig rhel9-CRB RHEL 9.2 python3.11-libs rhel9-AppStream RHEL 9.2 python3.11-lxml rhel9-AppStream RHEL 9.2 python3.11-mod_wsgi rhel9-AppStream RHEL 9.2 python3.11-numpy rhel9-AppStream RHEL 9.2 python3.11-numpy-f2py rhel9-AppStream RHEL 9.2 python3.11-packaging rhel9-CRB RHEL 9.2 python3.11-pip rhel9-AppStream RHEL 9.2 python3.11-pip-wheel rhel9-AppStream RHEL 9.2 python3.11-pluggy rhel9-CRB RHEL 9.2 python3.11-ply rhel9-AppStream RHEL 9.2 python3.11-psycopg2 rhel9-AppStream RHEL 9.2 python3.11-psycopg2-debug rhel9-CRB RHEL 9.2 python3.11-psycopg2-tests rhel9-CRB RHEL 9.2 python3.11-pybind11 rhel9-CRB RHEL 9.2 python3.11-pybind11-devel rhel9-CRB RHEL 9.2 python3.11-pycparser rhel9-AppStream RHEL 9.2 python3.11-PyMySQL rhel9-AppStream RHEL 9.2 python3.11-PyMySQL+rsa rhel9-AppStream RHEL 9.2 python3.11-pyparsing rhel9-CRB RHEL 9.2 python3.11-pysocks rhel9-AppStream RHEL 9.2 python3.11-pytest rhel9-CRB RHEL 9.2 python3.11-pyyaml rhel9-AppStream RHEL 9.2 python3.11-requests rhel9-AppStream RHEL 9.2 python3.11-requests+security rhel9-AppStream RHEL 9.2 python3.11-requests+socks rhel9-AppStream RHEL 9.2 python3.11-scipy rhel9-AppStream RHEL 9.2 python3.11-semantic_version rhel9-CRB RHEL 9.2 python3.11-setuptools rhel9-AppStream RHEL 9.2 python3.11-setuptools-rust rhel9-CRB RHEL 9.2 python3.11-setuptools-wheel rhel9-AppStream RHEL 9.2 python3.11-six rhel9-AppStream RHEL 9.2 python3.11-test rhel9-CRB RHEL 9.2 python3.11-tkinter rhel9-AppStream RHEL 9.2 python3.11-urllib3 rhel9-AppStream RHEL 9.2 python3.11-wheel rhel9-AppStream RHEL 9.2 python3.11-wheel-wheel rhel9-CRB RHEL 9.2 python3.12 rhel9-AppStream RHEL 9.4 python3.12-cffi rhel9-AppStream RHEL 9.4 python3.12-charset-normalizer rhel9-AppStream RHEL 9.4 python3.12-cryptography rhel9-AppStream RHEL 9.4 python3.12-Cython rhel9-CRB RHEL 9.4 python3.12-debug rhel9-CRB RHEL 9.4 python3.12-devel rhel9-AppStream RHEL 9.4 python3.12-flit-core rhel9-CRB RHEL 9.4 python3.12-idle rhel9-CRB RHEL 9.4 python3.12-idna rhel9-AppStream RHEL 9.4 python3.12-iniconfig rhel9-CRB RHEL 9.4 python3.12-libs rhel9-AppStream RHEL 9.4 python3.12-lxml rhel9-AppStream RHEL 9.4 python3.12-mod_wsgi rhel9-AppStream RHEL 9.4 python3.12-numpy rhel9-AppStream RHEL 9.4 python3.12-numpy-f2py rhel9-AppStream RHEL 9.4 python3.12-packaging rhel9-CRB RHEL 9.4 python3.12-pip rhel9-AppStream 
RHEL 9.4 python3.12-pip-wheel rhel9-AppStream RHEL 9.4 python3.12-pluggy rhel9-CRB RHEL 9.4 python3.12-ply rhel9-AppStream RHEL 9.4 python3.12-psycopg2 rhel9-AppStream RHEL 9.4 python3.12-psycopg2-debug rhel9-CRB RHEL 9.4 python3.12-psycopg2-tests rhel9-CRB RHEL 9.4 python3.12-pybind11 rhel9-CRB RHEL 9.4 python3.12-pybind11-devel rhel9-CRB RHEL 9.4 python3.12-pycparser rhel9-AppStream RHEL 9.4 python3.12-PyMySQL rhel9-AppStream RHEL 9.4 python3.12-PyMySQL+rsa rhel9-AppStream RHEL 9.4 python3.12-pytest rhel9-CRB RHEL 9.4 python3.12-pyyaml rhel9-AppStream RHEL 9.4 python3.12-requests rhel9-AppStream RHEL 9.4 python3.12-scipy rhel9-AppStream RHEL 9.4 python3.12-scipy-tests rhel9-CRB RHEL 9.4 python3.12-semantic_version rhel9-CRB RHEL 9.4 python3.12-setuptools rhel9-AppStream RHEL 9.4 python3.12-setuptools-rust rhel9-CRB RHEL 9.4 python3.12-setuptools-wheel rhel9-CRB RHEL 9.4 python3.12-test rhel9-CRB RHEL 9.4 python3.12-tkinter rhel9-AppStream RHEL 9.4 python3.12-urllib3 rhel9-AppStream RHEL 9.4 python3.12-wheel rhel9-AppStream RHEL 9.4 python3.12-wheel-wheel rhel9-CRB RHEL 9.4 qatlib-service rhel9-AppStream RHEL 9.1 qemu-ga-win rhel9-AppStream RHEL 9.0 qemu-kvm-audio-pa rhel9-AppStream RHEL 9.0 qemu-kvm-block-blkio rhel9-AppStream RHEL 9.3 qemu-kvm-device-display-virtio-gpu rhel9-AppStream RHEL 9.0 qemu-kvm-device-display-virtio-gpu-gl rhel9-AppStream RHEL 9.0 qemu-kvm-device-display-virtio-gpu-pci rhel9-AppStream RHEL 9.0 qemu-kvm-device-display-virtio-gpu-pci-gl rhel9-AppStream RHEL 9.0 qemu-kvm-device-display-virtio-vga rhel9-AppStream RHEL 9.0 qemu-kvm-device-display-virtio-vga-gl rhel9-AppStream RHEL 9.0 qemu-kvm-device-usb-host rhel9-AppStream RHEL 9.0 qemu-kvm-device-usb-redirect rhel9-AppStream RHEL 9.0 qemu-kvm-tools rhel9-AppStream RHEL 9.0 qemu-kvm-ui-egl-headless rhel9-AppStream RHEL 9.0 qemu-pr-helper rhel9-AppStream RHEL 9.0 qpdf rhel9-CRB RHEL 9.1 qpdf-devel rhel9-CRB RHEL 9.2 qt5 rhel9-AppStream RHEL 9.0 qt5-doc rhel9-AppStream RHEL 9.0 qt5-qt3d-doc rhel9-AppStream RHEL 9.0 qt5-qtbase-doc rhel9-AppStream RHEL 9.0 qt5-qtcharts-doc rhel9-AppStream RHEL 9.0 qt5-qtconnectivity-doc rhel9-AppStream RHEL 9.0 qt5-qtdatavis3d-doc rhel9-AppStream RHEL 9.0 qt5-qtdeclarative-doc rhel9-AppStream RHEL 9.0 qt5-qtgamepad-doc rhel9-AppStream RHEL 9.0 qt5-qtgraphicaleffects-doc rhel9-AppStream RHEL 9.0 qt5-qtimageformats-doc rhel9-AppStream RHEL 9.0 qt5-qtlocation-doc rhel9-AppStream RHEL 9.0 qt5-qtmultimedia-doc rhel9-AppStream RHEL 9.0 qt5-qtpurchasing-doc rhel9-AppStream RHEL 9.0 qt5-qtquickcontrols-doc rhel9-AppStream RHEL 9.0 qt5-qtquickcontrols2-doc rhel9-AppStream RHEL 9.0 qt5-qtremoteobjects-doc rhel9-AppStream RHEL 9.0 qt5-qtscript-doc rhel9-AppStream RHEL 9.0 qt5-qtscxml-doc rhel9-AppStream RHEL 9.0 qt5-qtsensors-doc rhel9-AppStream RHEL 9.0 qt5-qtserialbus-doc rhel9-AppStream RHEL 9.0 qt5-qtserialport-doc rhel9-AppStream RHEL 9.0 qt5-qtspeech-doc rhel9-AppStream RHEL 9.0 qt5-qtsvg-doc rhel9-AppStream RHEL 9.0 qt5-qttools-doc rhel9-AppStream RHEL 9.0 qt5-qtvirtualkeyboard-doc rhel9-AppStream RHEL 9.0 qt5-qtwayland-doc rhel9-AppStream RHEL 9.0 qt5-qtwebchannel-doc rhel9-AppStream RHEL 9.0 qt5-qtwebsockets-doc rhel9-AppStream RHEL 9.0 qt5-qtwebview-doc rhel9-AppStream RHEL 9.0 qt5-qtx11extras-doc rhel9-AppStream RHEL 9.0 qt5-qtxmlpatterns-doc rhel9-AppStream RHEL 9.0 realtime-setup rhel9-NFV RHEL 9.0 realtime-tests rhel9-AppStream RHEL 9.0 redhat-display-fonts rhel9-AppStream RHEL 9.0 redhat-cloud-client-configuration rhel9-AppStream RHEL 9.1 redhat-mono-fonts rhel9-AppStream RHEL 9.0 
redhat-sb-certs rhel9-CRB RHEL 9.0 redhat-text-fonts rhel9-AppStream RHEL 9.0 resource-agents-cloud rhel9-HighAvailability RHEL 9.0 restore rhel9-BaseOS RHEL 9.0 rhc-devel rhel9-CRB RHEL 9.1 rhel-net-naming-sysattrs rhel9-BaseOS RHEL 9.4 rpm-plugin-audit rhel9-BaseOS RHEL 9.0 rpm-sign-libs rhel9-BaseOS RHEL 9.0 rsyslog-logrotate rhel9-AppStream RHEL 9.0 rtla rhel9-AppStream RHEL 9.2 ruby-bundled-gems rhel9-AppStream RHEL 9.1 rubygem-racc rhel9-AppStream RHEL 9.4 rubygem-thread_order rhel9-CRB RHEL 9.0 rust-analyzer rhel9-AppStream RHEL 9.2 rust-std-static-wasm32-wasip1 rhel9-AppStream RHEL 9.5 rv rhel9-AppStream RHEL 9.3 s390utils rhel9-AppStream RHEL 9.4 s390utils-se-data rhel9-AppStream RHEL 9.4 s-nail rhel9-AppStream RHEL 9.0 samba-dc-libs rhel9-BaseOS RHEL 9.2 samba-dcerpc rhel9-BaseOS RHEL 9.2 samba-gpupdate rhel9-AppStream RHEL 9.5 samba-ldb-ldap-modules rhel9-BaseOS RHEL 9.2 samba-tools rhel9-BaseOS RHEL 9.2 samba-usershares rhel9-BaseOS RHEL 9.2 sane-airscan rhel9-AppStream RHEL 9.0 sdl12-compat rhel9-AppStream RHEL 9.0 sdl12-compat-devel rhel9-CRB RHEL 9.0 setxkbmap rhel9-AppStream RHEL 9.0 sid rhel9-AppStream RHEL 9.0 sid-base-libs rhel9-AppStream RHEL 9.0 sid-iface-libs rhel9-AppStream RHEL 9.0 sid-log-libs rhel9-AppStream RHEL 9.0 sid-mod-block-blkid rhel9-AppStream RHEL 9.0 sid-mod-block-dm-mpath rhel9-AppStream RHEL 9.0 sid-mod-dummies rhel9-AppStream RHEL 9.0 sid-resource-libs rhel9-AppStream RHEL 9.0 sid-tools rhel9-AppStream RHEL 9.0 sip6 rhel9-AppStream RHEL 9.1 source-highlight-devel rhel9-CRB RHEL 9.5 speech-tools-libs rhel9-AppStream RHEL 9.0 ssh-key-dir rhel9-AppStream RHEL 9.0 sssd-idp rhel9-AppStream RHEL 9.1 sssd-passkey rhel9-BaseOS RHEL 9.4 stratisd-tools rhel9-AppStream RHEL 9.3 sudo-python-plugin rhel9-AppStream RHEL 9.0 synce4l rhel9-AppStream RHEL 9.2 sysprof-capture-devel rhel9-AppStream RHEL 9.0 systemd-boot-unsigned rhel9-CRB RHEL 9.2 systemd-oomd rhel9-BaseOS RHEL 9.0 systemd-resolved rhel9-BaseOS RHEL 9.0 systemd-rpm-macros rhel9-BaseOS RHEL 9.0 systemd-ukify rhel9-AppStream RHEL 9.5 tesseract-langpack-eng rhel9-AppStream RHEL 9.0 tesseract-tessdata-doc rhel9-AppStream RHEL 9.0 tex-preview rhel9-AppStream RHEL 9.0 texlive-alphalph rhel9-AppStream RHEL 9.0 texlive-atbegshi rhel9-AppStream RHEL 9.0 texlive-attachfile2 rhel9-AppStream RHEL 9.0 texlive-atveryend rhel9-AppStream RHEL 9.0 texlive-auxhook rhel9-AppStream RHEL 9.0 texlive-bigintcalc rhel9-AppStream RHEL 9.0 texlive-bitset rhel9-AppStream RHEL 9.0 texlive-bookmark rhel9-AppStream RHEL 9.0 texlive-catchfile rhel9-AppStream RHEL 9.0 texlive-colorprofiles rhel9-AppStream RHEL 9.0 texlive-dehyph rhel9-AppStream RHEL 9.0 texlive-epstopdf-pkg rhel9-AppStream RHEL 9.0 texlive-etexcmds rhel9-AppStream RHEL 9.0 texlive-etoc rhel9-AppStream RHEL 9.0 texlive-footnotehyper rhel9-AppStream RHEL 9.0 texlive-gettitlestring rhel9-AppStream RHEL 9.0 texlive-gnu-freefont rhel9-CRB RHEL 9.0 texlive-grfext rhel9-AppStream RHEL 9.0 texlive-grffile rhel9-AppStream RHEL 9.0 texlive-hanging rhel9-AppStream RHEL 9.0 texlive-hobsub rhel9-AppStream RHEL 9.0 texlive-hologo rhel9-AppStream RHEL 9.0 texlive-hycolor rhel9-AppStream RHEL 9.0 texlive-hyphenex rhel9-AppStream RHEL 9.0 texlive-ifplatform rhel9-AppStream RHEL 9.0 texlive-infwarerr rhel9-AppStream RHEL 9.0 texlive-intcalc rhel9-AppStream RHEL 9.0 texlive-kvdefinekeys rhel9-AppStream RHEL 9.0 texlive-kvoptions rhel9-AppStream RHEL 9.0 texlive-kvsetkeys rhel9-AppStream RHEL 9.0 texlive-l3backend rhel9-AppStream RHEL 9.0 texlive-latexbug rhel9-AppStream RHEL 9.0 
texlive-letltxmacro rhel9-AppStream RHEL 9.0 texlive-listofitems rhel9-AppStream RHEL 9.0 texlive-ltxcmds rhel9-AppStream RHEL 9.0 texlive-luahbtex rhel9-AppStream RHEL 9.0 texlive-lwarp rhel9-AppStream RHEL 9.0 texlive-minitoc rhel9-AppStream RHEL 9.0 texlive-modes rhel9-AppStream RHEL 9.0 texlive-newfloat rhel9-AppStream RHEL 9.0 texlive-newunicodechar rhel9-AppStream RHEL 9.0 texlive-notoccite rhel9-AppStream RHEL 9.0 texlive-obsolete rhel9-AppStream RHEL 9.0 texlive-pdfcolmk rhel9-AppStream RHEL 9.0 texlive-pdfescape rhel9-AppStream RHEL 9.0 texlive-pdflscape rhel9-AppStream RHEL 9.0 texlive-pdftexcmds rhel9-AppStream RHEL 9.0 texlive-ragged2e rhel9-AppStream RHEL 9.0 texlive-refcount rhel9-AppStream RHEL 9.0 texlive-rerunfilecheck rhel9-AppStream RHEL 9.0 texlive-sansmathaccent rhel9-AppStream RHEL 9.0 texlive-stackengine rhel9-AppStream RHEL 9.0 texlive-stringenc rhel9-AppStream RHEL 9.0 texlive-texlive-scripts-extra rhel9-AppStream RHEL 9.0 texlive-translator rhel9-AppStream RHEL 9.0 texlive-ucharcat rhel9-AppStream RHEL 9.0 texlive-uniquecounter rhel9-AppStream RHEL 9.0 texlive-wasy-type1 rhel9-AppStream RHEL 9.0 texlive-zref rhel9-AppStream RHEL 9.0 tomcat rhel9-AppStream RHEL 9.2 tomcat-admin-webapps rhel9-AppStream RHEL 9.2 tomcat-docs-webapp rhel9-AppStream RHEL 9.2 tomcat-el-3.0-api rhel9-AppStream RHEL 9.2 tomcat-jsp-2.3-api rhel9-AppStream RHEL 9.2 tomcat-lib rhel9-AppStream RHEL 9.2 tomcat-servlet-4.0-api rhel9-AppStream RHEL 9.2 tomcat-webapps rhel9-AppStream RHEL 9.2 totem-video-thumbnailer rhel9-AppStream RHEL 9.0 tpm2-pkcs11 rhel9-AppStream RHEL 9.0 tpm2-pkcs11-tools rhel9-AppStream RHEL 9.0 tuned-ppd rhel9-AppStream RHEL 9.5 tuned-profiles-postgresql rhel9-AppStream RHEL 9.1 tuned-profiles-spectrumscale rhel9-AppStream RHEL 9.0 twolame rhel9-AppStream RHEL 9.0 uchardet rhel9-CRB RHEL 9.0 uchardet-devel rhel9-CRB RHEL 9.1 uki-direct rhel9-AppStream RHEL 9.4 unbound-devel rhel9-CRB RHEL 9.1 unifdef rhel9-CRB RHEL 9.3 uresourced rhel9-AppStream RHEL 9.0 usbredir-server rhel9-AppStream RHEL 9.2 utf8proc-devel rhel9-CRB RHEL 9.0 util-linux-core rhel9-BaseOS RHEL 9.0 uuid-c++ rhel9-AppStream RHEL 9.0 uuid-dce rhel9-AppStream RHEL 9.0 v8-12.4-devel rhel9-AppStream RHEL 9.5 virt-p2v rhel9-AppStream RHEL 9.0 virt-win-reg rhel9-AppStream RHEL 9.0 virtiofsd rhel9-AppStream RHEL 9.0 voikko-fi rhel9-AppStream RHEL 9.0 vulkan-utility-libraries-devel rhel9-CRB RHEL 9.4 vulkan-volk-devel rhel9-AppStream RHEL 9.4 WALinuxAgent-cvm rhel9-CRB RHEL 9.3 wayland-utils rhel9-AppStream RHEL 9.0 waypipe rhel9-AppStream RHEL 9.0 webrtc-audio-processing-devel rhel9-CRB RHEL 9.5 wireguard-tools rhel9-AppStream RHEL 9.0 wireless-regdb rhel9-BaseOS RHEL 9.0 wireplumber rhel9-AppStream RHEL 9.0 wireplumber-libs rhel9-AppStream RHEL 9.0 wpebackend-fdo rhel9-AppStream RHEL 9.0 wpebackend-fdo-devel rhel9-CRB RHEL 9.1 xcb-util-cursor rhel9-AppStream RHEL 9.4 xcb-util-cursor-devel rhel9-AppStream RHEL 9.4 xdg-dbus-proxy rhel9-AppStream RHEL 9.0 xdg-desktop-portal-gnome rhel9-AppStream RHEL 9.1 xfsprogs-xfs_scrub rhel9-AppStream RHEL 9.0 xhtml2fo-style-xsl rhel9-AppStream RHEL 9.0 xkbcomp rhel9-AppStream RHEL 9.0 xmlstarlet rhel9-AppStream RHEL 9.1 xmlto-tex rhel9-AppStream RHEL 9.0 xmlto-xhtml rhel9-AppStream RHEL 9.0 xmvn-tools rhel9-CRB RHEL 9.0 xorg-x11-server-source rhel9-CRB RHEL 9.1 xorg-x11-server-Xwayland-devel rhel9-CRB RHEL 9.5 xxhash rhel9-AppStream RHEL 9.1 xxhash-devel rhel9-CRB RHEL 9.1 xxhash-doc rhel9-CRB RHEL 9.1 xxhash-libs rhel9-AppStream RHEL 9.1 yara rhel9-AppStream RHEL 9.1 
yara-devel rhel9-CRB RHEL 9.1 zram-generator rhel9-AppStream RHEL 9.0
A.2. Package replacements
The following table lists packages that were replaced, renamed, merged, or split:
Original package(s) New package(s) Changed since Note apache-commons-lang (javapackages-tools:201801), apache-commons-lang3 (javapackages-tools:201801) apache-commons-lang3 RHEL 9.0 apache-commons-lang (pki-deps:10.6), apache-commons-lang3 (maven:3.5, maven:3.6) apache-commons-lang3 RHEL 9.0 bind-libs-lite bind-libs RHEL 9.0 bind-lite-devel bind-devel RHEL 9.0 binutils binutils, binutils-gold RHEL 9.0 clutter-gst2 clutter-gst3 RHEL 9.0 crda wireless-regdb RHEL 9.0 dnf-plugin-subscription-manager, subscription-manager subscription-manager RHEL 9.0 evolution-data-server evolution-data-server, evolution-data-server-ui RHEL 9.4 evolution-data-server-devel evolution-data-server-devel, evolution-data-server-ui-devel RHEL 9.4 fapolicyd-dnf-plugin rpm-plugin-fapolicyd RHEL 9.1 fio fio, fio-engine-dev-dax, fio-engine-http, fio-engine-libaio, fio-engine-libpmem, fio-engine-nbd, fio-engine-pmemblk, fio-engine-rados, fio-engine-rbd, fio-engine-rdma RHEL 9.0 fio fio, fio-engine-http, fio-engine-libaio, fio-engine-nbd, fio-engine-rados, fio-engine-rbd, fio-engine-rdma RHEL 9.0 flex-devel libfl-static RHEL 9.0 fontpackages-devel fonts-rpm-macros RHEL 9.0 fontpackages-filesystem fonts-filesystem RHEL 9.0 gcc-toolset-12-binutils gcc-toolset-13-binutils RHEL 9.3 genisoimage xorriso RHEL 9.0 The genisoimage package has been replaced by the xorriso package, which now provides the genisoimage command. glassfish-jaxb-api (pki-deps:10.6) jaxb-api RHEL 9.0 glassfish-jaxb-runtime (pki-deps:10.6) jaxb-impl RHEL 9.0 gnome-session-kiosk-session gnome-kiosk RHEL 9.0 google-crosextra-caladea-fonts ht-caladea-fonts RHEL 9.0 google-crosextra-carlito-fonts google-carlito-fonts RHEL 9.0 google-noto-mono-fonts google-noto-sans-mono-fonts RHEL 9.0 guava (maven:3.6), guava20 (maven:3.5) guava RHEL 9.0 guava20 (javapackages-tools:201801) guava RHEL 9.0 hardlink util-linux-core RHEL 9.0 hesiod compat-hesiod RHEL 9.0 ht-caladea-fonts google-crosextra-caladea-fonts RHEL 9.3 httpcomponents-client (javapackages-tools:201801), jakarta-commons-httpclient (javapackages-tools:201801) httpcomponents-client RHEL 9.0 The jakarta-commons-httpclient package has been replaced by the httpcomponents-client package, which has a slightly different API. You must port code changes from jakarta-commons-httpclient to httpcomponents-client.
httpcomponents-client (maven:3.5, maven:3.6), jakarta-commons-httpclient (pki-deps:10.6) httpcomponents-client RHEL 9.0 ibus-kkc ibus-anthy RHEL 9.0 idm-pki-acme (pki-core:10.6) pki-acme RHEL 9.0 idm-pki-base (pki-core:10.6) pki-base RHEL 9.0 idm-pki-base-java (pki-core:10.6) pki-base-java RHEL 9.0 idm-pki-ca (pki-core:10.6) pki-ca RHEL 9.0 idm-pki-kra (pki-core:10.6) pki-kra RHEL 9.0 idm-pki-server (pki-core:10.6) pki-server RHEL 9.0 idm-pki-symkey (pki-core:10.6) pki-symkey RHEL 9.0 idm-pki-tools (pki-core:10.6) pki-tools RHEL 9.0 idm-tomcatjss idm-jss-tomcat RHEL 9.4 ilmbase imath, openexr-devel RHEL 9.0 initscripts initscripts, initscripts-rename-device, initscripts-service RHEL 9.0 inkscape1 inkscape RHEL 9.0 inkscape1-docs inkscape-docs RHEL 9.0 inkscape1-view inkscape-view RHEL 9.0 ipa-client (idm:client), ipa-client (idm:DL1) ipa-client RHEL 9.0 ipa-client-common (idm:client), ipa-client-common (idm:DL1) ipa-client-common RHEL 9.0 ipa-client-epn (idm:client), ipa-client-epn (idm:DL1) ipa-client-epn RHEL 9.0 ipa-client-samba (idm:client), ipa-client-samba (idm:DL1) ipa-client-samba RHEL 9.0 ipa-common (idm:client), ipa-common (idm:DL1) ipa-common RHEL 9.0 ipa-healthcheck-core (idm:client), ipa-healthcheck-core (idm:DL1) ipa-healthcheck-core RHEL 9.0 ipa-selinux (idm:client), ipa-selinux (idm:DL1) ipa-selinux RHEL 9.0 iptables, iptables-arptables, iptables-ebtables iptables-nft RHEL 9.0 iptables-services iptables-nft-services RHEL 9.0 istack-commons jaxb-istack-commons RHEL 9.0 jackson-annotations (pki-deps:10.6) pki-jackson-annotations RHEL 9.0 jackson-core (pki-deps:10.6) pki-jackson-core RHEL 9.0 jackson-databind (pki-deps:10.6) pki-jackson-databind RHEL 9.0 jackson-jaxrs-json-provider (pki-deps:10.6) pki-jackson-jaxrs-json-provider RHEL 9.0 jackson-jaxrs-providers (pki-deps:10.6) pki-jackson-jaxrs-providers RHEL 9.0 jackson-module-jaxb-annotations (pki-deps:10.6) pki-jackson-module-jaxb-annotations RHEL 9.0 javamail (javapackages-tools:201801) jakarta-mail RHEL 9.0 The javamail package has been replaced with the jakarta-mail package, which is API-compatible. Code changes might be required to port from javamail to jakarta-mail . jss, pki-symkey idm-jss RHEL 9.1 kernel-abi-whitelists kernel-abi-stablelists RHEL 9.0 khmeros-base-fonts khmer-os-content-fonts, khmer-os-system-fonts RHEL 9.0 khmeros-battambang-fonts khmer-os-battambang-fonts RHEL 9.0 khmeros-bokor-fonts khmer-os-bokor-fonts RHEL 9.0 khmeros-handwritten-fonts khmer-os-fasthand-fonts, khmer-os-freehand-fonts RHEL 9.0 khmeros-metal-chrieng-fonts khmer-os-metal-chrieng-fonts RHEL 9.0 khmeros-muol-fonts khmer-os-muol-fonts, khmer-os-muol-pali-fonts RHEL 9.0 khmeros-siemreap-fonts khmer-os-siemreap-fonts RHEL 9.0 ldapjdk idm-ldapjdk RHEL 9.1 libguestfs-tools (virt:rhel) virt-win-reg RHEL 9.0 libguestfs-tools-c (virt:rhel) guestfs-tools RHEL 9.0 libmemcached libmemcached-awesome, libmemcached-awesome-tools RHEL 9.0 The libmemcached library has been replaced by the libmemcached-awesome fork. The package has also been moved from the AppStream repository to the unsupported CodeReady Linux Builder repository. libmemcached-devel libmemcached-awesome-devel RHEL 9.0 libmemcached-libs libmemcached-awesome RHEL 9.0 libvirt-lock-sanlock libvirt-daemon-plugin-sanlock RHEL 9.3 lorax-composer osbuild-composer RHEL 9.0 mailx s-nail RHEL 9.0 The mailx mail processing system has been replaced by s-nail . The s-nail utility is compatible with mailx and adds numerous new features. The mailx package is no longer maintained in upstream. 
maven-artifact-resolver (javapackages-tools:201801), maven-artifact-transfer (javapackages-tools:201801) maven-artifact-transfer RHEL 9.0 The maven-artifact-resolver package has been replaced by the maven-artifact-transfer package, which should be API-compatible. Code changes might be required to port from maven-artifact-resolver to maven-artifact-transfer . mesa-khr-devel libglvnd-devel RHEL 9.0 mesa-libGLES libglvnd-gles RHEL 9.0 mesa-vulkan-devel mesa-vulkan-drivers RHEL 9.0 metacity gnome-kiosk RHEL 9.0 The metacity package has been replaced with the gnome-kiosk package, which has similar functionality. OpenEXR-libs openexr RHEL 9.0 openssl-libs openssl-fips-provider, openssl-libs RHEL 9.4 pacemaker pacemaker, python3-pacemaker RHEL 9.3 paratype-pt-sans-fonts pt-sans-fonts RHEL 9.0 perl (perl:5.24) perl-AutoLoader, perl-AutoSplit, perl-autouse, perl-B, perl-base, perl-Benchmark, perl-blib, perl-Class-Struct, perl-Config-Extensions, perl-DBM_Filter, perl-debugger, perl-deprecate, perl-diagnostics, perl-DirHandle, perl-doc, perl-Dumpvalue, perl-DynaLoader, perl-encoding-warnings, perl-English, perl-ExtUtils-Constant, perl-Fcntl, perl-fields, perl-File-Basename, perl-File-Compare, perl-File-Copy, perl-File-DosGlob, perl-File-Find, perl-File-stat, perl-FileCache, perl-FileHandle, perl-filetest, perl-FindBin, perl-GDBM_File, perl-Getopt-Std, perl-Hash-Util, perl-Hash-Util-FieldHash, perl-I18N-Collate, perl-I18N-Langinfo, perl-I18N-LangTags, perl-if, perl-interpreter, perl-IPC-Open3, perl-less, perl-lib, perl-libs, perl-locale, perl-meta-notation, perl-mro, perl-NDBM_File, perl-Net, perl-, perl-ODBM_File, perl-Opcode, perl-overload, perl-overloading, perl-ph, perl-Pod-Functions, perl-POSIX, perl-Safe, perl-Search-Dict, perl-SelectSaver, perl-sigtrap, perl-sort, perl-subs, perl-Symbol, perl-Sys-Hostname, perl-Term-Complete, perl-Term-ReadLine, perl-Text-Abbrev, perl-Thread, perl-Thread-Semaphore, perl-Tie, perl-Tie-File, perl-Tie-Memoize, perl-Tie-RefHash, perl-Time, perl-Unicode-UCD, perl-User-pwent, perl-vars, perl-vmsish RHEL 9.0 perl-core (perl:5.24) perl RHEL 9.0 perl-interpreter perl-AutoLoader, perl-AutoSplit, perl-autouse, perl-B, perl-base, perl-Benchmark, perl-blib, perl-Class-Struct, perl-Config-Extensions, perl-DBM_Filter, perl-debugger, perl-deprecate, perl-diagnostics, perl-DirHandle, perl-doc, perl-Dumpvalue, perl-DynaLoader, perl-encoding-warnings, perl-English, perl-ExtUtils-Constant, perl-Fcntl, perl-fields, perl-File-Basename, perl-File-Compare, perl-File-Copy, perl-File-DosGlob, perl-File-Find, perl-File-stat, perl-FileCache, perl-FileHandle, perl-filetest, perl-FindBin, perl-GDBM_File, perl-Getopt-Std, perl-Hash-Util, perl-Hash-Util-FieldHash, perl-I18N-Collate, perl-I18N-Langinfo, perl-I18N-LangTags, perl-if, perl-interpreter, perl-IPC-Open3, perl-less, perl-lib, perl-locale, perl-meta-notation, perl-mro, perl-NDBM_File, perl-Net, perl-, perl-ODBM_File, perl-Opcode, perl-overload, perl-overloading, perl-ph, perl-Pod-Functions, perl-POSIX, perl-Safe, perl-Search-Dict, perl-SelectSaver, perl-sigtrap, perl-sort, perl-subs, perl-Symbol, perl-Sys-Hostname, perl-Term-Complete, perl-Term-ReadLine, perl-Text-Abbrev, perl-Thread, perl-Thread-Semaphore, perl-Tie, perl-Tie-File, perl-Tie-Memoize, perl-Tie-RefHash, perl-Time, perl-Unicode-UCD, perl-User-pwent, perl-vars, perl-vmsish RHEL 9.0 php-pecl-xdebug php-pecl-xdebug3 RHEL 9.0 pipewire-jack-audio-connection-kit pipewire-jack-audio-connection-kit, pipewire-jack-audio-connection-kit-libs RHEL 9.4 pki-acme idm-pki-acme RHEL 9.1 
pki-base idm-pki-base RHEL 9.1 pki-base-java idm-pki-java RHEL 9.1 pki-ca idm-pki-ca RHEL 9.1 pki-kra idm-pki-kra RHEL 9.1 pki-server idm-pki-server RHEL 9.1 pki-tools idm-pki-tools RHEL 9.1 platform-python, python2 (python27:2.7), python36 (python36:3.6), python38 (python38:3.8), python39 (python39:3.9) python3 RHEL 9.0 platform-python-debug, python2-debug (python27:2.7), python36-debug (python36:3.6), python38-debug (python38:3.8), python39-debug (python39-devel:3.9) python3-debug RHEL 9.0 platform-python-devel, python2-devel (python27:2.7), python36-devel (python36:3.6), python38-devel (python38:3.8), python39-devel (python39:3.9) python3-devel RHEL 9.0 platform-python-pip, python2-pip (python27:2.7), python3-pip, python38-pip (python38:3.8), python39-pip (python39:3.9) python3-pip RHEL 9.0 platform-python-setuptools, python2-setuptools (python27:2.7), python3-setuptools, python38-setuptools (python38:3.8), python39-setuptools (python39:3.9) python3-setuptools RHEL 9.0 podman (container-tools:rhel8), podman-manpages (container-tools:rhel8) podman RHEL 9.0 podman-catatonit podman RHEL 9.2 The podman-catatonit package has been replaced by functionality within the podman package directly. Note that no additional subpackage is required. podman-manpages (container-tools:rhel8) podman RHEL 9.0 postgresql-upgrade-devel (postgresql:12), postgresql-upgrade-devel (postgresql:13) postgresql-upgrade-devel RHEL 9.0 pulseaudio pipewire-pulseaudio RHEL 9.0 The pulseaudio server implementation has been replaced by the pipewire-pulseaudio implementation. Note that only the server implementation has been switched. The pulseaudio client libraries are still in use. pygobject2 (gimp:2.8) python3-gobject RHEL 9.0 pygobject2-codegen (gimp:2.8) python3-gobject-base RHEL 9.0 pygobject2-devel (gimp:2.8) python3-gobject-devel RHEL 9.0 pygobject3-devel python3-gobject-devel RHEL 9.0 python2-attrs (python27:2.7), python3-attrs, python38-attrs (python38-devel:3.8), python39-attrs (python39-devel:3.9) python3-attrs RHEL 9.0 python2-babel (python27:2.7), python3-babel, python38-babel (python38:3.8) python3-babel RHEL 9.0 python2-chardet (python27:2.7), python3-chardet, python38-chardet (python38:3.8), python39-chardet (python39:3.9) python3-chardet RHEL 9.0 python2-Cython (python27:2.7), python3-Cython, python38-Cython (python38:3.8), python39-Cython (python39-devel:3.9) python3-Cython RHEL 9.0 python2-dns (python27:2.7), python3-dns python3-dns RHEL 9.0 python2-docutils (python27:2.7), python3-docutils python3-docutils (python36:3.6) RHEL 9.0 python2-idna (python27:2.7), python38-idna (python38:3.8), python39-idna (python39:3.9) python3-idna RHEL 9.0 python2-jinja2 (python27:2.7), python3-jinja2, python38-jinja2 (python38:3.8) python3-jinja2 RHEL 9.0 python2-libs (python27:2.7), python3-libs, python38-libs (python38:3.8), python39-libs (python39:3.9) python3-libs RHEL 9.0 python2-lxml (python27:2.7), python3-lxml, python38-lxml (python38:3.8), python39-lxml (python39:3.9) python3-lxml RHEL 9.0 python2-markupsafe (python27:2.7), python3-markupsafe, python38-markupsafe (python38:3.8) python3-markupsafe RHEL 9.0 python2-numpy (python27:2.7), python38-numpy (python38:3.8), python39-numpy (python39:3.9) python3-numpy RHEL 9.0 python2-numpy-f2py (python27:2.7), python38-numpy-f2py (python38:3.8), python39-numpy-f2py (python39:3.9) python3-numpy-f2py RHEL 9.0 python2-pip-wheel (python27:2.7), python3-pip-wheel, python38-pip-wheel (python38:3.8), python39-pip-wheel (python39:3.9) python3-pip-wheel RHEL 9.0 python2-pluggy 
(python27:2.7), python3-pluggy, python38-pluggy (python38-devel:3.8), python39-pluggy (python39-devel:3.9) python3-pluggy RHEL 9.0 python2-psycopg2 (python27:2.7), python38-psycopg2 (python38:3.8), python39-psycopg2 (python39:3.9) python3-psycopg2 RHEL 9.0 python2-py (python27:2.7), python3-py, python38-py (python38-devel:3.8), python39-py (python39-devel:3.9) python3-py RHEL 9.0 python2-pygments (python27:2.7), python3-pygments (python36:3.6) python3-pygments RHEL 9.0 python2-PyMySQL (python27:2.7), python3-PyMySQL (python36:3.6), python38-PyMySQL (python38:3.8), python39-PyMySQL (python39:3.9) python3-PyMySQL RHEL 9.0 python2-pysocks (python27:2.7), python3-pysocks, python38-pysocks (python38:3.8), python39-pysocks (python39:3.9) python3-pysocks RHEL 9.0 python2-pytest (python27:2.7), python3-pytest, python38-pytest (python38-devel:3.8), python39-pytest (python39-devel:3.9) python3-pytest RHEL 9.0 python2-pytz (python27:2.7), python3-pytz, python38-pytz (python38:3.8) python3-pytz RHEL 9.0 python2-pyyaml (python27:2.7), python3-pyyaml, python38-pyyaml (python38:3.8), python39-pyyaml (python39:3.9) python3-pyyaml RHEL 9.0 python2-requests (python27:2.7), python3-requests, python38-requests (python38:3.8), python39-requests (python39:3.9) python3-requests RHEL 9.0 python2-rpm-macros (python27:2.7), python3-rpm-macros, python36-rpm-macros (python36:3.6), python38-rpm-macros (python38:3.8), python39-rpm-macros (python39:3.9) python3-rpm-macros RHEL 9.0 python2-scipy (python27:2.7), python3-scipy (python36:3.6), python38-scipy (python38:3.8), python39-scipy (python39:3.9) python3-scipy RHEL 9.0 python2-setuptools-wheel (python27:2.7), python3-setuptools-wheel, python38-setuptools-wheel (python38:3.8), python39-setuptools-wheel (python39:3.9) python3-setuptools-wheel RHEL 9.0 python2-setuptools_scm (python27:2.7), python3-setuptools_scm python3-setuptools_scm RHEL 9.0 python2-six (python27:2.7), python3-six, python38-six (python38:3.8), python39-six (python39:3.9) python3-six RHEL 9.0 python2-test (python27:2.7), python3-test, python38-test (python38:3.8), python39-test (python39:3.9) python3-test RHEL 9.0 python2-tkinter (python27:2.7), python3-tkinter, python38-tkinter (python38:3.8), python39-tkinter (python39:3.9) python3-tkinter RHEL 9.0 python2-urllib3 (python27:2.7), python3-urllib3, python38-urllib3 (python38:3.8), python39-urllib3 (python39:3.9) python3-urllib3 RHEL 9.0 python2-wheel (python27:2.7), python3-wheel (python36:3.6), python38-wheel (python38:3.8), python39-wheel (python39:3.9) python3-wheel RHEL 9.0 python2-wheel-wheel (python27:2.7), python3-wheel-wheel (python36:3.6), python38-wheel-wheel (python38:3.8), python39-wheel-wheel (python39:3.9) python3-wheel-wheel RHEL 9.0 python3-idle, python38-idle (python38:3.8), python39-idle (python39:3.9) python3-idle RHEL 9.0 python3-idm-pki (pki-core:10.6) python3-pki RHEL 9.0 python3-ipaclient (idm:client), python3-ipaclient (idm:DL1) python3-ipaclient RHEL 9.0 python3-ipalib (idm:client), python3-ipalib (idm:DL1) python3-ipalib RHEL 9.0 python3-jwcrypto (idm:client), python3-jwcrypto (idm:DL1) python3-jwcrypto RHEL 9.0 python3-magic python3-file-magic RHEL 9.0 python3-packaging, python38-packaging (python38-devel:3.8), python39-packaging (python39-devel:3.9) python3-packaging RHEL 9.0 python3-pki python3-idm-pki RHEL 9.1 python3-pyparsing, python38-pyparsing (python38-devel:3.8), python39-pyparsing (python39-devel:3.9) python3-pyparsing RHEL 9.0 python3-pyusb (idm:client), python3-pyusb (idm:DL1) python3-pyusb RHEL 9.0 
python3-qrcode (idm:DL1, idm:client) python3-qrcode-core RHEL 9.0 python3-yubico (idm:client), python3-yubico (idm:DL1) python3-yubico RHEL 9.0 python38-cffi (python38:3.8), python39-cffi (python39:3.9) python3-cffi RHEL 9.0 python38-cryptography (python38:3.8), python39-cryptography (python39:3.9) python3-cryptography RHEL 9.0 python38-mod_wsgi (python38:3.8), python39-mod_wsgi (python39:3.9) python3-mod_wsgi RHEL 9.0 python38-ply (python38:3.8), python39-ply (python39:3.9) python3-ply RHEL 9.0 python38-psutil (python38:3.8), python39-psutil (python39:3.9) python3-psutil RHEL 9.0 python38-pycparser (python38:3.8), python39-pycparser (python39:3.9) python3-pycparser RHEL 9.0 python38-wcwidth (python38-devel:3.8), python39-wcwidth (python39-devel:3.9) python3-wcwidth RHEL 9.0 python39-iniconfig (python39-devel:3.9) python3-iniconfig RHEL 9.0 python39-pybind11 (python39-devel:3.9) python3-pybind11 RHEL 9.0 python39-pybind11-devel (python39-devel:3.9) pybind11-devel RHEL 9.0 python39-toml (python39:3.9) python3-toml RHEL 9.0 qatlib qatlib, qatlib-service RHEL 9.1 qemu-ga-win mingw-qemu-ga-win RHEL 9.3 qemu-kvm ksmtuned, qemu-kvm RHEL 9.0 qemu-kvm-common (virt:rhel) qemu-kvm-common, virtiofsd RHEL 9.0 resource-agents-aliyun, resource-agents-gcp resource-agents-cloud RHEL 9.0 resteasy (pki-deps:10.6) pki-resteasy-client, pki-resteasy-core, pki-resteasy-jackson2-provider, pki-resteasy-jaxb-provider RHEL 9.0 rng-tools jitterentropy, jitterentropy-devel, rng-tools RHEL 9.0 rpm rpm, rpm-plugin-audit RHEL 9.0 rpm-build-libs rpm-build-libs, rpm-sign-libs RHEL 9.0 rsyslog rsyslog, rsyslog-logrotate RHEL 9.0 rt-setup realtime-setup RHEL 9.0 rt-setup realtime-setup RHEL 9.0 rt-tests realtime-tests RHEL 9.0 ruby-irb (ruby:2.5) rubygem-irb RHEL 9.0 rubygem-did_you_mean (ruby:2.5, ruby:2.6) ruby-default-gems RHEL 9.0 rubygem-openssl (ruby:2.5, ruby:2.6, ruby:2.7) ruby-default-gems RHEL 9.0 s390utils-base s390utils-base, s390utils-se-data RHEL 9.4 SDL sdl12-compat RHEL 9.0 SDL-devel sdl12-compat-devel RHEL 9.0 texlive-ifetex, texlive-ifluatex, texlive-ifxetex texlive-iftex RHEL 9.0 texlive-tetex texlive-texlive-scripts RHEL 9.0 tomcatjss idm-tomcatjss RHEL 9.1 trace-cmd libtracecmd, libtracecmd-devel, trace-cmd RHEL 9.0 util-linux util-linux, util-linux-core RHEL 9.0 vala-devel libvala-devel RHEL 9.0 wodim cdrskin RHEL 9.0 The wodim package has been replaced by the cdrskin package. The cdrecord executable provided by cdrskin is compatible with cdrecord provided by wodim. xfsprogs xfsprogs, xfsprogs-xfs_scrub RHEL 9.0 xinetd systemd RHEL 9.0 The xinetd package is not available in RHEL 9. Its functionality is now provided by systemd. For more information, see the Red Hat Knowledgebase solution How to convert xinetd service to systemd. xorg-x11-font-utils mkfontscale RHEL 9.0 xorg-x11-xkb-utils setxkbmap, xkbcomp RHEL 9.0
A.3. Moved packages
The following packages were moved between repositories within RHEL 9:
Package Original repository* Current repository* Changed since aajohan-comfortaa-fonts rhel8-BaseOS rhel9-AppStream RHEL 9.0 adobe-source-code-pro-fonts rhel9-AppStream rhel9-BaseOS RHEL 9.2 alsa-sof-firmware rhel8-BaseOS rhel9-AppStream RHEL 9.0 ant rhel8-CRB rhel9-AppStream RHEL 9.0 ant-antlr rhel8-CRB rhel9-AppStream RHEL 9.0 ant-apache-bcel rhel8-CRB rhel9-AppStream RHEL 9.0 ant-apache-bsf rhel8-CRB rhel9-AppStream RHEL 9.0 ant-apache-oro rhel8-CRB rhel9-AppStream RHEL 9.0 ant-apache-regexp rhel8-CRB rhel9-AppStream RHEL 9.0 ant-apache-resolver rhel8-CRB rhel9-AppStream RHEL 9.0 ant-apache-xalan2 rhel8-CRB rhel9-AppStream RHEL 9.0 ant-commons-logging rhel8-CRB rhel9-AppStream RHEL 9.0 ant-commons-net rhel8-CRB rhel9-AppStream RHEL 9.0 ant-javamail rhel8-CRB rhel9-AppStream RHEL 9.0 ant-jdepend rhel8-CRB rhel9-AppStream RHEL 9.0 ant-jmf rhel8-CRB rhel9-AppStream RHEL 9.0 ant-jsch rhel8-CRB rhel9-AppStream RHEL 9.0 ant-junit rhel8-CRB rhel9-AppStream RHEL 9.0 ant-lib rhel8-CRB rhel9-AppStream RHEL 9.0 ant-swing rhel8-CRB rhel9-AppStream RHEL 9.0 ant-testutil rhel8-CRB rhel9-AppStream RHEL 9.0 ant-xz rhel8-CRB rhel9-AppStream RHEL 9.0 antlr-tool rhel8-CRB rhel9-AppStream RHEL 9.0 apache-commons-cli rhel8-CRB rhel9-AppStream RHEL 9.0 apache-commons-codec rhel8-CRB rhel9-AppStream RHEL 9.0 apache-commons-collections rhel8-AppStream rhel9-CRB RHEL 9.0 apache-commons-compress rhel8-AppStream rhel9-CRB RHEL 9.0 apache-commons-io rhel8-CRB rhel9-AppStream RHEL 9.0 apache-commons-lang3 rhel8-CRB rhel9-AppStream RHEL 9.0 apache-commons-logging rhel8-CRB rhel9-AppStream RHEL 9.0 apache-commons-net rhel8-CRB rhel9-AppStream RHEL 9.0 aspell rhel8-AppStream rhel9-CRB RHEL 9.0 assertj-core rhel8-CRB rhel9-AppStream RHEL 9.0 atinject rhel8-CRB rhel9-AppStream RHEL 9.0 atlas-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 atlas-z14 rhel8-BaseOS rhel9-AppStream RHEL 9.0 audit-libs-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 augeas rhel8-BaseOS rhel9-AppStream RHEL 9.0 augeas-libs rhel8-BaseOS rhel9-AppStream RHEL 9.0 autoconf-archive rhel8-CRB rhel9-AppStream RHEL 9.0 avahi-glib rhel8-BaseOS rhel9-AppStream RHEL 9.0 bcel rhel8-CRB rhel9-AppStream RHEL 9.0 bind-devel rhel8-AppStream rhel9-CRB RHEL 9.0 blktrace rhel8-BaseOS rhel9-AppStream RHEL 9.0 bluez-obexd rhel8-BaseOS rhel9-AppStream RHEL 9.0 boom-boot rhel8-BaseOS rhel9-AppStream RHEL 9.0 boom-boot-conf rhel8-BaseOS rhel9-AppStream RHEL 9.0 boom-boot-grub2 rhel8-BaseOS rhel9-AppStream RHEL 9.0 boost-numpy3 rhel8-CRB rhel9-AppStream RHEL 9.0 boost-python3 rhel8-CRB rhel9-AppStream RHEL 9.0 brotli rhel8-BaseOS rhel9-AppStream RHEL 9.0 bsdtar rhel8-BaseOS rhel9-AppStream RHEL 9.0 bsf rhel8-CRB rhel9-AppStream RHEL 9.0 bzip2-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 c-ares-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 cdi-api rhel8-CRB rhel9-AppStream RHEL 9.0 checkpolicy rhel8-BaseOS rhel9-AppStream RHEL 9.0 conntrack-tools rhel8-BaseOS rhel9-AppStream RHEL 9.0 createrepo_c-devel rhel8-AppStream rhel9-CRB RHEL 9.0 criu-devel rhel8-AppStream rhel9-CRB RHEL 9.0 criu-devel rhel8-AppStream rhel9-CRB RHEL 9.0 cryptsetup-devel rhel8-AppStream rhel9-CRB RHEL 9.0 ctdb rhel8-BaseOS rhel9-ResilientStorage RHEL 9.0 cxl-libs rhel9-AppStream rhel9-BaseOS RHEL 9.5 cyrus-sasl-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 cyrus-sasl-gs2 rhel8-BaseOS rhel9-AppStream RHEL 9.0 cyrus-sasl-ldap rhel8-BaseOS rhel9-AppStream RHEL 9.0 cyrus-sasl-md5 rhel8-BaseOS rhel9-AppStream RHEL 9.0
cyrus-sasl-ntlm rhel8-BaseOS rhel9-AppStream RHEL 9.0 daxctl rhel8-BaseOS rhel9-AppStream RHEL 9.0 dbus-daemon rhel8-BaseOS rhel9-AppStream RHEL 9.0 dbus-glib rhel8-BaseOS rhel9-AppStream RHEL 9.0 dlm-lib rhel8-BaseOS rhel9-AppStream RHEL 9.0 dracut-caps rhel8-BaseOS rhel9-AppStream RHEL 9.0 dracut-live rhel8-BaseOS rhel9-AppStream RHEL 9.0 dtc rhel8-CRB rhel9-AppStream RHEL 9.0 dwarves rhel8-CRB rhel9-AppStream RHEL 9.0 e2fsprogs-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 efivar rhel8-BaseOS rhel9-AppStream RHEL 9.0 elfutils-debuginfod rhel8-BaseOS rhel9-AppStream RHEL 9.0 elfutils-debuginfod-client-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 elfutils-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 elfutils-libelf-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 emacs-filesystem rhel8-BaseOS rhel9-AppStream RHEL 9.0 evolution-data-server-doc rhel8-CRB rhel9-AppStream RHEL 9.0 evolution-data-server-perl rhel8-CRB rhel9-AppStream RHEL 9.0 evolution-data-server-tests rhel8-CRB rhel9-AppStream RHEL 9.0 expat-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 expect rhel8-BaseOS rhel9-AppStream RHEL 9.0 fence-agents-all rhel8-AppStream rhel9-ResilientStorage RHEL 9.0 fence-agents-all rhel8-AppStream rhel9-HighAvailability RHEL 9.0 fence-agents-amt-ws rhel8-AppStream rhel9-ResilientStorage RHEL 9.0 fence-agents-amt-ws rhel8-AppStream rhel9-HighAvailability RHEL 9.0 fence-agents-apc rhel8-AppStream rhel9-ResilientStorage RHEL 9.0 fence-agents-apc rhel8-AppStream rhel9-HighAvailability RHEL 9.0 fence-agents-apc-snmp rhel8-AppStream rhel9-ResilientStorage RHEL 9.0 fence-agents-apc-snmp rhel8-AppStream rhel9-HighAvailability RHEL 9.0 fence-agents-bladecenter rhel8-AppStream rhel9-ResilientStorage RHEL 9.0 fence-agents-bladecenter rhel8-AppStream rhel9-HighAvailability RHEL 9.0 fence-agents-brocade rhel8-AppStream rhel9-ResilientStorage RHEL 9.0 fence-agents-brocade rhel8-AppStream rhel9-HighAvailability RHEL 9.0 fence-agents-cisco-mds rhel8-AppStream rhel9-ResilientStorage RHEL 9.0 fence-agents-cisco-mds rhel8-AppStream rhel9-HighAvailability RHEL 9.0 fence-agents-cisco-ucs rhel8-AppStream rhel9-ResilientStorage RHEL 9.0 fence-agents-cisco-ucs rhel8-AppStream rhel9-HighAvailability RHEL 9.0 fence-agents-drac5 rhel8-AppStream rhel9-ResilientStorage RHEL 9.0 fence-agents-drac5 rhel8-AppStream rhel9-HighAvailability RHEL 9.0 fence-agents-eaton-snmp rhel8-AppStream rhel9-ResilientStorage RHEL 9.0 fence-agents-eaton-snmp rhel8-AppStream rhel9-HighAvailability RHEL 9.0 fence-agents-emerson rhel8-AppStream rhel9-ResilientStorage RHEL 9.0 fence-agents-emerson rhel8-AppStream rhel9-HighAvailability RHEL 9.0 fence-agents-eps rhel8-AppStream rhel9-ResilientStorage RHEL 9.0 fence-agents-eps rhel8-AppStream rhel9-HighAvailability RHEL 9.0 fence-agents-heuristics-ping rhel8-AppStream rhel9-ResilientStorage RHEL 9.0 fence-agents-heuristics-ping rhel8-AppStream rhel9-HighAvailability RHEL 9.0 fence-agents-hpblade rhel8-AppStream rhel9-ResilientStorage RHEL 9.0 fence-agents-hpblade rhel8-AppStream rhel9-HighAvailability RHEL 9.0 fence-agents-ibmblade rhel8-AppStream rhel9-ResilientStorage RHEL 9.0 fence-agents-ibmblade rhel8-AppStream rhel9-HighAvailability RHEL 9.0 fence-agents-ifmib rhel8-AppStream rhel9-ResilientStorage RHEL 9.0 fence-agents-ifmib rhel8-AppStream rhel9-HighAvailability RHEL 9.0 fence-agents-ilo-moonshot rhel8-AppStream rhel9-ResilientStorage RHEL 9.0 fence-agents-ilo-moonshot rhel8-AppStream rhel9-HighAvailability RHEL 9.0 fence-agents-ilo-mp rhel8-AppStream rhel9-ResilientStorage RHEL 9.0 fence-agents-ilo-mp 
rhel8-AppStream rhel9-HighAvailability RHEL 9.0 fence-agents-ilo-ssh rhel8-AppStream rhel9-ResilientStorage RHEL 9.0 fence-agents-ilo-ssh rhel8-AppStream rhel9-HighAvailability RHEL 9.0 fence-agents-ilo2 rhel8-AppStream rhel9-ResilientStorage RHEL 9.0 fence-agents-ilo2 rhel8-AppStream rhel9-HighAvailability RHEL 9.0 fence-agents-intelmodular rhel8-AppStream rhel9-ResilientStorage RHEL 9.0 fence-agents-intelmodular rhel8-AppStream rhel9-HighAvailability RHEL 9.0 fence-agents-ipdu rhel8-AppStream rhel9-ResilientStorage RHEL 9.0 fence-agents-ipdu rhel8-AppStream rhel9-HighAvailability RHEL 9.0 fence-agents-ipmilan rhel8-AppStream rhel9-ResilientStorage RHEL 9.0 fence-agents-ipmilan rhel8-AppStream rhel9-HighAvailability RHEL 9.0 fence-agents-kdump rhel8-AppStream rhel9-ResilientStorage RHEL 9.0 fence-agents-kdump rhel8-AppStream rhel9-HighAvailability RHEL 9.0 fence-agents-lpar rhel8-AppStream rhel9-ResilientStorage RHEL 9.0 fence-agents-lpar rhel8-AppStream rhel9-HighAvailability RHEL 9.0 fence-agents-mpath rhel8-AppStream rhel9-ResilientStorage RHEL 9.0 fence-agents-mpath rhel8-AppStream rhel9-HighAvailability RHEL 9.0 fence-agents-redfish rhel8-AppStream rhel9-ResilientStorage RHEL 9.0 fence-agents-redfish rhel8-AppStream rhel9-HighAvailability RHEL 9.0 fence-agents-rhevm rhel8-AppStream rhel9-ResilientStorage RHEL 9.0 fence-agents-rhevm rhel8-AppStream rhel9-HighAvailability RHEL 9.0 fence-agents-rsa rhel8-AppStream rhel9-ResilientStorage RHEL 9.0 fence-agents-rsa rhel8-AppStream rhel9-HighAvailability RHEL 9.0 fence-agents-rsb rhel8-AppStream rhel9-ResilientStorage RHEL 9.0 fence-agents-rsb rhel8-AppStream rhel9-HighAvailability RHEL 9.0 fence-agents-sbd rhel8-AppStream rhel9-ResilientStorage RHEL 9.0 fence-agents-sbd rhel8-AppStream rhel9-HighAvailability RHEL 9.0 fence-agents-scsi rhel8-AppStream rhel9-ResilientStorage RHEL 9.0 fence-agents-scsi rhel8-AppStream rhel9-HighAvailability RHEL 9.0 fence-agents-vmware-rest rhel8-AppStream rhel9-ResilientStorage RHEL 9.0 fence-agents-vmware-rest rhel8-AppStream rhel9-HighAvailability RHEL 9.0 fence-agents-vmware-soap rhel8-AppStream rhel9-ResilientStorage RHEL 9.0 fence-agents-vmware-soap rhel8-AppStream rhel9-HighAvailability RHEL 9.0 fence-agents-wti rhel8-AppStream rhel9-ResilientStorage RHEL 9.0 fence-agents-wti rhel8-AppStream rhel9-HighAvailability RHEL 9.0 fence-agents-zvm rhel8-AppStream rhel9-ResilientStorage RHEL 9.0 fence-agents-zvm rhel8-AppStream rhel9-HighAvailability RHEL 9.0 flite rhel8-CRB rhel9-AppStream RHEL 9.0 fontconfig rhel8-BaseOS rhel9-AppStream RHEL 9.0 fontconfig-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 freeipmi rhel8-BaseOS rhel9-AppStream RHEL 9.0 freeipmi-bmc-watchdog rhel8-BaseOS rhel9-AppStream RHEL 9.0 freeipmi-ipmidetectd rhel8-BaseOS rhel9-AppStream RHEL 9.0 freeipmi-ipmiseld rhel8-BaseOS rhel9-AppStream RHEL 9.0 freetype-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 fstrm-devel rhel8-AppStream rhel9-CRB RHEL 9.0 fuse-devel rhel8-BaseOS rhel9-CRB RHEL 9.0 fuse3 rhel8-BaseOS rhel9-AppStream RHEL 9.0 fuse3-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 fuse3-libs rhel8-BaseOS rhel9-AppStream RHEL 9.0 fxload rhel8-BaseOS rhel9-AppStream RHEL 9.0 galera rhel8-CRB rhel9-AppStream RHEL 9.0 gdbm rhel8-BaseOS rhel9-CRB RHEL 9.0 gdbm-devel rhel8-BaseOS rhel9-CRB RHEL 9.0 gdisk rhel8-BaseOS rhel9-AppStream RHEL 9.0 gdk-pixbuf2 rhel8-BaseOS rhel9-AppStream RHEL 9.0 geoclue2-demos rhel8-AppStream rhel9-CRB RHEL 9.0 gettext-common-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 gettext-devel rhel8-BaseOS rhel9-AppStream RHEL 
9.0 gfs2-utils rhel8-BaseOS rhel9-ResilientStorage RHEL 9.0 ghostscript-doc rhel8-CRB rhel9-AppStream RHEL 9.0 ghostscript-tools-dvipdf rhel8-CRB rhel9-AppStream RHEL 9.0 glib2-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 glib2-doc rhel8-CRB rhel9-AppStream RHEL 9.0 glib2-tests rhel8-BaseOS rhel9-AppStream RHEL 9.0 glibc-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 glibc-gconv-extra rhel8-AppStream rhel9-BaseOS RHEL 9.0 glibc-headers rhel8-BaseOS rhel9-AppStream RHEL 9.0 glibc-locale-source rhel8-BaseOS rhel9-AppStream RHEL 9.0 glusterfs rhel8-BaseOS rhel9-AppStream RHEL 9.0 glusterfs-client-xlators rhel8-BaseOS rhel9-AppStream RHEL 9.0 glusterfs-fuse rhel8-BaseOS rhel9-AppStream RHEL 9.0 glusterfs-libs rhel8-BaseOS rhel9-AppStream RHEL 9.0 glusterfs-rdma rhel8-BaseOS rhel9-AppStream RHEL 9.0 gmp-c++ rhel8-BaseOS rhel9-AppStream RHEL 9.0 gmp-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 gnome-common rhel8-CRB rhel9-AppStream RHEL 9.0 gnu-efi rhel8-CRB rhel9-AppStream RHEL 9.0 gnupg2-smime rhel8-BaseOS rhel9-AppStream RHEL 9.0 gobject-introspection-devel rhel8-AppStream rhel9-CRB RHEL 9.0 google-guice rhel8-CRB rhel9-AppStream RHEL 9.0 google-roboto-slab-fonts rhel8-CRB rhel9-AppStream RHEL 9.0 gperf rhel8-CRB rhel9-AppStream RHEL 9.0 gpgmepp rhel8-BaseOS rhel9-AppStream RHEL 9.0 graphviz-doc rhel8-CRB rhel9-AppStream RHEL 9.0 graphviz-python3 rhel8-CRB rhel9-AppStream RHEL 9.0 groff rhel8-CRB rhel9-AppStream RHEL 9.0 gsl-devel rhel8-AppStream rhel9-CRB RHEL 9.0 gsl-devel rhel9-CRB rhel9-AppStream RHEL 9.1 gtkspell3 rhel8-AppStream rhel9-CRB RHEL 9.0 hamcrest rhel8-CRB rhel9-AppStream RHEL 9.0 hivex rhel8-CRB rhel9-AppStream RHEL 9.0 hivex-devel rhel8-AppStream rhel9-CRB RHEL 9.0 httpcomponents-client rhel8-CRB rhel9-AppStream RHEL 9.0 httpcomponents-core rhel8-CRB rhel9-AppStream RHEL 9.0 hwloc-devel rhel8-CRB rhel9-AppStream RHEL 9.0 hyphen-devel rhel8-CRB rhel9-AppStream RHEL 9.0 icu rhel8-BaseOS rhel9-AppStream RHEL 9.0 infiniband-diags rhel8-BaseOS rhel9-AppStream RHEL 9.0 ipset-service rhel8-BaseOS rhel9-AppStream RHEL 9.0 iptables-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 iputils-ninfod rhel8-BaseOS rhel9-AppStream RHEL 9.0 jakarta-oro rhel8-CRB rhel9-AppStream RHEL 9.0 jansi rhel8-CRB rhel9-AppStream RHEL 9.0 jansson-devel rhel8-AppStream rhel9-CRB RHEL 9.0 javapackages-filesystem rhel8-CRB rhel9-AppStream RHEL 9.0 javapackages-tools rhel8-CRB rhel9-AppStream RHEL 9.0 jcl-over-slf4j rhel8-CRB rhel9-AppStream RHEL 9.0 jdepend rhel8-CRB rhel9-AppStream RHEL 9.0 jmc-core rhel9-AppStream rhel9-CRB RHEL 9.2 jq rhel9-AppStream rhel9-BaseOS RHEL 9.4 jsch rhel8-CRB rhel9-AppStream RHEL 9.0 json-c-devel rhel8-AppStream rhel9-CRB RHEL 9.0 jsoup rhel8-CRB rhel9-AppStream RHEL 9.0 jsr-305 rhel8-CRB rhel9-AppStream RHEL 9.0 Judy rhel8-CRB rhel9-AppStream RHEL 9.0 junit rhel8-CRB rhel9-AppStream RHEL 9.0 jzlib rhel8-CRB rhel9-AppStream RHEL 9.0 kabi-dw rhel8-BaseOS rhel9-AppStream RHEL 9.0 kbd-legacy rhel8-BaseOS rhel9-AppStream RHEL 9.0 kbd-legacy rhel9-AppStream rhel9-BaseOS RHEL 9.3 kernel-cross-headers rhel8-BaseOS rhel9-CRB RHEL 9.0 kernel-debug-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 kernel-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 kernel-doc rhel8-BaseOS rhel9-AppStream RHEL 9.0 kernel-headers rhel8-BaseOS rhel9-AppStream RHEL 9.0 kernel-zfcpdump-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 keyutils-libs-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 krb5-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 ksc rhel8-BaseOS rhel9-CRB RHEL 9.0 lcms2-devel rhel8-CRB rhel9-AppStream RHEL 9.0 
libacl-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 libaio-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 libappstream-glib rhel8-BaseOS rhel9-AppStream RHEL 9.0 libasan rhel8-BaseOS rhel9-AppStream RHEL 9.0 libatomic_ops rhel8-AppStream rhel9-CRB RHEL 9.0 libattr-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 libbabeltrace rhel8-BaseOS rhel9-AppStream RHEL 9.0 libblkid-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 libcap-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 libcap-ng-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 libcap-ng-python3 rhel8-BaseOS rhel9-AppStream RHEL 9.0 libcom_err-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 libcurl-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 libdatrie-devel rhel8-CRB rhel9-AppStream RHEL 9.0 libdb-utils rhel8-BaseOS rhel9-AppStream RHEL 9.0 libdwarves1 rhel8-CRB rhel9-AppStream RHEL 9.0 libedit-devel rhel8-CRB rhel9-AppStream RHEL 9.0 liberation-fonts rhel8-BaseOS rhel9-AppStream RHEL 9.0 liberation-fonts-common rhel8-BaseOS rhel9-AppStream RHEL 9.0 liberation-mono-fonts rhel8-BaseOS rhel9-AppStream RHEL 9.0 liberation-narrow-fonts rhel8-BaseOS rhel9-AppStream RHEL 9.0 liberation-sans-fonts rhel8-BaseOS rhel9-AppStream RHEL 9.0 liberation-serif-fonts rhel8-BaseOS rhel9-AppStream RHEL 9.0 libev rhel8-AppStream rhel9-BaseOS RHEL 9.0 libevent-doc rhel8-BaseOS rhel9-AppStream RHEL 9.0 libfabric rhel8-BaseOS rhel9-AppStream RHEL 9.0 libfdisk-devel rhel8-BaseOS rhel9-CRB RHEL 9.0 libffi-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 libgcrypt-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 libgomp-offload-nvptx rhel8-BaseOS rhel9-AppStream RHEL 9.0 libgpg-error-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 libgudev-devel rhel9-CRB rhel9-AppStream RHEL 9.3 libguestfs-devel rhel8-AppStream rhel9-CRB RHEL 9.0 libguestfs-gobject rhel8-AppStream rhel9-CRB RHEL 9.0 libguestfs-gobject-devel rhel8-AppStream rhel9-CRB RHEL 9.0 libguestfs-man-pages-ja rhel8-AppStream rhel9-CRB RHEL 9.0 libguestfs-man-pages-uk rhel8-AppStream rhel9-CRB RHEL 9.0 libguestfs-winsupport rhel8-CRB rhel9-AppStream RHEL 9.0 libica-devel rhel8-BaseOS rhel9-CRB RHEL 9.0 libical rhel8-BaseOS rhel9-AppStream RHEL 9.0 libicu-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 libiscsi rhel8-CRB rhel9-AppStream RHEL 9.0 libiscsi-devel rhel8-AppStream rhel9-CRB RHEL 9.0 libiscsi-utils rhel8-CRB rhel9-AppStream RHEL 9.0 libitm rhel8-BaseOS rhel9-AppStream RHEL 9.0 libjose-devel rhel8-AppStream rhel9-CRB RHEL 9.0 libkeepalive rhel8-BaseOS rhel9-AppStream RHEL 9.0 libldb-devel rhel8-BaseOS rhel9-CRB RHEL 9.0 liblockfile rhel9-AppStream rhel9-BaseOS RHEL 9.1 liblsan rhel8-BaseOS rhel9-AppStream RHEL 9.0 libluksmeta-devel rhel8-AppStream rhel9-CRB RHEL 9.0 libmaxminddb-devel rhel8-AppStream rhel9-CRB RHEL 9.0 libmicrohttpd rhel8-BaseOS rhel9-AppStream RHEL 9.0 libmng-devel rhel8-AppStream rhel9-CRB RHEL 9.0 libmount-devel rhel8-CRB rhel9-AppStream RHEL 9.0 libnbd rhel8-CRB rhel9-AppStream RHEL 9.0 libnbd-devel rhel8-AppStream rhel9-CRB RHEL 9.0 libnetfilter_cthelper rhel8-BaseOS rhel9-AppStream RHEL 9.0 libnetfilter_cttimeout rhel8-BaseOS rhel9-AppStream RHEL 9.0 libnetfilter_queue rhel8-BaseOS rhel9-AppStream RHEL 9.0 libnl3-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 libnsl2 rhel8-BaseOS rhel9-AppStream RHEL 9.0 libocxl rhel8-BaseOS rhel9-AppStream RHEL 9.0 libogg-devel rhel8-AppStream rhel9-CRB RHEL 9.0 libpmem-debug rhel8-CRB rhel9-AppStream RHEL 9.0 libpmemblk-debug rhel8-CRB rhel9-AppStream RHEL 9.0 libpmemlog-debug rhel8-CRB rhel9-AppStream RHEL 9.0 libpmemobj-debug rhel8-CRB rhel9-AppStream RHEL 9.0 libpmempool-debug 
rhel8-CRB rhel9-AppStream RHEL 9.0 libpng-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 libpsl-devel rhel8-CRB rhel9-AppStream RHEL 9.0 libpsm2 rhel8-BaseOS rhel9-AppStream RHEL 9.0 libqb rhel8-BaseOS rhel9-AppStream RHEL 9.0 libqb-devel rhel8-BaseOS rhel9-ResilientStorage RHEL 9.0 libqb-devel rhel8-BaseOS rhel9-HighAvailability RHEL 9.0 librabbitmq rhel8-BaseOS rhel9-AppStream RHEL 9.0 librtas-devel rhel8-BaseOS rhel9-CRB RHEL 9.0 libsecret rhel8-BaseOS rhel9-AppStream RHEL 9.0 libsecret-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 libselinux-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 libsepol-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 libservicelog-devel rhel8-BaseOS rhel9-CRB RHEL 9.0 libslirp-devel rhel8-AppStream rhel9-CRB RHEL 9.0 libslirp-devel rhel8-AppStream rhel9-CRB RHEL 9.0 libslirp-devel rhel8-AppStream rhel9-CRB RHEL 9.0 libsoup rhel8-BaseOS rhel9-AppStream RHEL 9.0 libstemmer rhel8-BaseOS rhel9-AppStream RHEL 9.0 libstoragemgmt rhel8-BaseOS rhel9-AppStream RHEL 9.0 libstoragemgmt-arcconf-plugin rhel8-BaseOS rhel9-AppStream RHEL 9.0 libstoragemgmt-hpsa-plugin rhel8-BaseOS rhel9-AppStream RHEL 9.0 libstoragemgmt-local-plugin rhel8-BaseOS rhel9-AppStream RHEL 9.0 libstoragemgmt-megaraid-plugin rhel8-BaseOS rhel9-AppStream RHEL 9.0 libstoragemgmt-smis-plugin rhel8-BaseOS rhel9-AppStream RHEL 9.0 libstoragemgmt-udev rhel8-BaseOS rhel9-AppStream RHEL 9.0 libtalloc-devel rhel8-BaseOS rhel9-CRB RHEL 9.0 libtdb-devel rhel8-BaseOS rhel9-CRB RHEL 9.0 libtevent-devel rhel8-BaseOS rhel9-CRB RHEL 9.0 libthai-devel rhel8-CRB rhel9-AppStream RHEL 9.0 libtirpc-devel rhel8-BaseOS rhel9-CRB RHEL 9.0 libtool-ltdl rhel8-BaseOS rhel9-AppStream RHEL 9.0 libtool-ltdl-devel rhel8-AppStream rhel9-ResilientStorage RHEL 9.0 libtool-ltdl-devel rhel8-AppStream rhel9-CRB RHEL 9.0 libtool-ltdl-devel rhel8-AppStream rhel9-HighAvailability RHEL 9.0 libtsan rhel8-BaseOS rhel9-AppStream RHEL 9.0 libubsan rhel8-BaseOS rhel9-AppStream RHEL 9.0 liburing rhel8-BaseOS rhel9-AppStream RHEL 9.0 libusb rhel8-BaseOS rhel9-AppStream RHEL 9.0 libusbx-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 libuuid-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 libverto-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 libvirt rhel8-CRB rhel9-AppStream RHEL 9.0 libvirt-client rhel8-CRB rhel9-AppStream RHEL 9.0 libvirt-client-qemu rhel9-CRB rhel9-AppStream RHEL 9.4 libvirt-daemon rhel8-CRB rhel9-AppStream RHEL 9.0 libvirt-daemon-config-network rhel8-CRB rhel9-AppStream RHEL 9.0 libvirt-daemon-config-nwfilter rhel8-CRB rhel9-AppStream RHEL 9.0 libvirt-daemon-driver-interface rhel8-CRB rhel9-AppStream RHEL 9.0 libvirt-daemon-driver-network rhel8-CRB rhel9-AppStream RHEL 9.0 libvirt-daemon-driver-nodedev rhel8-CRB rhel9-AppStream RHEL 9.0 libvirt-daemon-driver-nwfilter rhel8-CRB rhel9-AppStream RHEL 9.0 libvirt-daemon-driver-secret rhel8-CRB rhel9-AppStream RHEL 9.0 libvirt-daemon-driver-storage rhel8-CRB rhel9-AppStream RHEL 9.0 libvirt-daemon-driver-storage-core rhel8-CRB rhel9-AppStream RHEL 9.0 libvirt-daemon-driver-storage-disk rhel8-CRB rhel9-AppStream RHEL 9.0 libvirt-daemon-driver-storage-iscsi rhel8-CRB rhel9-AppStream RHEL 9.0 libvirt-daemon-driver-storage-logical rhel8-CRB rhel9-AppStream RHEL 9.0 libvirt-daemon-driver-storage-mpath rhel8-CRB rhel9-AppStream RHEL 9.0 libvirt-daemon-driver-storage-scsi rhel8-CRB rhel9-AppStream RHEL 9.0 libvirt-dbus rhel8-CRB rhel9-AppStream RHEL 9.0 libvirt-devel rhel8-AppStream rhel9-CRB RHEL 9.0 libvirt-docs rhel8-AppStream rhel9-CRB RHEL 9.0 libvirt-libs rhel8-CRB rhel9-AppStream RHEL 9.0 
libvirt-lock-sanlock rhel8-AppStream rhel9-CRB RHEL 9.0 libvirt-nss rhel8-CRB rhel9-AppStream RHEL 9.0 libwinpr-devel rhel8-AppStream rhel9-CRB RHEL 9.0 libxcrypt-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 libxslt rhel8-BaseOS rhel9-AppStream RHEL 9.0 libXxf86vm-devel rhel8-AppStream rhel9-CRB RHEL 9.0 libzfcphbaapi-docs rhel8-BaseOS rhel9-AppStream RHEL 9.0 libzip-devel rhel8-AppStream rhel9-CRB RHEL 9.0 libzip-devel rhel8-AppStream rhel9-CRB RHEL 9.0 libzip-devel rhel8-AppStream rhel9-CRB RHEL 9.0 libzip-devel rhel8-AppStream rhel9-CRB RHEL 9.0 libzstd-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 lksctp-tools-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 lksctp-tools-doc rhel8-BaseOS rhel9-AppStream RHEL 9.0 lm_sensors rhel8-BaseOS rhel9-AppStream RHEL 9.0 lm_sensors-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 lm_sensors-libs rhel8-BaseOS rhel9-AppStream RHEL 9.0 logwatch rhel8-BaseOS rhel9-AppStream RHEL 9.0 lua-guestfs rhel8-AppStream rhel9-CRB RHEL 9.0 lua-posix rhel8-CRB rhel9-AppStream RHEL 9.0 lvm2-dbusd rhel8-BaseOS rhel9-AppStream RHEL 9.0 lvm2-lockd rhel8-BaseOS rhel9-AppStream RHEL 9.0 lynx rhel8-CRB rhel9-AppStream RHEL 9.0 lz4-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 lzo-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 lzo-minilzo rhel8-BaseOS rhel9-AppStream RHEL 9.0 m4 rhel8-BaseOS rhel9-AppStream RHEL 9.0 mariadb rhel8-CRB rhel9-AppStream RHEL 9.0 mariadb-backup rhel8-CRB rhel9-AppStream RHEL 9.0 mariadb-common rhel8-CRB rhel9-AppStream RHEL 9.0 mariadb-devel rhel8-AppStream rhel9-CRB RHEL 9.0 mariadb-devel rhel8-AppStream rhel9-CRB RHEL 9.0 mariadb-embedded rhel8-CRB rhel9-AppStream RHEL 9.0 mariadb-embedded-devel rhel8-AppStream rhel9-CRB RHEL 9.0 mariadb-embedded-devel rhel8-AppStream rhel9-CRB RHEL 9.0 mariadb-errmsg rhel8-CRB rhel9-AppStream RHEL 9.0 mariadb-gssapi-server rhel8-CRB rhel9-AppStream RHEL 9.0 mariadb-oqgraph-engine rhel8-CRB rhel9-AppStream RHEL 9.0 mariadb-server rhel8-CRB rhel9-AppStream RHEL 9.0 mariadb-server-galera rhel8-CRB rhel9-AppStream RHEL 9.0 mariadb-server-utils rhel8-CRB rhel9-AppStream RHEL 9.0 mariadb-test rhel8-AppStream rhel9-CRB RHEL 9.0 mariadb-test rhel8-AppStream rhel9-CRB RHEL 9.0 maven rhel8-CRB rhel9-AppStream RHEL 9.0 maven-lib rhel8-CRB rhel9-AppStream RHEL 9.0 maven-resolver rhel8-CRB rhel9-AppStream RHEL 9.0 maven-shared-utils rhel8-CRB rhel9-AppStream RHEL 9.0 maven-wagon rhel8-CRB rhel9-AppStream RHEL 9.0 memstrack rhel8-BaseOS rhel9-AppStream RHEL 9.0 memtest86+ rhel8-BaseOS rhel9-AppStream RHEL 9.0 mesa-libgbm-devel rhel9-CRB rhel9-AppStream RHEL 9.3 mesa-libOSMesa rhel8-AppStream rhel9-CRB RHEL 9.0 mobile-broadband-provider-info rhel8-BaseOS rhel9-AppStream RHEL 9.0 multilib-rpm-config rhel8-AppStream rhel9-CRB RHEL 9.0 mvapich2-psm2-devel rhel8-AppStream rhel9-CRB RHEL 9.0 mysql-devel rhel8-AppStream rhel9-CRB RHEL 9.0 mysql-libs rhel8-AppStream rhel9-CRB RHEL 9.0 mysql-test rhel8-AppStream rhel9-CRB RHEL 9.0 nbdfuse rhel8-CRB rhel9-AppStream RHEL 9.0 nbdkit-devel rhel8-AppStream rhel9-CRB RHEL 9.0 nbdkit-example-plugins rhel8-AppStream rhel9-CRB RHEL 9.0 ncurses-c++-libs rhel8-BaseOS rhel9-AppStream RHEL 9.0 ncurses-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 ncurses-term rhel8-BaseOS rhel9-AppStream RHEL 9.0 net-snmp-libs rhel8-BaseOS rhel9-AppStream RHEL 9.0 NetworkManager-config-connectivity-redhat rhel8-BaseOS rhel9-AppStream RHEL 9.0 NetworkManager-dispatcher-routing-rules rhel8-BaseOS rhel9-AppStream RHEL 9.0 NetworkManager-ovs rhel8-BaseOS rhel9-AppStream RHEL 9.0 NetworkManager-ppp rhel8-BaseOS rhel9-AppStream RHEL 
9.0 nginx-mod-devel rhel8-AppStream rhel9-CRB RHEL 9.0 nispor-devel rhel8-AppStream rhel9-CRB RHEL 9.0 nss_db rhel8-BaseOS rhel9-CRB RHEL 9.0 ntsysv rhel8-BaseOS rhel9-AppStream RHEL 9.0 numactl-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 oniguruma rhel9-AppStream rhel9-BaseOS RHEL 9.4 objectweb-asm rhel8-CRB rhel9-AppStream RHEL 9.0 opa-address-resolution rhel8-BaseOS rhel9-AppStream RHEL 9.0 opa-basic-tools rhel8-BaseOS rhel9-AppStream RHEL 9.0 opa-fastfabric rhel8-BaseOS rhel9-AppStream RHEL 9.0 opa-fm rhel8-BaseOS rhel9-AppStream RHEL 9.0 opa-libopamgt rhel8-BaseOS rhel9-AppStream RHEL 9.0 opal-firmware rhel8-BaseOS rhel9-AppStream RHEL 9.0 opal-utils rhel8-BaseOS rhel9-AppStream RHEL 9.0 openblas-openmp rhel8-CRB rhel9-AppStream RHEL 9.0 openblas-threads rhel8-AppStream rhel9-CRB RHEL 9.0 opencl-headers rhel8-CRB rhel9-AppStream RHEL 9.0 opencsd rhel8-BaseOS rhel9-AppStream RHEL 9.0 OpenIPMI rhel8-BaseOS rhel9-AppStream RHEL 9.0 OpenIPMI-lanserv rhel8-BaseOS rhel9-AppStream RHEL 9.0 OpenIPMI-libs rhel8-BaseOS rhel9-AppStream RHEL 9.0 openldap-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 openssl-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 openssl-perl rhel8-BaseOS rhel9-AppStream RHEL 9.0 openwsman-client rhel8-AppStream rhel9-CRB RHEL 9.0 openwsman-python3 rhel8-AppStream rhel9-ResilientStorage RHEL 9.0 openwsman-python3 rhel8-AppStream rhel9-HighAvailability RHEL 9.0 opus-devel rhel8-AppStream rhel9-CRB RHEL 9.0 ostree-devel rhel8-AppStream rhel9-CRB RHEL 9.0 owasp-java-encoder rhel9-AppStream rhel9-CRB RHEL 9.2 p11-kit-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 p11-kit-server rhel8-BaseOS rhel9-AppStream RHEL 9.0 pacemaker-cluster-libs rhel8-AppStream rhel9-ResilientStorage RHEL 9.0 pacemaker-cluster-libs rhel8-AppStream rhel9-HighAvailability RHEL 9.0 pacemaker-libs rhel8-AppStream rhel9-ResilientStorage RHEL 9.0 pacemaker-libs rhel8-AppStream rhel9-HighAvailability RHEL 9.0 pacemaker-schemas rhel8-AppStream rhel9-ResilientStorage RHEL 9.0 pacemaker-schemas rhel8-AppStream rhel9-HighAvailability RHEL 9.0 pam-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 pam_cifscreds rhel8-BaseOS rhel9-AppStream RHEL 9.0 pam_ssh_agent_auth rhel8-BaseOS rhel9-AppStream RHEL 9.0 patch rhel8-BaseOS rhel9-AppStream RHEL 9.0 pciutils-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 pcre-cpp rhel8-BaseOS rhel9-AppStream RHEL 9.0 pcre-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 pcre-utf16 rhel8-BaseOS rhel9-AppStream RHEL 9.0 pcre-utf32 rhel8-BaseOS rhel9-AppStream RHEL 9.0 pcre2-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 pcre2-utf16 rhel8-BaseOS rhel9-AppStream RHEL 9.0 pcre2-utf32 rhel8-BaseOS rhel9-AppStream RHEL 9.0 perf rhel8-BaseOS rhel9-AppStream RHEL 9.0 perl-Algorithm-Diff rhel8-BaseOS rhel9-AppStream RHEL 9.0 perl-Archive-Tar rhel8-BaseOS rhel9-AppStream RHEL 9.0 perl-Carp rhel8-BaseOS rhel9-AppStream RHEL 9.0 perl-Clone rhel8-CRB rhel9-AppStream RHEL 9.0 perl-Compress-Raw-Bzip2 rhel8-BaseOS rhel9-AppStream RHEL 9.0 perl-Compress-Raw-Zlib rhel8-BaseOS rhel9-AppStream RHEL 9.0 perl-constant rhel8-BaseOS rhel9-AppStream RHEL 9.0 perl-Data-Dumper rhel8-BaseOS rhel9-AppStream RHEL 9.0 perl-Date-Manip rhel8-BaseOS rhel9-AppStream RHEL 9.0 perl-DBD-SQLite rhel8-BaseOS rhel9-AppStream RHEL 9.0 perl-DBI rhel8-BaseOS rhel9-AppStream RHEL 9.0 perl-Digest-SHA1 rhel8-CRB rhel9-AppStream RHEL 9.0 perl-Errno rhel8-BaseOS rhel9-AppStream RHEL 9.0 perl-Exporter rhel8-BaseOS rhel9-AppStream RHEL 9.0 perl-Exporter-Tiny rhel8-CRB rhel9-AppStream RHEL 9.0 perl-File-Path rhel8-BaseOS rhel9-AppStream RHEL 9.0 
perl-File-Temp rhel8-BaseOS rhel9-AppStream RHEL 9.0 perl-Getopt-Long rhel8-BaseOS rhel9-AppStream RHEL 9.0 perl-hivex rhel8-CRB rhel9-AppStream RHEL 9.0 perl-HTTP-Tiny rhel8-BaseOS rhel9-AppStream RHEL 9.0 perl-Importer rhel8-CRB rhel9-AppStream RHEL 9.0 perl-interpreter rhel8-BaseOS rhel9-AppStream RHEL 9.0 perl-IO rhel8-BaseOS rhel9-AppStream RHEL 9.0 perl-IO-Compress rhel8-BaseOS rhel9-AppStream RHEL 9.0 perl-IO-String rhel8-AppStream rhel9-CRB RHEL 9.0 perl-IO-Zlib rhel8-BaseOS rhel9-AppStream RHEL 9.0 perl-libs rhel8-BaseOS rhel9-AppStream RHEL 9.0 perl-List-MoreUtils rhel8-CRB rhel9-AppStream RHEL 9.0 perl-List-MoreUtils-XS rhel8-CRB rhel9-AppStream RHEL 9.0 perl-macros rhel8-BaseOS rhel9-AppStream RHEL 9.0 perl-Math-Complex rhel8-BaseOS rhel9-AppStream RHEL 9.0 perl-MIME-Base64 rhel8-BaseOS rhel9-AppStream RHEL 9.0 perl-MIME-Charset rhel8-CRB rhel9-AppStream RHEL 9.0 perl-Module-Pluggable rhel8-AppStream rhel9-CRB RHEL 9.0 perl-Module-Runtime rhel8-AppStream rhel9-CRB RHEL 9.0 perl-parent rhel8-BaseOS rhel9-AppStream RHEL 9.0 perl-Parse-Yapp rhel8-BaseOS rhel9-CRB RHEL 9.0 perl-PathTools rhel8-BaseOS rhel9-AppStream RHEL 9.0 perl-Pod-Escapes rhel8-BaseOS rhel9-AppStream RHEL 9.0 perl-Pod-Parser rhel8-AppStream rhel9-CRB RHEL 9.0 perl-Pod-Parser rhel8-AppStream rhel9-CRB RHEL 9.0 perl-Pod-Parser rhel8-AppStream rhel9-CRB RHEL 9.0 perl-Pod-Parser rhel8-AppStream rhel9-CRB RHEL 9.0 perl-Pod-Perldoc rhel8-BaseOS rhel9-AppStream RHEL 9.0 perl-Pod-Simple rhel8-BaseOS rhel9-AppStream RHEL 9.0 perl-Pod-Usage rhel8-BaseOS rhel9-AppStream RHEL 9.0 perl-podlators rhel8-BaseOS rhel9-AppStream RHEL 9.0 perl-Scalar-List-Utils rhel8-BaseOS rhel9-AppStream RHEL 9.0 perl-Socket rhel8-BaseOS rhel9-AppStream RHEL 9.0 perl-Storable rhel8-BaseOS rhel9-AppStream RHEL 9.0 perl-Sys-CPU rhel8-BaseOS rhel9-AppStream RHEL 9.0 perl-Sys-MemInfo rhel8-BaseOS rhel9-AppStream RHEL 9.0 perl-Term-ANSIColor rhel8-BaseOS rhel9-AppStream RHEL 9.0 perl-Term-Cap rhel8-BaseOS rhel9-AppStream RHEL 9.0 perl-Term-Size-Any rhel8-CRB rhel9-AppStream RHEL 9.0 perl-Term-Size-Perl rhel8-CRB rhel9-AppStream RHEL 9.0 perl-Term-Table rhel8-CRB rhel9-AppStream RHEL 9.0 perl-Text-Diff rhel8-BaseOS rhel9-AppStream RHEL 9.0 perl-Text-ParseWords rhel8-BaseOS rhel9-AppStream RHEL 9.0 perl-Text-Tabs+Wrap rhel8-BaseOS rhel9-AppStream RHEL 9.0 perl-threads rhel8-BaseOS rhel9-AppStream RHEL 9.0 perl-threads-shared rhel8-BaseOS rhel9-AppStream RHEL 9.0 perl-Time-Local rhel8-BaseOS rhel9-AppStream RHEL 9.0 perl-Unicode-LineBreak rhel8-CRB rhel9-AppStream RHEL 9.0 perl-Unicode-Normalize rhel8-BaseOS rhel9-AppStream RHEL 9.0 plexus-cipher rhel8-CRB rhel9-AppStream RHEL 9.0 plexus-classworlds rhel8-CRB rhel9-AppStream RHEL 9.0 plexus-containers-component-annotations rhel8-CRB rhel9-AppStream RHEL 9.0 plexus-interpolation rhel8-CRB rhel9-AppStream RHEL 9.0 plexus-sec-dispatcher rhel8-CRB rhel9-AppStream RHEL 9.0 plexus-utils rhel8-CRB rhel9-AppStream RHEL 9.0 plotutils rhel8-CRB rhel9-AppStream RHEL 9.0 pmix-devel rhel8-CRB rhel9-AppStream RHEL 9.0 policycoreutils-dbus rhel8-BaseOS rhel9-AppStream RHEL 9.0 policycoreutils-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 policycoreutils-python-utils rhel8-BaseOS rhel9-AppStream RHEL 9.0 polkit-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 polkit-docs rhel8-BaseOS rhel9-AppStream RHEL 9.0 poppler-cpp rhel8-CRB rhel9-AppStream RHEL 9.0 poppler-qt5 rhel9-CRB rhel9-AppStream RHEL 9.1 popt-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 postfix rhel8-BaseOS rhel9-AppStream RHEL 9.0 postgresql-server-devel 
rhel8-AppStream rhel9-CRB RHEL 9.0 postgresql-server-devel rhel8-AppStream rhel9-CRB RHEL 9.0 postgresql-server-devel rhel8-AppStream rhel9-CRB RHEL 9.0 postgresql-server-devel rhel8-AppStream rhel9-CRB RHEL 9.0 postgresql-test rhel8-AppStream rhel9-CRB RHEL 9.0 postgresql-test rhel8-AppStream rhel9-CRB RHEL 9.0 postgresql-test rhel8-AppStream rhel9-CRB RHEL 9.0 postgresql-test rhel8-AppStream rhel9-CRB RHEL 9.0 powerpc-utils rhel8-BaseOS rhel9-AppStream RHEL 9.0 ppc64-diag rhel8-BaseOS rhel9-AppStream RHEL 9.0 protobuf-c rhel8-AppStream rhel9-BaseOS RHEL 9.0 protobuf-c-compiler rhel8-AppStream rhel9-CRB RHEL 9.0 protobuf-c-devel rhel8-AppStream rhel9-CRB RHEL 9.0 protobuf-compiler rhel8-AppStream rhel9-CRB RHEL 9.0 ps_mem rhel8-BaseOS rhel9-AppStream RHEL 9.0 publicsuffix-list rhel8-BaseOS rhel9-AppStream RHEL 9.0 python-cups-doc rhel8-CRB rhel9-AppStream RHEL 9.0 python3-audit rhel8-BaseOS rhel9-AppStream RHEL 9.0 python3-boom rhel8-BaseOS rhel9-AppStream RHEL 9.0 python3-cffi rhel8-BaseOS rhel9-AppStream RHEL 9.0 python3-cffi rhel9-AppStream rhel9-BaseOS RHEL 9.2 python3-configobj rhel8-BaseOS rhel9-AppStream RHEL 9.0 python3-cryptography rhel8-BaseOS rhel9-AppStream RHEL 9.0 python3-cryptography rhel9-AppStream rhel9-BaseOS RHEL 9.2 python3-docutils rhel8-AppStream rhel9-CRB RHEL 9.0 python3-gobject-base rhel8-AppStream rhel9-CRB RHEL 9.0 python3-hivex rhel8-AppStream rhel9-CRB RHEL 9.0 python3-idle rhel8-AppStream rhel9-CRB RHEL 9.0 python3-iniconfig rhel9-CRB rhel9-AppStream RHEL 9.2 python3-ipatests rhel8-AppStream rhel9-CRB RHEL 9.0 python3-iscsi-initiator-utils rhel8-BaseOS rhel9-AppStream RHEL 9.0 python3-libnbd rhel8-CRB rhel9-AppStream RHEL 9.0 python3-libproxy rhel8-BaseOS rhel9-AppStream RHEL 9.0 python3-libselinux rhel8-BaseOS rhel9-AppStream RHEL 9.0 python3-libsemanage rhel8-BaseOS rhel9-AppStream RHEL 9.0 python3-libstoragemgmt rhel8-BaseOS rhel9-AppStream RHEL 9.0 python3-libvirt rhel8-CRB rhel9-AppStream RHEL 9.0 python3-markdown rhel9-CRB rhel9-BaseOS RHEL 9.4 python3-oauthlib rhel8-BaseOS rhel9-AppStream RHEL 9.0 python3-packaging rhel8-CRB rhel9-AppStream RHEL 9.0 python3-pexpect rhel8-AppStream rhel9-BaseOS RHEL 9.0 python3-pluggy rhel8-AppStream rhel9-CRB RHEL 9.0 python3-pluggy rhel9-CRB rhel9-AppStream RHEL 9.2 python3-ply rhel8-BaseOS rhel9-AppStream RHEL 9.0 python3-ply rhel9-AppStream rhel9-BaseOS RHEL 9.2 python3-policycoreutils rhel8-BaseOS rhel9-AppStream RHEL 9.0 python3-ptyprocess rhel8-AppStream rhel9-BaseOS RHEL 9.0 python3-pwquality rhel8-BaseOS rhel9-AppStream RHEL 9.0 python3-py rhel8-AppStream rhel9-CRB RHEL 9.0 python3-py rhel9-CRB rhel9-AppStream RHEL 9.2 python3-pycparser rhel8-BaseOS rhel9-AppStream RHEL 9.0 python3-pycparser rhel9-AppStream rhel9-BaseOS RHEL 9.2 python3-pygments rhel8-AppStream rhel9-CRB RHEL 9.0 python3-pytest rhel8-AppStream rhel9-CRB RHEL 9.0 python3-pytest rhel9-CRB rhel9-AppStream RHEL 9.2 python3-pyverbs rhel8-BaseOS rhel9-AppStream RHEL 9.0 python3-pywbem rhel8-BaseOS rhel9-AppStream RHEL 9.0 python3-requests-oauthlib rhel8-BaseOS rhel9-AppStream RHEL 9.0 python3-rtslib rhel8-BaseOS rhel9-AppStream RHEL 9.0 python3-ruamel-yaml rhel9-CRB rhel9-AppStream RHEL 9.4 python3-ruamel-yaml-clib rhel9-CRB rhel9-AppStream RHEL 9.4 python3-solv rhel8-BaseOS rhel9-AppStream RHEL 9.0 python3-test rhel8-BaseOS rhel9-CRB RHEL 9.0 python3-test rhel8-AppStream rhel9-CRB RHEL 9.0 python3-wcwidth rhel9-CRB rhel9-AppStream RHEL 9.1 python3-wheel rhel8-AppStream rhel9-CRB RHEL 9.0 python3-wheel-wheel rhel8-AppStream rhel9-CRB RHEL 9.0 qclib 
rhel8-BaseOS rhel9-AppStream RHEL 9.0 qclib-devel rhel8-BaseOS rhel9-CRB RHEL 9.0 qgpgme rhel8-AppStream rhel9-CRB RHEL 9.0 qt5-qtquickcontrols2-devel rhel8-CRB rhel9-AppStream RHEL 9.0 qt5-qtserialbus-devel rhel8-CRB rhel9-AppStream RHEL 9.0 qt5-qtwayland-devel rhel8-CRB rhel9-AppStream RHEL 9.0 quota-doc rhel8-BaseOS rhel9-AppStream RHEL 9.0 quota-nld rhel8-BaseOS rhel9-AppStream RHEL 9.0 quota-rpc rhel8-BaseOS rhel9-AppStream RHEL 9.0 quota-warnquota rhel8-BaseOS rhel9-AppStream RHEL 9.0 rasdaemon rhel8-BaseOS rhel9-AppStream RHEL 9.0 rdma-core-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 readline-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 redhat-indexhtml rhel8-BaseOS rhel9-AppStream RHEL 9.0 redhat-logos rhel8-BaseOS rhel9-AppStream RHEL 9.0 redhat-logos-httpd rhel8-BaseOS rhel9-AppStream RHEL 9.0 regexp rhel8-CRB rhel9-AppStream RHEL 9.0 rpcgen rhel8-CRB rhel9-AppStream RHEL 9.0 rpm-apidocs rhel8-BaseOS rhel9-AppStream RHEL 9.0 rpm-cron rhel8-BaseOS rhel9-AppStream RHEL 9.0 rpm-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 rpm-plugin-ima rhel8-BaseOS rhel9-AppStream RHEL 9.0 rpm-plugin-syslog rhel8-BaseOS rhel9-AppStream RHEL 9.0 rpm-plugin-systemd-inhibit rhel8-BaseOS rhel9-AppStream RHEL 9.0 rsync-daemon rhel8-BaseOS rhel9-AppStream RHEL 9.0 ruby-doc rhel8-AppStream rhel9-CRB RHEL 9.0 ruby-doc rhel8-AppStream rhel9-CRB RHEL 9.0 ruby-doc rhel8-AppStream rhel9-CRB RHEL 9.0 ruby-doc rhel8-AppStream rhel9-CRB RHEL 9.0 ruby-hivex rhel8-AppStream rhel9-CRB RHEL 9.0 ruby-libguestfs rhel8-AppStream rhel9-CRB RHEL 9.0 rubygem-mysql2-doc rhel8-AppStream rhel9-CRB RHEL 9.0 rubygem-mysql2-doc rhel8-AppStream rhel9-CRB RHEL 9.0 rubygem-mysql2-doc rhel8-AppStream rhel9-CRB RHEL 9.0 rubygem-mysql2-doc rhel8-AppStream rhel9-CRB RHEL 9.0 rubygem-pg-doc rhel8-AppStream rhel9-CRB RHEL 9.0 rubygem-pg-doc rhel8-AppStream rhel9-CRB RHEL 9.0 rubygem-pg-doc rhel8-AppStream rhel9-CRB RHEL 9.0 rubygem-pg-doc rhel8-AppStream rhel9-CRB RHEL 9.0 s390utils-base rhel8-BaseOS rhel9-AppStream RHEL 9.0 samba-client rhel8-BaseOS rhel9-AppStream RHEL 9.0 samba-krb5-printing rhel8-BaseOS rhel9-AppStream RHEL 9.0 samba-pidl rhel8-BaseOS rhel9-CRB RHEL 9.0 samba-test rhel8-BaseOS rhel9-CRB RHEL 9.0 samba-test-libs rhel8-BaseOS rhel9-CRB RHEL 9.0 samba-winbind-clients rhel8-BaseOS rhel9-AppStream RHEL 9.0 samba-winbind-krb5-locator rhel8-BaseOS rhel9-AppStream RHEL 9.0 samba-winexe rhel8-BaseOS rhel9-AppStream RHEL 9.0 sbd rhel8-AppStream rhel9-ResilientStorage RHEL 9.0 sbd rhel8-AppStream rhel9-HighAvailability RHEL 9.0 SDL2 rhel8-CRB rhel9-AppStream RHEL 9.0 SDL2-devel rhel8-CRB rhel9-AppStream RHEL 9.0 selinux-policy-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 sendmail-milter rhel8-AppStream rhel9-CRB RHEL 9.0 sgabios rhel8-CRB rhel9-AppStream RHEL 9.0 sgml-common rhel8-BaseOS rhel9-AppStream RHEL 9.0 sgpio rhel8-BaseOS rhel9-AppStream RHEL 9.0 shim-unsigned-aarch64 rhel8-CRB rhel9-AppStream RHEL 9.0 slf4j rhel8-CRB rhel9-AppStream RHEL 9.0 slf4j-jdk14 rhel8-CRB rhel9-AppStream RHEL 9.0 smc-tools rhel8-BaseOS rhel9-AppStream RHEL 9.0 snappy-devel rhel9-CRB rhel9-AppStream RHEL 9.5 sombok rhel8-CRB rhel9-AppStream RHEL 9.0 speech-dispatcher-doc rhel8-CRB rhel9-AppStream RHEL 9.0 spice-protocol rhel8-AppStream rhel9-CRB RHEL 9.0 sqlite rhel8-BaseOS rhel9-AppStream RHEL 9.0 sqlite-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 supermin-devel rhel8-AppStream rhel9-CRB RHEL 9.0 swig rhel8-AppStream rhel9-CRB RHEL 9.0 swig rhel8-AppStream rhel9-CRB RHEL 9.0 swig-doc rhel8-AppStream rhel9-CRB RHEL 9.0 swig-doc rhel8-AppStream 
rhel9-CRB RHEL 9.0 swig-gdb rhel8-AppStream rhel9-CRB RHEL 9.0 swig-gdb rhel8-AppStream rhel9-CRB RHEL 9.0 syslinux-tftpboot rhel8-BaseOS rhel9-AppStream RHEL 9.0 systemd-boot-unsigned rhel9-CRB rhel9-AppStream RHEL 9.5 systemd-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 systemd-journal-remote rhel8-BaseOS rhel9-AppStream RHEL 9.0 target-restore rhel8-BaseOS rhel9-AppStream RHEL 9.0 tcl rhel8-AppStream rhel9-CRB RHEL 9.0 tcl-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 tcl-doc rhel8-BaseOS rhel9-AppStream RHEL 9.0 tix rhel8-AppStream rhel9-CRB RHEL 9.0 tmpwatch rhel8-BaseOS rhel9-AppStream RHEL 9.0 tpm2-abrmd rhel8-BaseOS rhel9-AppStream RHEL 9.0 tpm2-abrmd-selinux rhel8-BaseOS rhel9-AppStream RHEL 9.0 tpm2-tss-devel rhel8-BaseOS rhel9-CRB RHEL 9.0 tuned-profiles-atomic rhel8-BaseOS rhel9-AppStream RHEL 9.0 tuned-profiles-mssql rhel8-BaseOS rhel9-AppStream RHEL 9.0 tuned-profiles-oracle rhel8-BaseOS rhel9-AppStream RHEL 9.0 turbojpeg rhel8-AppStream rhel9-CRB RHEL 9.0 unixODBC-devel rhel8-AppStream rhel9-CRB RHEL 9.0 usbredir-devel rhel8-AppStream rhel9-CRB RHEL 9.0 uuidd rhel8-BaseOS rhel9-AppStream RHEL 9.0 varnish-devel rhel8-AppStream rhel9-CRB RHEL 9.0 velocity rhel8-AppStream rhel9-CRB RHEL 9.0 vhostmd rhel8-AppStream rhel9-SAP-Solutions RHEL 9.0 vhostmd rhel8-AppStream rhel9-SAP-NetWeaver RHEL 9.0 vim-filesystem rhel8-AppStream rhel9-BaseOS RHEL 9.0 virt-v2v-man-pages-ja rhel8-AppStream rhel9-CRB RHEL 9.0 virt-v2v-man-pages-uk rhel8-AppStream rhel9-CRB RHEL 9.0 vm-dump-metrics rhel8-BaseOS rhel9-SAP-Solutions RHEL 9.0 vm-dump-metrics rhel8-BaseOS rhel9-SAP-NetWeaver RHEL 9.0 volume_key-devel rhel8-AppStream rhel9-CRB RHEL 9.0 watchdog rhel8-BaseOS rhel9-AppStream RHEL 9.0 web-assets-filesystem rhel8-CRB rhel9-AppStream RHEL 9.0 xalan-j2 rhel8-CRB rhel9-AppStream RHEL 9.0 xcb-util-image-devel rhel9-CRB rhel9-AppStream RHEL 9.4 xcb-util-renderutil-devel rhel9-CRB rhel9-AppStream RHEL 9.4 xerces-j2 rhel8-CRB rhel9-AppStream RHEL 9.0 xfsprogs-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 xhtml1-dtds rhel8-CRB rhel9-AppStream RHEL 9.0 xml-common rhel8-BaseOS rhel9-AppStream RHEL 9.0 xml-commons-apis rhel8-CRB rhel9-AppStream RHEL 9.0 xml-commons-resolver rhel8-CRB rhel9-AppStream RHEL 9.0 xmlrpc-c rhel8-BaseOS rhel9-AppStream RHEL 9.0 xmlrpc-c-client rhel8-BaseOS rhel9-AppStream RHEL 9.0 xorg-x11-drv-evdev-devel rhel8-AppStream rhel9-CRB RHEL 9.0 xz-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 xz-java rhel8-CRB rhel9-AppStream RHEL 9.0 xz-lzma-compat rhel8-CRB rhel9-AppStream RHEL 9.0 zlib-devel rhel8-BaseOS rhel9-AppStream RHEL 9.0 zstd rhel8-AppStream rhel9-BaseOS RHEL 9.0 *This table uses abbreviated names for the repository ID. Use the following examples to help identify the full repository ID, where <arch> is the specific architecture: rhel9-BaseOS: rhel-9-for- <arch> -baseos-rpms, rhel-9-for- <arch> -baseos-eus-rpms, rhel-9-for- <arch> -baseos-e4s-rpms. rhel9-AppStream: rhel-9-for- <arch> -appstream-rpms, rhel-9-for- <arch> -appstream-eus-rpms, rhel-9-for- <arch> -appstream-e4s-rpms. rhel9-CRB: codeready-builder-for-rhel-9- <arch> -rpms, codeready-builder-for-rhel-9- <arch> -eus-rpms. rhel9-SAP-Solutions: rhel-9-for- <arch> -sap-solutions-rpms, rhel-9-for- <arch> -sap-solutions-eus-rpms, rhel-9-for- <arch> -sap-solutions-e4s-rpms. rhel9-SAP-NetWeaver: rhel-9-for- <arch> -sap-netweaver-rpms, rhel-9-for- <arch> -sap-netweaver-eus-rpms, rhel-9-for- <arch> -sap-netweaver-e4s-rpms. A.4. 
Removed packages The following packages are part of RHEL 8 but are not distributed with RHEL 9: Package Note abrt abrt-addon-ccpp abrt-addon-kerneloops abrt-addon-pstoreoops abrt-addon-vmcore abrt-addon-xorg abrt-cli abrt-console-notification abrt-dbus abrt-desktop abrt-gui abrt-gui-libs abrt-libs abrt-tui adobe-source-sans-pro-fonts-3.02803.el9.noarch.rpm alsa-plugins-pulseaudio alsa-sof-firmware-debug amanda amanda-client amanda-libs amanda-server ant-apache-log4j ant-contrib ant-contrib-javadoc ant-javadoc ant-manual antlr-C++ antlr-javadoc antlr-manual antlr3 antlr32 aopalliance aopalliance aopalliance-javadoc apache-commons-beanutils-javadoc apache-commons-cli-javadoc apache-commons-codec-javadoc apache-commons-collections-javadoc apache-commons-collections-testframework apache-commons-compress-javadoc apache-commons-exec apache-commons-exec-javadoc apache-commons-io-javadoc apache-commons-jxpath apache-commons-jxpath apache-commons-jxpath-javadoc apache-commons-lang-javadoc apache-commons-lang3-javadoc apache-commons-logging-javadoc apache-commons-net-javadoc apache-commons-parent apache-ivy apache-ivy-javadoc apache-parent apache-resource-bundles apache-sshd apiguardian aqute-bnd-javadoc arpwatch aspnetcore-runtime-3.0 aspnetcore-runtime-3.1 aspnetcore-runtime-5.0 aspnetcore-targeting-pack-3.0 aspnetcore-targeting-pack-3.1 aspnetcore-targeting-pack-5.0 assertj-core-javadoc atinject-javadoc atinject-tck authd auto autoconf213 autogen autogen-libopts autogen-libopts-devel avahi-ui avahi-ui-devel avahi-ui-gtk3 awscli base64coder bash-doc batik batik-css batik-util bcel-javadoc bea-stax bea-stax-api beust-jcommander-javadoc bind-export-devel bind-export-libs bind-pkcs11 Instead of the named-pkcs11 service, append -E pkcs11 to named.service . Use pkcs11-tool from the opensc package to manage pkcs11 tokens or stored keys. 
bind-pkcs11-devel bind-pkcs11-libs bind-pkcs11-utils bind-sdb bind-sdb-chroot bitmap-console-fonts bitmap-fixed-fonts bitmap-fonts-compat bitmap-lucida-typewriter-fonts bluez-hid2hci bnd-maven-plugin boost-jam boost-signals bouncycastle bpg-algeti-fonts bpg-chveulebrivi-fonts bpg-classic-fonts bpg-courier-fonts bpg-courier-s-fonts bpg-dedaena-block-fonts bpg-dejavu-sans-fonts bpg-elite-fonts bpg-excelsior-caps-fonts bpg-excelsior-condenced-fonts bpg-excelsior-fonts bpg-fonts-common bpg-glaho-fonts bpg-gorda-fonts bpg-ingiri-fonts bpg-irubaqidze-fonts bpg-mikhail-stephan-fonts bpg-mrgvlovani-caps-fonts bpg-mrgvlovani-fonts bpg-nateli-caps-fonts bpg-nateli-condenced-fonts bpg-nateli-fonts bpg-nino-medium-cond-fonts bpg-nino-medium-fonts bpg-sans-fonts bpg-sans-medium-fonts bpg-sans-modern-fonts bpg-sans-regular-fonts bpg-serif-fonts bpg-serif-modern-fonts bpg-ucnobi-fonts brlapi-java bsf-javadoc bsh bsh-javadoc bsh-manual buildnumber-maven-plugin byaccj byaccj-debuginfo byaccj-debugsource cal10n cal10n-javadoc cbi-plugins cdi-api-javadoc cdparanoia cdparanoia-devel cdparanoia-libs cdrdao celt051 celt051-devel cgdcbxd cglib-javadoc clutter-devel clutter-doc clutter-gst3-devel clutter-gtk-devel cmirror codehaus-parent codemodel cogl-devel cogl-doc compat-exiv2-026 compat-guile18 compat-guile18-devel compat-hwloc1 compat-libpthread-nonshared compat-libtiff3 compat-openssl10 compat-sap-c++-10 compat-sap-c++-11 compat-sap-c++-9 crash-ptdump-command ctags ctags-etags culmus-keteryg-fonts culmus-shofar-fonts custodia cyrus-imapd-vzic dbus-c++ dbus-c++-devel dbus-c++-glib dbxtool dejavu-fonts-common dhcp-libs directory-maven-plugin directory-maven-plugin-javadoc dirsplit dleyna-connector-dbus dleyna-core dleyna-renderer dleyna-server dnf-plugin-spacewalk dnssec-trigger dnssec-trigger-panel dotnet dotnet-apphost-pack-3.0 dotnet-apphost-pack-3.1 dotnet-apphost-pack-5.0 dotnet-build-reference-packages dotnet-host-fxr-2.1 dotnet-hostfxr-3.0 dotnet-hostfxr-3.1 dotnet-hostfxr-5.0 dotnet-runtime-2.1 dotnet-runtime-3.0 dotnet-runtime-3.1 dotnet-runtime-5.0 dotnet-sdk-2.1 dotnet-sdk-2.1.5xx dotnet-sdk-3.0 dotnet-sdk-3.1 dotnet-sdk-3.1-source-built-artifacts dotnet-sdk-5.0 dotnet-sdk-5.0-source-built-artifacts dotnet-targeting-pack-3.0 dotnet-targeting-pack-3.1 dotnet-targeting-pack-5.0 dotnet-templates-3.0 dotnet-templates-3.1 dotnet-templates-5.0 dotnet5.0-build-reference-packages dptfxtract drpm drpm-devel dump The dump package providing the dump utility has been removed. You can use the tar , dd , or bacula backup utility instead. 
dvd+rw-tools dyninst-static easymock-javadoc eclipse-ecf eclipse-ecf-core eclipse-ecf-runtime eclipse-emf eclipse-emf-core eclipse-emf-runtime eclipse-emf-xsd eclipse-equinox-osgi eclipse-jdt eclipse-license eclipse-p2-discovery eclipse-pde eclipse-platform eclipse-swt ed25519-java ee4j-parent elfutils-devel-static elfutils-libelf-devel-static elinks emacs-terminal emoji-picker enca enca-devel environment-modules-compat evemu evemu-libs evince-browser-plugin exec-maven-plugin exec-maven-plugin-javadoc farstream02 felix-gogo-command felix-gogo-runtime felix-gogo-shell felix-osgi-compendium felix-osgi-compendium-javadoc felix-osgi-core felix-osgi-core-javadoc felix-osgi-foundation felix-osgi-foundation-javadoc felix-parent felix-scr felix-utils-javadoc file-roller fipscheck fipscheck-devel fipscheck-lib fonts-tweak-tool forge-parent freeradius-mysql freeradius-perl freeradius-postgresql freeradius-sqlite freeradius-unixODBC frei0r-devel frei0r-plugins frei0r-plugins-opencv fuse-sshfs fusesource-pom future gamin gamin-devel gavl gcc-toolset-10 gcc-toolset-10-annobin gcc-toolset-10-binutils gcc-toolset-10-binutils-devel gcc-toolset-10-build gcc-toolset-10-dwz gcc-toolset-10-dyninst gcc-toolset-10-dyninst-devel gcc-toolset-10-elfutils gcc-toolset-10-elfutils-debuginfod-client gcc-toolset-10-elfutils-debuginfod-client-devel gcc-toolset-10-elfutils-devel gcc-toolset-10-elfutils-libelf gcc-toolset-10-elfutils-libelf-devel gcc-toolset-10-elfutils-libs gcc-toolset-10-gcc gcc-toolset-10-gcc-c++ gcc-toolset-10-gcc-gdb-plugin gcc-toolset-10-gcc-gfortran gcc-toolset-10-gcc-plugin-devel gcc-toolset-10-gdb gcc-toolset-10-gdb-doc gcc-toolset-10-gdb-gdbserver gcc-toolset-10-libasan-devel gcc-toolset-10-libatomic-devel gcc-toolset-10-libitm-devel gcc-toolset-10-liblsan-devel gcc-toolset-10-libquadmath-devel gcc-toolset-10-libstdc++-devel gcc-toolset-10-libstdc++-docs gcc-toolset-10-libtsan-devel gcc-toolset-10-libubsan-devel gcc-toolset-10-ltrace gcc-toolset-10-make gcc-toolset-10-make-devel gcc-toolset-10-perftools gcc-toolset-10-runtime gcc-toolset-10-strace gcc-toolset-10-systemtap gcc-toolset-10-systemtap-client gcc-toolset-10-systemtap-devel gcc-toolset-10-systemtap-initscript gcc-toolset-10-systemtap-runtime gcc-toolset-10-systemtap-sdt-devel gcc-toolset-10-systemtap-server gcc-toolset-10-toolchain gcc-toolset-10-valgrind gcc-toolset-10-valgrind-devel gcc-toolset-11 gcc-toolset-11-annobin-annocheck gcc-toolset-11-annobin-docs gcc-toolset-11-annobin-plugin-gcc gcc-toolset-11-binutils gcc-toolset-11-binutils-devel gcc-toolset-11-build gcc-toolset-11-dwz gcc-toolset-11-dyninst gcc-toolset-11-dyninst-devel gcc-toolset-11-elfutils gcc-toolset-11-elfutils-debuginfod-client gcc-toolset-11-elfutils-debuginfod-client-devel gcc-toolset-11-elfutils-devel gcc-toolset-11-elfutils-libelf gcc-toolset-11-elfutils-libelf-devel gcc-toolset-11-elfutils-libs gcc-toolset-11-gcc gcc-toolset-11-gcc-c++ gcc-toolset-11-gcc-gdb-plugin gcc-toolset-11-gcc-gfortran gcc-toolset-11-gcc-plugin-devel gcc-toolset-11-gdb gcc-toolset-11-gdb-doc gcc-toolset-11-gdb-gdbserver gcc-toolset-11-libasan-devel gcc-toolset-11-libatomic-devel gcc-toolset-11-libgccjit gcc-toolset-11-libgccjit-devel gcc-toolset-11-libgccjit-docs gcc-toolset-11-libitm-devel gcc-toolset-11-liblsan-devel gcc-toolset-11-libquadmath-devel gcc-toolset-11-libstdc++-devel gcc-toolset-11-libstdc++-docs gcc-toolset-11-libtsan-devel gcc-toolset-11-libubsan-devel gcc-toolset-11-ltrace gcc-toolset-11-make gcc-toolset-11-make-devel gcc-toolset-11-perftools gcc-toolset-11-runtime 
gcc-toolset-11-strace gcc-toolset-11-systemtap gcc-toolset-11-systemtap-client gcc-toolset-11-systemtap-devel gcc-toolset-11-systemtap-initscript gcc-toolset-11-systemtap-runtime gcc-toolset-11-systemtap-sdt-devel gcc-toolset-11-systemtap-server gcc-toolset-11-toolchain gcc-toolset-11-valgrind gcc-toolset-11-valgrind-devel gcc-toolset-12-annobin-annocheck gcc-toolset-12-annobin-docs gcc-toolset-12-annobin-plugin-gcc gcc-toolset-12-binutils-devel gcc-toolset-12-binutils-gold gcc-toolset-9 gcc-toolset-9-annobin gcc-toolset-9-binutils gcc-toolset-9-binutils-devel gcc-toolset-9-build gcc-toolset-9-dwz gcc-toolset-9-dyninst gcc-toolset-9-dyninst-devel gcc-toolset-9-dyninst-doc gcc-toolset-9-dyninst-static gcc-toolset-9-dyninst-testsuite gcc-toolset-9-elfutils gcc-toolset-9-elfutils-devel gcc-toolset-9-elfutils-libelf gcc-toolset-9-elfutils-libelf-devel gcc-toolset-9-elfutils-libs gcc-toolset-9-gcc gcc-toolset-9-gcc-c++ gcc-toolset-9-gcc-gdb-plugin gcc-toolset-9-gcc-gfortran gcc-toolset-9-gcc-plugin-devel gcc-toolset-9-gdb gcc-toolset-9-gdb-doc gcc-toolset-9-gdb-gdbserver gcc-toolset-9-libasan-devel gcc-toolset-9-libatomic-devel gcc-toolset-9-libitm-devel gcc-toolset-9-liblsan-devel gcc-toolset-9-libquadmath-devel gcc-toolset-9-libstdc++-devel gcc-toolset-9-libstdc++-docs gcc-toolset-9-libtsan-devel gcc-toolset-9-libubsan-devel gcc-toolset-9-ltrace gcc-toolset-9-make gcc-toolset-9-make-devel gcc-toolset-9-perftools gcc-toolset-9-runtime gcc-toolset-9-strace gcc-toolset-9-systemtap gcc-toolset-9-systemtap-client gcc-toolset-9-systemtap-devel gcc-toolset-9-systemtap-initscript gcc-toolset-9-systemtap-runtime gcc-toolset-9-systemtap-sdt-devel gcc-toolset-9-systemtap-server gcc-toolset-9-toolchain gcc-toolset-9-valgrind gcc-toolset-9-valgrind-devel GConf2 GConf2-devel gegl genwqe-tools genwqe-vpd genwqe-zlib genwqe-zlib-devel geoipupdate geronimo-annotation geronimo-annotation geronimo-annotation-javadoc geronimo-jms geronimo-jms-javadoc geronimo-jpa geronimo-jpa-javadoc geronimo-parent-poms gfbgraph gflags gflags-devel glassfish-annotation-api glassfish-annotation-api glassfish-annotation-api-javadoc glassfish-el glassfish-fastinfoset glassfish-jaxb-core glassfish-jaxb-txw2 glassfish-jsp glassfish-jsp-api glassfish-jsp-api glassfish-jsp-api-javadoc glassfish-legal glassfish-master-pom glassfish-servlet-api glassfish-servlet-api glassfish-servlet-api-javadoc glew-devel glib2-fam glog glog-devel gmock gmock-devel gnome-abrt gnome-boxes gnome-menus-devel gnome-online-miners gnome-shell-extension-dash-to-panel gnome-shell-extension-disable-screenshield gnome-shell-extension-horizontal-workspaces gnome-shell-extension-no-hot-corner gnome-shell-extension-window-grouper gnome-themes-standard gnu-free-fonts-common gnu-free-mono-fonts gnu-free-sans-fonts gnu-free-serif-fonts gnuplot gnuplot-common gnuplot-doc google-droid-kufi-fonts google-gson google-guice-javadoc google-noto-kufi-arabic-fonts google-noto-naskh-arabic-fonts google-noto-naskh-arabic-ui-fonts google-noto-nastaliq-urdu-fonts google-noto-sans-balinese-fonts google-noto-sans-bamum-fonts google-noto-sans-batak-fonts google-noto-sans-buginese-fonts google-noto-sans-buhid-fonts google-noto-sans-canadian-aboriginal-fonts google-noto-sans-cham-fonts google-noto-sans-cuneiform-fonts google-noto-sans-cypriot-fonts google-noto-sans-gothic-fonts google-noto-sans-gurmukhi-ui-fonts google-noto-sans-hanunoo-fonts google-noto-sans-inscriptional-pahlavi-fonts google-noto-sans-inscriptional-parthian-fonts google-noto-sans-javanese-fonts 
google-noto-sans-lepcha-fonts google-noto-sans-limbu-fonts google-noto-sans-linear-b-fonts google-noto-sans-lisu-fonts google-noto-sans-mandaic-fonts google-noto-sans-meetei-mayek-fonts google-noto-sans-mongolian-fonts google-noto-sans-myanmar-fonts google-noto-sans-myanmar-ui-fonts google-noto-sans-new-tai-lue-fonts google-noto-sans-ogham-fonts google-noto-sans-ol-chiki-fonts google-noto-sans-old-italic-fonts google-noto-sans-old-persian-fonts google-noto-sans-oriya-fonts google-noto-sans-oriya-ui-fonts google-noto-sans-phags-pa-fonts google-noto-sans-rejang-fonts google-noto-sans-runic-fonts google-noto-sans-samaritan-fonts google-noto-sans-saurashtra-fonts google-noto-sans-sundanese-fonts google-noto-sans-syloti-nagri-fonts google-noto-sans-syriac-eastern-fonts google-noto-sans-syriac-estrangela-fonts google-noto-sans-syriac-western-fonts google-noto-sans-tagalog-fonts google-noto-sans-tagbanwa-fonts google-noto-sans-tai-le-fonts google-noto-sans-tai-tham-fonts google-noto-sans-tai-viet-fonts google-noto-sans-tibetan-fonts google-noto-sans-tifinagh-fonts google-noto-sans-ui-fonts google-noto-sans-yi-fonts google-noto-serif-bengali-fonts google-noto-serif-devanagari-fonts google-noto-serif-gujarati-fonts google-noto-serif-kannada-fonts google-noto-serif-malayalam-fonts google-noto-serif-tamil-fonts google-noto-serif-telugu-fonts gphoto2 gsl-devel gssntlmssp gtest gtest-devel gtkmm24 gtkmm24-devel gtkmm24-docs gtksourceview3 gtksourceview3-devel gtkspell gtkspell-devel guava20-javadoc guava20-testlib guice-assistedinject guice-bom guice-extensions guice-grapher guice-jmx guice-jndi guice-multibindings guice-parent guice-servlet guice-testlib guice-throwingproviders guile guile-devel gutenprint-libs-ui gutenprint-plugin gvfs-afc gvfs-afp gvfs-archive hamcrest-core hamcrest-core hamcrest-demo hamcrest-javadoc hawtjni hawtjni hawtjni hawtjni-javadoc hawtjni-runtime hawtjni-runtime HdrHistogram HdrHistogram-javadoc highlight-gui hplip-gui hspell httpcomponents-client-cache httpcomponents-client-javadoc httpcomponents-core-javadoc httpcomponents-project hwloc-plugins hyphen-fo hyphen-grc hyphen-hsb hyphen-ia hyphen-is hyphen-ku hyphen-mi hyphen-mn hyphen-sa hyphen-tk ibus-sayura ibus-table-devel ibus-table-tests ibus-typing-booster-tests icedax icu4j idm-console-framework ilmbase-devel ima-evm-utils0 imake intel-gpu-tools ipython isl isl-devel isorelax isorelax-javadoc istack-commons-runtime istack-commons-tools ivy-local iwl3945-firmware iwl4965-firmware iwl6000-firmware jacoco jaf jaf-javadoc jakarta-commons-httpclient-demo jakarta-commons-httpclient-javadoc jakarta-commons-httpclient-manual jakarta-oro-javadoc janino jansi-javadoc jansi-native jansi-native jansi-native-javadoc jarjar java-1.8.0-ibm java-1.8.0-ibm-demo java-1.8.0-ibm-devel java-1.8.0-ibm-headless java-1.8.0-ibm-jdbc java-1.8.0-ibm-plugin java-1.8.0-ibm-src java-1.8.0-ibm-webstart java-1.8.0-openjdk-accessibility java-1.8.0-openjdk-accessibility-fastdebug java-1.8.0-openjdk-accessibility-slowdebug java-atk-wrapper java_cup java_cup-javadoc java_cup-manual javacc javacc-demo javacc-javadoc javacc-manual javacc-maven-plugin javacc-maven-plugin-javadoc javaewah javamail-javadoc javapackages-local javaparser javapoet javassist javassist javassist-javadoc javassist-javadoc jaxen jaxen-demo jaxen-javadoc jboss-annotations-1.2-api jboss-interceptors-1.2-api jboss-interceptors-1.2-api jboss-interceptors-1.2-api-javadoc jboss-logmanager jboss-parent jctools jdepend-demo jdepend-javadoc jdependency jdependency-javadoc jdom jdom-demo 
jdom-javadoc jdom2 jdom2-javadoc jetty jetty-continuation jetty-http jetty-io jetty-security jetty-server jetty-servlet jetty-util jffi jflex jflex-javadoc jgit jline jline jline-javadoc jmc jmc-core-javadoc jnr-netdb jolokia-jvm-agent js-uglify jsch-javadoc json_simple jsoup-javadoc jsr-305-javadoc jss-javadoc jtidy jul-to-slf4j junit-javadoc junit-manual jvnet-parent jzlib-demo jzlib-javadoc khmeros-fonts-common kmod-redhat-oracleasm kurdit-unikurd-web-fonts kyotocabinet-libs ldapjdk-javadoc lensfun lensfun-devel lftp-scripts libaec libaec-devel libappindicator-gtk3 libappindicator-gtk3-devel libasan6 libatomic-static libavc1394 libblocksruntime libcacard libcacard-devel libcgroup libcgroup-pam libcgroup-tools libchamplain libchamplain-devel libchamplain-gtk libcroco libcroco-devel libcxl libcxl-devel libdap libdap-devel libdazzle-devel libdbusmenu libdbusmenu-devel libdbusmenu-doc libdbusmenu-gtk3 libdbusmenu-gtk3-devel libdnet libdnet-devel libdv libdv-devel libdwarf The libdwarf package is not included in RHEL 9. The elfutils package provides similar functionality. libdwarf-devel libdwarf-static libdwarf-tools libeasyfc libeasyfc-gobject libepubgen-devel libertas-sd8686-firmware libertas-usb8388-firmware libertas-usb8388-olpc-firmware libev-libevent-devel libev-source libgdither libGLEW libgovirt libguestfs-benchmarking libguestfs-gfs2 libguestfs-java libguestfs-java-devel libguestfs-javadoc libguestfs-tools libguestfs-tools-c libhugetlbfs libhugetlbfs-devel libhugetlbfs-utils libicu-doc libIDL libIDL-devel libidn The libidn package (which implements the IDNA 2003 standard) is not included in RHEL 9. You can migrate applications to libidn2 , which implements the IDNA 2008 standard and has a different feature set to libidn . libidn-devel libiec61883 libiec61883-devel libindicator-gtk3 libindicator-gtk3-devel libiscsi-devel libkkc libkkc-common libkkc-data liblogging libmalaga libmcpp libmetalink libmodulemd1 The libmodulemd1 package has been removed and is replaced by the libmodulemd package. libmongocrypt libmpcdec libmpcdec-devel libmtp-devel libmusicbrainz5 libmusicbrainz5-devel libnice libnice-devel libnice-gstreamer1 liboauth liboauth-devel libocxl-docs libpfm-static libpng12 libpsm2-compat libpurple libpurple-devel libraw1394 libraw1394-devel libreport-plugin-mailx libreport-plugin-rhtsupport libreport-plugin-ureport libreport-rhel libreport-rhel-bugzilla librpmem The librpmem package has been removed. Use the librpma package instead. 
librpmem-debug librpmem-devel libsass libsass-devel libselinux-python libslirp-devel libsqlite3x libssh2-docs libtar libtpms-devel libunwind libusal libvarlink libverto-libevent libvirt-admin libvirt-bash-completion libvirt-daemon-driver-storage-gluster libvirt-daemon-driver-storage-iscsi-direct libvirt-gconfig libvirt-gobject libvirt-wireshark libvmem libvmem-debug libvmem-devel libvmmalloc libvmmalloc-debug libvmmalloc-devel libvncserver libwmf libwmf-devel libwmf-lite libXNVCtrl libXNVCtrl-devel libXvMC libXvMC-devel libXxf86misc libXxf86misc-devel libyami log4j-over-slf4j log4j12 log4j12 log4j12-javadoc log4j12-javadoc lohit-malayalam-fonts lohit-nepali-fonts lucene lucene-analysis lucene-analyzers-smartcn lucene-queries lucene-queryparser lucene-sandbox lz4-java lz4-java-javadoc mailman make-devel malaga malaga-suomi-voikko marisa marisa-devel maven-antrun-plugin maven-antrun-plugin-javadoc maven-archiver-javadoc maven-artifact maven-artifact-manager maven-artifact-resolver-javadoc maven-artifact-transfer-javadoc maven-assembly-plugin maven-assembly-plugin-javadoc maven-cal10n-plugin maven-clean-plugin maven-clean-plugin-javadoc maven-common-artifact-filters-javadoc maven-compiler-plugin-javadoc maven-dependency-analyzer maven-dependency-analyzer-javadoc maven-dependency-plugin maven-dependency-plugin-javadoc maven-dependency-tree-javadoc maven-doxia maven-doxia-core maven-doxia-javadoc maven-doxia-logging-api maven-doxia-module-apt maven-doxia-module-confluence maven-doxia-module-docbook-simple maven-doxia-module-fml maven-doxia-module-latex maven-doxia-module-rtf maven-doxia-module-twiki maven-doxia-module-xdoc maven-doxia-module-xhtml maven-doxia-modules maven-doxia-sink-api maven-doxia-sitetools maven-doxia-sitetools-javadoc maven-doxia-test-docs maven-doxia-tests maven-enforcer-javadoc maven-failsafe-plugin maven-file-management-javadoc maven-filtering-javadoc maven-hawtjni-plugin maven-install-plugin maven-install-plugin-javadoc maven-invoker maven-invoker-javadoc maven-invoker-plugin maven-invoker-plugin-javadoc maven-jar-plugin-javadoc maven-javadoc maven-local maven-model maven-monitor maven-parent maven-plugin-build-helper-javadoc maven-plugin-bundle-javadoc maven-plugin-descriptor maven-plugin-registry maven-plugin-testing-javadoc maven-plugin-testing-tools maven-plugin-tools-ant maven-plugin-tools-beanshell maven-plugin-tools-javadoc maven-plugin-tools-javadocs maven-plugin-tools-model maven-plugins-pom maven-profile maven-project maven-remote-resources-plugin-javadoc maven-reporting-api maven-reporting-api-javadoc maven-reporting-impl maven-reporting-impl-javadoc maven-resolver-api maven-resolver-api maven-resolver-connector-basic maven-resolver-connector-basic maven-resolver-impl maven-resolver-impl maven-resolver-javadoc maven-resolver-spi maven-resolver-spi maven-resolver-test-util maven-resolver-transport-classpath maven-resolver-transport-file maven-resolver-transport-http maven-resolver-transport-wagon maven-resolver-transport-wagon maven-resolver-util maven-resolver-util maven-resources-plugin-javadoc maven-scm maven-script maven-script-ant maven-script-beanshell maven-script-interpreter maven-script-interpreter-javadoc maven-settings maven-shade-plugin maven-shade-plugin-javadoc maven-shared maven-shared-incremental-javadoc maven-shared-io-javadoc maven-shared-utils-javadoc maven-source-plugin-javadoc maven-surefire-javadoc maven-surefire-report-parser maven-surefire-report-plugin maven-test-tools maven-toolchain maven-verifier-javadoc maven-wagon-file 
maven-wagon-file maven-wagon-ftp maven-wagon-http maven-wagon-http maven-wagon-http-lightweight maven-wagon-http-shared maven-wagon-http-shared maven-wagon-javadoc maven-wagon-provider-api maven-wagon-provider-api maven-wagon-providers maven2 maven2 maven2-javadoc meanwhile mercurial mercurial-hgk mesa-libGLES-devel mesa-vdpau-drivers metis metis-devel mingw32-bzip2 mingw32-bzip2-static mingw32-cairo mingw32-expat mingw32-fontconfig mingw32-freetype mingw32-freetype-static mingw32-gstreamer1 mingw32-harfbuzz mingw32-harfbuzz-static mingw32-icu mingw32-libjpeg-turbo mingw32-libjpeg-turbo-static mingw32-libpng mingw32-libpng-static mingw32-libtiff mingw32-libtiff-static mingw32-openssl mingw32-readline mingw32-spice-vdagent mingw32-sqlite mingw32-sqlite-static mingw64-adwaita-icon-theme mingw64-bzip2 mingw64-bzip2-static mingw64-cairo mingw64-expat mingw64-fontconfig mingw64-freetype mingw64-freetype-static mingw64-gstreamer1 mingw64-harfbuzz mingw64-harfbuzz-static mingw64-icu mingw64-libjpeg-turbo mingw64-libjpeg-turbo-static mingw64-libpng mingw64-libpng-static mingw64-libtiff mingw64-libtiff-static mingw64-nettle mingw64-openssl mingw64-readline mingw64-spice-vdagent mingw64-sqlite mingw64-sqlite-static mockito-javadoc modello modello-javadoc mojo-parent mongo-c-driver motif-static mousetweaks mozjs52 mozjs52-devel mozjs60 mozjs60-devel mozvoikko msv-javadoc msv-manual munge-maven-plugin munge-maven-plugin-javadoc mythes-lb mythes-mi mythes-ne nafees-web-naskh-fonts nbd-3.21-2.el9 nbdkit-gzip-plugin nbdkit-plugin-python-common nbdkit-plugin-vddk nbdkit-tar-plugin ncompress The ncompress package has been removed. You can use a different compressing tool, such as gzip , zlib , or zstd . ncurses-compat-libs netcf netcf-devel netcf-libs network-scripts network-scripts-ppp nkf nodejs-devel nodejs-packaging nss-pam-ldapd The nss-pam-ldapd package has been removed. You can use SSSD instead. 
nss_nis objectweb-asm-javadoc objectweb-pom objenesis-javadoc ocaml-bisect-ppx ocaml-camlp4 ocaml-camlp4-devel ocaml-lwt-5.3.0-7.el9 ocaml-mmap-1.1.0-16.el9 ocaml-ocplib-endian-1.1-5.el9 ocaml-ounit-2.2.2-15.el9 ocaml-result-1.5-7.el9 ocaml-seq-0.2.2-4.el9 openblas-Rblas opencryptoki-tpmtok opencv-contrib opencv-core opencv-devel OpenEXR-devel openhpi openhpi-libs OpenIPMI-perl openssh-cavs openssh-ldap openssl-ibmpkcs11 os-maven-plugin os-maven-plugin-javadoc osgi-annotation-javadoc osgi-compendium-javadoc osgi-core-javadoc overpass-mono-fonts owasp-java-encoder-javadoc pakchois pandoc pandoc-common paps-libs paranamer paratype-pt-sans-caption-fonts parfait parfait-examples parfait-javadoc pcp-parfait-agent pcsc-lite-doc perl-B-Debug perl-B-Lint perl-Class-Factory-Util perl-Class-ISA perl-DateTime-Format-HTTP perl-DateTime-Format-Mail perl-File-CheckTree perl-homedir perl-libxml-perl perl-Locale-Codes perl-Mozilla-LDAP perl-NKF perl-Object-HashBase-tools perl-Package-DeprecationManager perl-Pod-LaTeX perl-Pod-Plainer perl-prefork perl-String-CRC32 perl-SUPER perl-Sys-Virt perl-tests perl-YAML-Syck phodav-2.5-4.el9 php-recode php-xmlrpc pidgin pidgin-devel pidgin-sipe pinentry-emacs pinentry-gtk pipewire0.2-devel pipewire0.2-libs platform-python-coverage plexus-ant-factory plexus-ant-factory-javadoc plexus-archiver-javadoc plexus-bsh-factory plexus-bsh-factory-javadoc plexus-build-api-javadoc plexus-cipher-javadoc plexus-classworlds-javadoc plexus-cli plexus-cli-javadoc plexus-compiler-extras plexus-compiler-javadoc plexus-compiler-pom plexus-component-api plexus-component-api-javadoc plexus-component-factories-pom plexus-components-pom plexus-containers-component-javadoc plexus-containers-component-metadata plexus-containers-container-default plexus-containers-javadoc plexus-i18n plexus-i18n-javadoc plexus-interactivity plexus-interactivity-api plexus-interactivity-javadoc plexus-interactivity-jline plexus-interpolation-javadoc plexus-io-javadoc plexus-languages-javadoc plexus-pom plexus-resources-javadoc plexus-sec-dispatcher-javadoc plexus-utils-javadoc plexus-velocity plexus-velocity-javadoc plymouth-plugin-throbgress pmreorder postgresql-test-rpm-macros powermock powermock-api-easymock powermock-api-mockito powermock-api-support powermock-common powermock-core powermock-javadoc powermock-junit4 powermock-reflect powermock-testng prometheus-jmx-exporter prometheus-jmx-exporter-openjdk11 ptscotch-mpich ptscotch-mpich-devel ptscotch-mpich-devel-parmetis ptscotch-openmpi ptscotch-openmpi-devel purple-sipe pygobject2-doc pygtk2 pygtk2-codegen pygtk2-devel pygtk2-doc python-nose-docs python-nss-doc python-podman-api python-psycopg2-doc python-pymongo-doc python-redis python-schedutils python-slip python-sphinx-locale python-sqlalchemy-doc python-varlink python-virtualenv-doc python2-backports python2-backports-ssl_match_hostname python2-bson python2-coverage python2-docs python2-docs-info python2-funcsigs python2-gluster python2-ipaddress python2-iso8601 python2-mock python2-nose python2-numpy-doc python2-psycopg2-debug python2-psycopg2-tests python2-pymongo python2-pymongo-gridfs python2-pytest-mock python2-sqlalchemy python2-tools python2-virtualenv python3-avahi python3-bson python3-click python3-coverage python3-cpio python3-custodia python3-docs python3-evdev python3-flask python3-gevent python3-html5lib python3-hypothesis python3-iso8601 python3-itsdangerous python3-javapackages python3-jwt python3-mock python3-networkx-core python3-nose python3-nss python3-openipmi The 
python3-openipmi package is no longer provided. python3-pyghmi has been introduced in order to provide a simpler Python API for the IPMI protocol, but the API is not compatible with the one of python3-openipmi . python3-pexpect python3-pillow python3-pillow-devel python3-pillow-doc python3-pillow-tk python3-ptyprocess python3-pydbus python3-pymongo python3-pymongo-gridfs python3-pyOpenSSL python3-reportlab python3-schedutils python3-scons python3-semantic_version python3-slip python3-slip-dbus python3-sqlalchemy The python3-sqlalchemy package has been removed. Customers must use Python connectors for MySQL or PostgreSQL directly. Python 3 database connector for MySQL is available in the python3-PyMySQL package. Python 3 database connector for PostgreSQL is available in the python3-psycopg2 package. python3-sure python3-syspurpose python3-unittest2 python3-virtualenv Use the venv module in Python 3 instead. python3-webencodings python3-werkzeug python3-whoosh python38-asn1crypto python38-atomicwrites python38-more-itertools python38-numpy-doc python38-psycopg2-doc python38-psycopg2-tests python39-more-itertools python39-numpy-doc python39-psycopg2-doc python39-psycopg2-tests python39-pybind11 python39-pybind11-devel qdox-javadoc qemu-kvm-block-gluster qemu-kvm-block-iscsi qemu-kvm-block-ssh qemu-kvm-device-display-virtio-gpu-gl qemu-kvm-device-display-virtio-gpu-pci-gl qemu-kvm-device-display-virtio-vga-gl qemu-kvm-hw-usbredir qemu-kvm-tests qemu-kvm-ui-spice qpdf qpdf-doc qperf The qperf package has been removed. You can use the perftest or iperf3 package instead. qpid-proton qrencode qrencode-devel qrencode-libs qt5-qtcanvas3d qt5-qtcanvas3d-examples rarian rarian-compat re2c recode redhat-lsb redhat-lsb-core redhat-lsb-cxx redhat-lsb-desktop redhat-lsb-languages redhat-lsb-printing redhat-lsb-submod-multimedia redhat-lsb-submod-security redhat-menus redhat-support-lib-python redhat-support-tool reflections regexp-javadoc relaxngDatatype resteasy-javadoc rhsm-gtk rpm-plugin-prioreset rpmemd rubygem-abrt rubygem-abrt-doc rubygem-bson rubygem-bson-doc rubygem-bundler-doc rubygem-mongo rubygem-mongo-doc rubygem-net-telnet rubygem-xmlrpc s390utils-cmsfs The s390utils-cmsfs package has been removed and is replaced by the s390utils-cmsfs-fuse package. samyak-devanagari-fonts samyak-fonts-common samyak-gujarati-fonts samyak-malayalam-fonts samyak-odia-fonts samyak-tamil-fonts sane-frontends The sane-frontends package has been removed. Its functionality is covered by the scanimage or xsane package. sanlk-reset sat4j scala scotch scotch-devel SDL_sound selinux-policy-minimum shim-ia32 shrinkwrap sil-padauk-book-fonts sisu-inject sisu-inject sisu-javadoc sisu-mojos sisu-mojos-javadoc sisu-plexus sisu-plexus skkdic slf4j-ext slf4j-javadoc slf4j-jcl slf4j-log4j12 slf4j-manual slf4j-sources SLOF smc-anjalioldlipi-fonts smc-dyuthi-fonts smc-fonts-common smc-kalyani-fonts smc-raghumalayalam-fonts smc-suruma-fonts softhsm-devel sonatype-oss-parent sonatype-plugins-parent sos-collector sparsehash-devel spax The spax package has been removed. You can use the tar and cpio commands instead. 
spec-version-maven-plugin spec-version-maven-plugin-javadoc spice-0.14.3-4.el9 spice-client-win-x64 spice-client-win-x86 spice-glib spice-glib-devel spice-gtk spice-gtk-tools spice-gtk3 spice-gtk3-devel spice-gtk3-vala spice-parent spice-qxl-wddm-dod spice-qxl-xddm spice-server spice-server-devel spice-streaming-agent spice-vdagent-win-x64 spice-vdagent-win-x86 star stax-ex stax2-api stringtemplate stringtemplate4 subscription-manager-initial-setup-addon subscription-manager-migration subscription-manager-migration-data subversion-javahl SuperLU SuperLU-devel swtpm-devel swtpm-tools-pkcs11 system-storage-manager systemd-tests tcl-brlapi testng testng-javadoc thai-scalable-laksaman-fonts tibetan-machine-uni-fonts timedatex The timedatex package has been removed. The systemd package provides the systemd-timedated service, which replaces timedatex . torque torque-devel torque-libs tpm-quote-tools tpm-tools tpm-tools-pkcs11 treelayout trousers trousers-devel trousers-lib tuned-profiles-compat tuned-profiles-nfv-host-bin tuned-utils-systemtap tycho uglify-js unbound-devel univocity-output-tester usbguard-notifier utf8cpp uthash uthash-devel velocity-demo velocity-javadoc velocity-manual vinagre vino virt-dib virt-p2v-maker vm-dump-metrics-devel voikko-tools vorbis-tools weld-parent woodstox-core wqy-microhei-fonts wqy-unibit-fonts xalan-j2-demo xalan-j2-javadoc xalan-j2-manual xalan-j2-xsltc xbean-javadoc xdelta xerces-j2-demo xerces-j2-javadoc xinetd xml-commons-apis-javadoc xml-commons-apis-manual xml-commons-resolver-javadoc xmlgraphics-commons xmlstreambuffer xmlunit-javadoc xmvn-api xmvn-bisect xmvn-connector-aether xmvn-connector-ivy xmvn-install xmvn-javadoc xmvn-parent-pom xmvn-resolve xmvn-subst xmvn-tools-pom xorg-sgml-doctools xorg-x11-apps xorg-x11-docs xorg-x11-drv-ati xorg-x11-drv-intel xorg-x11-drv-nouveau xorg-x11-drv-qxl xorg-x11-drv-vesa xorg-x11-server-Xspice xorg-x11-xkb-utils-devel xpp3 xsane-gimp xsom xz-java-javadoc yajl-devel yp-tools ypbind ypserv yum-rhn-plugin zsh-html A.5. Packages with removed support Certain packages in RHEL 9 are distributed through the CodeReady Linux Builder repository, which contains unsupported packages for use by developers. The following packages are distributed in a supported repository in RHEL 8 and in the CodeReady Linux Builder repository RHEL 9: Note This list covers only packages that are supported in RHEL 8 but not in RHEL 9. 
Package RHEL 8 repository apache-commons-collections rhel8-AppStream apache-commons-compress rhel8-AppStream aspell rhel8-AppStream bind-devel rhel8-AppStream createrepo_c-devel rhel8-AppStream fstrm-devel rhel8-AppStream gdbm rhel8-BaseOS gdbm-devel rhel8-BaseOS geoclue2-demos rhel8-AppStream gobject-introspection-devel rhel8-AppStream gtkspell3 rhel8-AppStream hivex-devel rhel8-AppStream kernel-cross-headers rhel8-BaseOS ksc rhel8-BaseOS libatomic_ops rhel8-AppStream libestr-devel rhel8-AppStream libfdisk-devel rhel8-BaseOS libguestfs-devel rhel8-AppStream libguestfs-gobject rhel8-AppStream libguestfs-gobject-devel rhel8-AppStream libguestfs-man-pages-ja rhel8-AppStream libguestfs-man-pages-uk rhel8-AppStream libica-devel rhel8-BaseOS libiscsi-devel rhel8-AppStream libjose-devel rhel8-AppStream libldb-devel rhel8-BaseOS libluksmeta-devel rhel8-AppStream libnbd-devel rhel8-AppStream libslirp-devel rhel8-AppStream libtalloc-devel rhel8-BaseOS libtdb-devel rhel8-BaseOS libtevent-devel rhel8-BaseOS libvirt-devel rhel8-AppStream libvirt-docs rhel8-AppStream libvirt-lock-sanlock rhel8-AppStream libwinpr-devel rhel8-AppStream libzip-devel rhel8-AppStream lua-guestfs rhel8-AppStream mariadb-devel rhel8-AppStream mariadb-embedded-devel rhel8-AppStream mariadb-test rhel8-AppStream multilib-rpm-config rhel8-AppStream mysql-devel rhel8-AppStream mysql-libs rhel8-AppStream mysql-test rhel8-AppStream nbdkit-devel rhel8-AppStream nbdkit-example-plugins rhel8-AppStream nginx-mod-devel rhel8-AppStream nss_db rhel8-BaseOS openblas-threads rhel8-AppStream perl-IO-String rhel8-AppStream perl-Module-Pluggable rhel8-AppStream perl-Module-Runtime rhel8-AppStream perl-Parse-Yapp rhel8-BaseOS postgresql-server-devel rhel8-AppStream postgresql-test rhel8-AppStream postgresql-upgrade-devel rhel8-AppStream protobuf-c-compiler rhel8-AppStream protobuf-c-devel rhel8-AppStream protobuf-compiler rhel8-AppStream python3-gobject-base rhel8-AppStream python3-hivex rhel8-AppStream python3-ipatests rhel8-AppStream python3-libguestfs rhel8-AppStream qclib-devel rhel8-BaseOS ruby-hivex rhel8-AppStream ruby-libguestfs rhel8-AppStream samba-pidl rhel8-BaseOS samba-test rhel8-BaseOS samba-test-libs rhel8-BaseOS sendmail-milter rhel8-AppStream spice-protocol rhel8-BaseOS supermin-devel rhel8-AppStream swig rhel8-AppStream swig-doc rhel8-AppStream swig-gdb rhel8-AppStream turbojpeg rhel8-AppStream unixODBC-devel rhel8-AppStream usbredir-devel rhel8-AppStream velocity rhel8-AppStream
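If you need one of these CodeReady Linux Builder packages on a RHEL 9 system, the repository must be enabled explicitly, because it is not enabled by default and its content is unsupported. The following is a minimal sketch, assuming an x86_64 machine registered with subscription-manager; the architecture in the repository ID and the mariadb-devel package are only illustrative choices taken from the table above:

# Enable the CodeReady Linux Builder repository (unsupported, developer-oriented content)
subscription-manager repos --enable codeready-builder-for-rhel-9-x86_64-rpms

# Confirm the repository is active, then install an example package from it
dnf repolist --enabled | grep codeready-builder
dnf install mariadb-devel

On systems that do not use subscription-manager, the same repository can usually be enabled with dnf config-manager --set-enabled once it is defined in a .repo file, but the exact repository definition depends on how the system receives its content.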
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/considerations_in_adopting_rhel_9/assembly_changes-to-packages_considerations-in-adopting-rhel-9
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message.
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/configuring_network_functions_virtualization/making-open-source-more-inclusive
Chapter 2. CatalogSource [operators.coreos.com/v1alpha1]
Chapter 2. CatalogSource [operators.coreos.com/v1alpha1] Description CatalogSource is a repository of CSVs, CRDs, and operator packages. Type object Required metadata spec 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object status object 2.1.1. .spec Description Type object Required sourceType Property Type Description address string Address is a host that OLM can use to connect to a pre-existing registry. Format: <registry-host or ip>:<port> Only used when SourceType = SourceTypeGrpc. Ignored when the Image field is set. configMap string ConfigMap is the name of the ConfigMap to be used to back a configmap-server registry. Only used when SourceType = SourceTypeConfigmap or SourceTypeInternal. description string displayName string Metadata grpcPodConfig object GrpcPodConfig exposes different overrides for the pod spec of the CatalogSource Pod. Only used when SourceType = SourceTypeGrpc and Image is set. icon object image string Image is an operator-registry container image to instantiate a registry-server with. Only used when SourceType = SourceTypeGrpc. If present, the address field is ignored. priority integer Priority field assigns a weight to the catalog source to prioritize them so that it can be consumed by the dependency resolver. Usage: Higher weight indicates that this catalog source is preferred over lower weighted catalog sources during dependency resolution. The range of the priority value can go from positive to negative in the range of int32. The default value to a catalog source with unassigned priority would be 0. The catalog source with the same priority values will be ranked lexicographically based on its name. publisher string secrets array (string) Secrets represent set of secrets that can be used to access the contents of the catalog. It is best to keep this list small, since each will need to be tried for every catalog entry. sourceType string SourceType is the type of source updateStrategy object UpdateStrategy defines how updated catalog source images can be discovered Consists of an interval that defines polling duration and an embedded strategy type 2.1.2. .spec.grpcPodConfig Description GrpcPodConfig exposes different overrides for the pod spec of the CatalogSource Pod. Only used when SourceType = SourceTypeGrpc and Image is set. Type object Property Type Description nodeSelector object (string) NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node. priorityClassName string If specified, indicates the pod's priority. If not specified, the pod priority will be default or zero if there is no default. securityContextConfig string SecurityContextConfig can be one of legacy or restricted . 
The CatalogSource's pod is either injected with the right pod.spec.securityContext and pod.spec.container[*].securityContext values to allow the pod to run in Pod Security Admission (PSA) restricted mode, or doesn't set these values at all, in which case the pod can only be run in PSA baseline or privileged namespaces. Currently, if the SecurityContextConfig is unspecified, the default value of legacy is used. Specifying a value other than legacy or restricted results in a validation error. When using older catalog images, which could not be run in restricted mode, the SecurityContextConfig should be set to legacy. In a future version the default will be set to restricted; catalog maintainers should rebuild their catalogs with a version of opm that supports running catalogSource pods in restricted mode to prepare for these changes. More information about PSA can be found here: https://kubernetes.io/docs/concepts/security/pod-security-admission/ tolerations array Tolerations are the catalog source's pod's tolerations. tolerations[] object The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. 2.1.3. .spec.grpcPodConfig.tolerations Description Tolerations are the catalog source's pod's tolerations. Type array 2.1.4. .spec.grpcPodConfig.tolerations[] Description The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. Type object Property Type Description effect string Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. key string Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys. operator string Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category. tolerationSeconds integer TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system. value string Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string. 2.1.5. .spec.icon Description Type object Required base64data mediatype Property Type Description base64data string mediatype string 2.1.6. .spec.updateStrategy Description UpdateStrategy defines how updated catalog source images can be discovered. Consists of an interval that defines polling duration and an embedded strategy type. Type object Property Type Description registryPoll object 2.1.7. .spec.updateStrategy.registryPoll Description Type object Property Type Description interval string Interval is used to determine the time interval between checks of the latest catalog source version. The catalog operator polls to see if a new version of the catalog source is available. If available, the latest image is pulled and gRPC traffic is directed to the latest catalog source. 2.1.8.
.status Description Type object Property Type Description conditions array Represents the state of a CatalogSource. Note that Message and Reason represent the original status information, which may be migrated to be conditions based in the future. Any new features introduced will use conditions. conditions[] object Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } configMapReference object connectionState object latestImageRegistryPoll string The last time the CatalogSource image registry has been polled to ensure the image is up-to-date message string A human readable message indicating details about why the CatalogSource is in this condition. reason string Reason is the reason the CatalogSource was transitioned to its current state. registryService object 2.1.9. .status.conditions Description Represents the state of a CatalogSource. Note that Message and Reason represent the original status information, which may be migrated to be conditions based in the future. Any new features introduced will use conditions. Type array 2.1.10. .status.conditions[] Description Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } Type object Required lastTransitionTime message reason status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string message is a human readable message indicating details about the transition. This may be an empty string. observedGeneration integer observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. 
--- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt) 2.1.11. .status.configMapReference Description Type object Required name namespace Property Type Description lastUpdateTime string name string namespace string resourceVersion string uid string UID is a type that holds unique ID values, including UUIDs. Because we don't ONLY use UUIDs, this is an alias to string. Being a type captures intent and helps make sure that UIDs and names do not get conflated. 2.1.12. .status.connectionState Description Type object Required lastObservedState Property Type Description address string lastConnect string lastObservedState string 2.1.13. .status.registryService Description Type object Property Type Description createdAt string port string protocol string serviceName string serviceNamespace string 2.2. API endpoints The following API endpoints are available: /apis/operators.coreos.com/v1alpha1/catalogsources GET : list objects of kind CatalogSource /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/catalogsources DELETE : delete collection of CatalogSource GET : list objects of kind CatalogSource POST : create a CatalogSource /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/catalogsources/{name} DELETE : delete a CatalogSource GET : read the specified CatalogSource PATCH : partially update the specified CatalogSource PUT : replace the specified CatalogSource /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/catalogsources/{name}/status GET : read status of the specified CatalogSource PATCH : partially update status of the specified CatalogSource PUT : replace status of the specified CatalogSource 2.2.1. /apis/operators.coreos.com/v1alpha1/catalogsources Table 2.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. 
Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind CatalogSource Table 2.2. HTTP responses HTTP code Reponse body 200 - OK CatalogSourceList schema 401 - Unauthorized Empty 2.2.2. /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/catalogsources Table 2.3. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 2.4. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of CatalogSource Table 2.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. 
Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 2.6. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind CatalogSource Table 2.7. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. 
This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 2.8. HTTP responses HTTP code Reponse body 200 - OK CatalogSourceList schema 401 - Unauthorized Empty HTTP method POST Description create a CatalogSource Table 2.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.10. Body parameters Parameter Type Description body CatalogSource schema Table 2.11. HTTP responses HTTP code Reponse body 200 - OK CatalogSource schema 201 - Created CatalogSource schema 202 - Accepted CatalogSource schema 401 - Unauthorized Empty 2.2.3. /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/catalogsources/{name} Table 2.12. 
Global path parameters Parameter Type Description name string name of the CatalogSource namespace string object name and auth scope, such as for teams and projects Table 2.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a CatalogSource Table 2.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 2.15. Body parameters Parameter Type Description body DeleteOptions schema Table 2.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified CatalogSource Table 2.17. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 2.18. HTTP responses HTTP code Reponse body 200 - OK CatalogSource schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified CatalogSource Table 2.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. 
This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.20. Body parameters Parameter Type Description body Patch schema Table 2.21. HTTP responses HTTP code Reponse body 200 - OK CatalogSource schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified CatalogSource Table 2.22. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.23. Body parameters Parameter Type Description body CatalogSource schema Table 2.24. HTTP responses HTTP code Reponse body 200 - OK CatalogSource schema 201 - Created CatalogSource schema 401 - Unauthorized Empty 2.2.4. /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/catalogsources/{name}/status Table 2.25. Global path parameters Parameter Type Description name string name of the CatalogSource namespace string object name and auth scope, such as for teams and projects Table 2.26. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified CatalogSource Table 2.27. 
Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 2.28. HTTP responses HTTP code Reponse body 200 - OK CatalogSource schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified CatalogSource Table 2.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.30. Body parameters Parameter Type Description body Patch schema Table 2.31. HTTP responses HTTP code Reponse body 200 - OK CatalogSource schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified CatalogSource Table 2.32. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. 
This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.33. Body parameters Parameter Type Description body CatalogSource schema Table 2.34. HTTP responses HTTP code Reponse body 200 - OK CatalogSource schema 201 - Created CatalogSource schema 401 - Unauthorized Empty
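For reference, the following manifest exercises the spec fields documented in this chapter. It is an illustrative sketch only: the catalog name, namespace, index image reference, priority, and polling interval are placeholder values, not values required by the API. Applying it with the oc client creates the resource through the API endpoints listed above:
oc apply -f - <<EOF
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: my-catalog                     # placeholder name
  namespace: openshift-marketplace     # typical location, but any namespace can hold a CatalogSource
spec:
  sourceType: grpc                     # serve the catalog over gRPC from the image below
  image: registry.example.com/my-org/my-catalog-index:latest   # placeholder index image
  displayName: My Example Catalog
  publisher: "Example, Inc."
  priority: -100                       # lower weight than catalogs with higher priority values during dependency resolution
  grpcPodConfig:
    securityContextConfig: restricted  # run the registry pod in PSA restricted mode
  updateStrategy:
    registryPoll:
      interval: 30m                    # poll the registry for a newer catalog image every 30 minutes
EOF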
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/operatorhub_apis/catalogsource-operators-coreos-com-v1alpha1
Part III. Managing and monitoring business processes in Business Central
Part III. Managing and monitoring business processes in Business Central As a process administrator, you can use Business Central in Red Hat Process Automation Manager to manage and monitor process instances and tasks running on a number of projects. From Business Central you can start a new process instance, verify the state of all process instances, and abort processes. You can view the list of jobs and tasks associated with your processes, as well as understand and communicate any process errors. Prerequisites Red Hat JBoss Enterprise Application Platform 7.4 is installed. For more information, see Red Hat JBoss Enterprise Application Platform 7.4 Installation Guide . Red Hat Process Automation Manager is installed. For more information, see Planning a Red Hat Process Automation Manager installation . Red Hat Process Automation Manager is running and you can log in to Business Central with the process-admin role. For more information, see Planning a Red Hat Process Automation Manager installation .
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_process_services_in_red_hat_process_automation_manager/assembly-managing-and-monitoring-business-processes
Chapter 6. Automating Client Registration with the CLI
Chapter 6. Automating Client Registration with the CLI The Client Registration CLI is a command-line interface (CLI) tool for application developers to configure new clients in a self-service manner when integrating with Red Hat Single Sign-On. It is specifically designed to interact with Red Hat Single Sign-On Client Registration REST endpoints. It is necessary to create or obtain a client configuration for any application to be able to use Red Hat Single Sign-On. You usually configure a new client for each new application hosted on a unique host name. When an application interacts with Red Hat Single Sign-On, the application identifies itself with a client ID so Red Hat Single Sign-On can provide a login page, single sign-on (SSO) session management, and other services. You can configure application clients from a command line with the Client Registration CLI, and you can use it in shell scripts. To allow a particular user to use the Client Registration CLI, the Red Hat Single Sign-On administrator typically uses the Admin Console to configure a new user with proper roles or to configure a new client and client secret to grant access to the Client Registration REST API. 6.1. Configuring a new regular user for use with Client Registration CLI Procedure Log in to the Admin Console (for example, http://localhost:8080/auth/admin ) as admin . Select a realm to administer. If you want to use an existing user, select that user to edit; otherwise, create a new user. Select Role Mappings > Client Roles > realm-management . If you are in the master realm, select NAME-realm , where NAME is the name of the target realm. You can grant access to any other realm to users in the master realm. Select Available Roles > manage-clients to grant a full set of client management permissions. Another option is to choose view-clients for read-only or create-client to create new clients. Note These permissions grant the user the capability to perform operations without the use of Initial Access Token or Registration Access Token . It is possible to not assign any realm-management roles to a user. In that case, a user can still log in with the Client Registration CLI but cannot use it without an Initial Access Token. Trying to perform any operations without a token results in a 403 Forbidden error. The Administrator can issue Initial Access Tokens from the Admin Console through the Realm Settings > Client Registration > Initial Access Token menu. 6.2. Configuring a client for use with the Client Registration CLI By default, the server recognizes the Client Registration CLI as the admin-cli client, which is configured automatically for every new realm. No additional client configuration is necessary when logging in with a user name. Procedure Create a client (for example, reg-cli ) if you want to use a separate client configuration for the Client Registration CLI. Toggle the Standard Flow Enabled setting to Off . Strengthen the security by configuring the client Access Type as Confidential and selecting Credentials > ClientId and Secret . Note You can configure either Client Id and Secret or Signed JWT under the Credentials tab . Enable service accounts if you want to use a service account associated with the client by selecting a client to edit in the Clients section of the Admin Console . Under Settings , change the Access Type to Confidential , toggle the Service Accounts Enabled setting to On , and click Save . Click Service Account Roles and select desired roles to configure the access for the service account. 
For the details on what roles to select, see Section 6.1, "Configuring a new regular user for use with Client Registration CLI" . Toggle the Direct Access Grants Enabled setting it to On if you want to use a regular user account instead of a service account. If the client is configured as Confidential , provide the configured secret when running kcreg config credentials by using the --secret option. Specify which clientId to use (for example, --client reg-cli ) when running kcreg config credentials . With the service account enabled, you can omit specifying the user when running kcreg config credentials and only provide the client secret or keystore information. 6.3. Installing the Client Registration CLI The Client Registration CLI is packaged inside the Red Hat Single Sign-On Server distribution. You can find execution scripts inside the bin directory. The Linux script is called kcreg.sh , and the Windows script is called kcreg.bat . Add the Red Hat Single Sign-On server directory to your PATH when setting up the client for use from any location on the file system. For example, on: Linux: Windows: KEYCLOAK_HOME refers to a directory where the Red Hat Single Sign-On Server distribution was unpacked. 6.4. Using the Client Registration CLI Procedure Start an authenticated session by logging in with your credentials. Run commands on the Client Registration REST endpoint. For example, on: Linux: Windows: Note In a production environment, Red Hat Single Sign-On has to be accessed with https: to avoid exposing tokens to network sniffers. If a server's certificate is not issued by one of the trusted certificate authorities (CAs) that are included in Java's default certificate truststore, prepare a truststore.jks file and instruct the Client Registration CLI to use it. For example, on: Linux: Windows: 6.4.1. Logging in Procedure Specify a server endpoint URL and a realm when you log in with the Client Registration CLI. Specify a user name or a client id, which results in a special service account being used. When using a user name, you must use a password for the specified user. When using a client ID, you use a client secret or a Signed JWT instead of a password. Regardless of the login method, the account that logs in needs proper permissions to be able to perform client registration operations. Keep in mind that any account in a non-master realm can only have permissions to manage clients within the same realm. If you need to manage different realms, you can either configure multiple users in different realms, or you can create a single user in the master realm and add roles for managing clients in different realms. You cannot configure users with the Client Registration CLI. Use the Admin Console web interface or the Admin Client CLI to configure users. See Server Administration Guide for more details. When kcreg successfully logs in, it receives authorization tokens and saves them in a private configuration file so the tokens can be used for subsequent invocations. See Section 6.4.2, "Working with alternative configurations" for more information on configuration files. See the built-in help for more information on using the Client Registration CLI. For example, on: Linux: Windows: See kcreg config credentials --help for more information about starting an authenticated session. 6.4.2. Working with alternative configurations By default, the Client Registration CLI automatically maintains a configuration file at a default location, ./.keycloak/kcreg.config , under the user's home directory. 
You can use the --config option to point to a different file or location to maintain multiple authenticated sessions in parallel. It is the safest way to perform operations tied to a single configuration file from a single thread. Important Do not make the configuration file visible to other users on the system. The configuration file contains access tokens and secrets that should be kept private. You might want to avoid storing secrets inside a configuration file by using the --no-config option with all of your commands, even though it is less convenient and requires more token requests to do so. Specify all authentication information with each kcreg invocation. 6.4.3. Initial Access and Registration Access Tokens Developers who do not have an account configured at the Red Hat Single Sign-On server they want to use can use the Client Registration CLI. This is possible only when the realm administrator issues a developer an Initial Access Token. It is up to the realm administrator to decide how and when to issue and distribute these tokens. The realm administrator can limit the maximum age of the Initial Access Token and the total number of clients that can be created with it. Once a developer has an Initial Access Token, the developer can use it to create new clients without authenticating with kcreg config credentials . The Initial Access Token can be stored in the configuration file or specified as part of the kcreg create command. For example, on: Linux: or Windows: or When using an Initial Access Token, the server response includes a newly issued Registration Access Token. Any subsequent operation for that client needs to be performed by authenticating with that token, which is only valid for that client. The Client Registration CLI automatically uses its private configuration file to save and use this token with its associated client. As long as the same configuration file is used for all client operations, the developer does not need to authenticate to read, update, or delete a client that was created this way. See Client Registration for more information about Initial Access and Registration Access Tokens. Run the kcreg config initial-token --help and kcreg config registration-token --help commands for more information on how to configure tokens with the Client Registration CLI. 6.4.4. Creating a client configuration The first task after authenticating with credentials or configuring an Initial Access Token is usually to create a new client. Often you might want to use a prepared JSON file as a template and set or override some of the attributes. The following example shows how to read a JSON file, override any client id it may contain, set any other attributes, and print the configuration to a standard output after successful creation. Linux: Windows: Run the kcreg create --help for more information about the kcreg create command. You can use kcreg attrs to list available attributes. Keep in mind that many configuration attributes are not checked for validity or consistency. It is up to you to specify proper values. Remember that you should not have any id fields in your template and should not specify them as arguments to the kcreg create command. 6.4.5. Retrieving a client configuration You can retrieve an existing client by using the kcreg get command. For example, on: Linux: Windows: You can also retrieve the client configuration as an adapter configuration file, which you can package with your web application. 
For example, on: Linux: Windows: Run the kcreg get --help command for more information about the kcreg get command. 6.4.6. Modifying a client configuration There are two methods for updating a client configuration. One method is to submit a complete new state to the server after getting the current configuration, saving it to a file, editing it, and posting it back to the server. For example, on: Linux: Windows: The second method fetches the current client, sets or deletes fields on it, and posts it back in one step. For example, on: Linux: Windows: You can also use a file that contains only changes to be applied so you do not have to specify too many values as arguments. In this case, specify --merge to tell the Client Registration CLI that rather than treating the JSON file as a full, new configuration, it should treat it as a set of attributes to be applied over the existing configuration. For example, on: Linux: Windows: Run the kcreg update --help command for more information about the kcreg update command. 6.4.7. Deleting a client configuration Use the following example to delete a client. Linux: Windows: Run the kcreg delete --help command for more information about the kcreg delete command. 6.4.8. Refreshing invalid Registration Access Tokens When performing a create, read, update, and delete (CRUD) operation using the --no-config mode, the Client Registration CLI cannot handle Registration Access Tokens for you. In that case, it is possible to lose track of the most recently issued Registration Access Token for a client, which makes it impossible to perform any further CRUD operations on that client without authenticating with an account that has manage-clients permissions. If you have permissions, you can issue a new Registration Access Token for the client and have it printed to a standard output or saved to a configuration file of your choice. Otherwise, you have to ask the realm administrator to issue a new Registration Access Token for your client and send it to you. You can then pass it to any CRUD command via the --token option. You can also use the kcreg config registration-token command to save the new token in a configuration file and have the Client Registration CLI automatically handle it for you from that point on. Run the kcreg update-token --help command for more information about the kcreg update-token command. 6.5. Troubleshooting Q: When logging in, I get an error: Parameter client_assertion_type is missing [invalid_client] . A: This error means your client is configured with Signed JWT token credentials, which means you have to use the --keystore parameter when logging in.
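For the Signed JWT case described in the troubleshooting entry, the login supplies a keystore instead of a password. The following is a minimal sketch: the client name reg-cli , the keystore path, and the alias are illustrative placeholders, and the exact options available in your version can be confirmed with kcreg config credentials --help . For example, on: Linux:
kcreg.sh config credentials --server http://localhost:8080/auth --realm demo --client reg-cli --keystore ~/.keycloak/keystore.jks --storepass <keystore-password> --keypass <key-password> --alias reg-cli
Windows:
c:\> kcreg config credentials --server http://localhost:8080/auth --realm demo --client reg-cli --keystore %HOMEPATH%\.keycloak\keystore.jks --storepass <keystore-password> --keypass <key-password> --alias reg-cli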
[ "export PATH=USDPATH:USDKEYCLOAK_HOME/bin kcreg.sh", "c:\\> set PATH=%PATH%;%KEYCLOAK_HOME%\\bin c:\\> kcreg", "kcreg.sh config credentials --server http://localhost:8080/auth --realm demo --user user --client reg-cli kcreg.sh create -s clientId=my_client -s 'redirectUris=[\"http://localhost:8980/myapp/*\"]' kcreg.sh get my_client", "c:\\> kcreg config credentials --server http://localhost:8080/auth --realm demo --user user --client reg-cli c:\\> kcreg create -s clientId=my_client -s \"redirectUris=[\\\"http://localhost:8980/myapp/*\\\"]\" c:\\> kcreg get my_client", "kcreg.sh config truststore --trustpass USDPASSWORD ~/.keycloak/truststore.jks", "c:\\> kcreg config truststore --trustpass %PASSWORD% %HOMEPATH%\\.keycloak\\truststore.jks", "kcreg.sh help", "c:\\> kcreg help", "kcreg.sh config initial-token USDTOKEN kcreg.sh create -s clientId=myclient", "kcreg.sh create -s clientId=myclient -t USDTOKEN", "c:\\> kcreg config initial-token %TOKEN% c:\\> kcreg create -s clientId=myclient", "c:\\> kcreg create -s clientId=myclient -t %TOKEN%", "kcreg.sh create -f client-template.json -s clientId=myclient -s baseUrl=/myclient -s 'redirectUris=[\"/myclient/*\"]' -o", "C:\\> kcreg create -f client-template.json -s clientId=myclient -s baseUrl=/myclient -s \"redirectUris=[\\\"/myclient/*\\\"]\" -o", "kcreg.sh get myclient", "C:\\> kcreg get myclient", "kcreg.sh get myclient -e install > keycloak.json", "C:\\> kcreg get myclient -e install > keycloak.json", "kcreg.sh get myclient > myclient.json vi myclient.json kcreg.sh update myclient -f myclient.json", "C:\\> kcreg get myclient > myclient.json C:\\> notepad myclient.json C:\\> kcreg update myclient -f myclient.json", "kcreg.sh update myclient -s enabled=false -d redirectUris", "C:\\> kcreg update myclient -s enabled=false -d redirectUris", "kcreg.sh update myclient --merge -d redirectUris -f mychanges.json", "C:\\> kcreg update myclient --merge -d redirectUris -f mychanges.json", "kcreg.sh delete myclient", "C:\\> kcreg delete myclient" ]
https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.6/html/securing_applications_and_services_guide/client_registration_cli
Chapter 3. Types of instance storage
Chapter 3. Types of instance storage The virtual storage that is available to an instance is defined by the flavor used to launch the instance. The following virtual storage resources can be associated with an instance: Instance disk Ephemeral storage Swap storage Persistent block storage volumes Config drive 3.1. Instance disk The instance disk created to store instance data depends on the boot source that you use to create the instance. The instance disk of an instance that you boot from an image is controlled by the Compute service and deleted when the instance is deleted. The instance disk of an instance that you boot from a volume is a persistent volume provided by the Block Storage service. 3.2. Instance ephemeral storage You can specify that an ephemeral disk is created for the instance by choosing a flavor that configures an ephemeral disk. This ephemeral storage is an empty additional disk that is available to an instance. This storage value is defined by the instance flavor. The default value is 0, meaning that no secondary ephemeral storage is created. The ephemeral disk appears in the same way as a plugged-in hard drive or thumb drive. It is available as a block device, which you can check using the lsblk command. You can mount it and use it however you normally use a block device. You cannot preserve or reference that disk beyond the instance it is attached to. Note Ephemeral storage data is not included in instance snapshots, and is not available on instances that are shelved and then unshelved. 3.3. Instance swap storage You can specify that a swap disk is created for the instance by choosing a flavor that configures a swap disk. This swap storage is an additional disk that is available to the instance for use as swap space for the running operating system. 3.4. Instance block storage A block storage volume is persistent storage that is available to an instance regardless of the state of the running instance. You can attach multiple block devices to an instance, one of which can be a bootable volume. Note When you use a block storage volume for your instance disk data, the block storage volume persists for any instance rebuilds, even when an instance is rebuilt with a new image that requests that a new volume is created. 3.5. Config drive You can attach a config drive to an instance when it boots. The config drive is presented to the instance as a read-only drive. The instance can mount this drive and read files from it. You can use the config drive as a source for cloud-init information. Config drives are useful when combined with cloud-init for server bootstrapping, and when you want to pass large files to your instances. For example, you can configure cloud-init to automatically mount the config drive and run the setup scripts during the initial instance boot. Config drives are created with the volume label of config-2 , and attached to the instance when it boots. The contents of any additional files passed to the config drive are added to the user_data file in the openstack/{version}/ directory of the config drive. cloud-init retrieves the user data from this file.
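As an illustration of how these disks appear from inside a running instance, the following commands can be used; device names vary by hypervisor, image, and flavor, so the paths shown here are examples rather than guaranteed values:
lsblk                                                 # ephemeral and swap disks appear as additional block devices
blkid -t LABEL="config-2" -odevice                    # locate the config drive by its config-2 volume label
sudo mkdir -p /mnt/config
sudo mount -o ro /dev/disk/by-label/config-2 /mnt/config   # inspect openstack/{version}/user_data on the config drive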
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/creating_and_managing_instances/con_types-of-instance-storage_osp
Chapter 6. Managing projects
Chapter 6. Managing projects As a cloud administrator, you can create and manage projects. A project is a pool of shared virtual resources, to which you can assign OpenStack users and groups. You can configure the quota of shared virtual resources in each project. You can create multiple projects with Red Hat OpenStack Platform that will not interfere with each other's permissions and resources. Users can be associated with more than one project. Each user must have a role assigned for each project to which they are assigned. 6.1. Creating a project Create a project, add members to the project and set resource limits for the project. Log in to the Dashboard as a user with administrative privileges. Select Identity > Projects . Click Create Project . On the Project Information tab, enter a name and description for the project. The Enabled check box is selected by default. On the Project Members tab, add members to the project from the All Users list. On the Quotas tab, specify resource limits for the project. Click Create Project . 6.2. Editing a project You can edit a project to change its name or description, enable or temporarily disable it, or update the members in the project. Log in to the Dashboard as a user with administrative privileges. Select Identity > Projects . In the project Actions column, click the arrow, and click Edit Project . In the Edit Project window, you can update a project to change its name or description, and enable or temporarily disable the project. On the Project Members tab, add members to the project, or remove them as needed. Click Save . Note The Enabled check box is selected by default. To temporarily disable the project, clear the Enabled check box. To enable a disabled project, select the Enabled check box. 6.3. Deleting a project Log in to the Dashboard as a user with administrative privileges. Select Identity > Projects . Select the project that you want to delete. Click Delete Projects . The Confirm Delete Projects window is displayed. Click Delete Projects to confirm the action. The project is deleted and any user pairing is disassociated. 6.4. Updating project quotas Quotas are operational limits that you set for each project to optimize cloud resources. You can set quotas to prevent project resources from being exhausted without notification. You can enforce quotas at both the project and the project-user level. Log in to the Dashboard as a user with administrative privileges. Select Identity > Projects . In the project Actions column, click the arrow, and click Modify Quotas . In the Quota tab, modify project quotas as needed. Click Save . Note At present, nested quotas are not yet supported. As such, you must manage quotas individually against projects and subprojects. 6.5. Changing the active project Set a project as the active project so that you can use the dashboard to interact with objects in the project. To set a project as the active project, you must be a member of the project. It is also necessary for the user to be a member of more than one project to have the Set as Active Project option be enabled. You cannot set a disabled project as active, unless it is re-enabled. Log in to the Dashboard as a user with administrative privileges. Select Identity > Projects . In the project Actions column, click the arrow, and click Set as Active Project . Alternatively, as a non-admin user, in the project Actions column, click Set as Active Project which becomes the default action in the column. 6.6. 
Project hierarchies You can nest projects using multitenancy in the Identity service (keystone). Multitenancy allows subprojects to inherit role assignments from a parent project. 6.6.1. Creating hierarchical projects and sub-projects You can implement Hierarchical Multitenancy (HMT) using keystone domains and projects. First create a new domain and then create a project within that domain. You can then add subprojects to that project. You can also promote a user to administrator of a subproject by adding the user to the admin role for that subproject. Note The HMT structure used by keystone is not currently represented in the dashboard. Procedure Create a new keystone domain called corp : Create the parent project ( private-cloud ) within the corp domain: Create a subproject ( dev ) within the private-cloud parent project, while also specifying the corp domain: Create another subproject called qa : Note You can use the Identity API to view the project hierarchy. For more information, see https://developer.openstack.org/api-ref/identity/v3/index.html?expanded=show-project-details-detail 6.6.2. Configuring access to hierarchical projects By default, a newly-created project has no assigned roles. When you assign role permissions to the parent project, you can include the --inherited flag to instruct the subprojects to inherit the assigned permissions from the parent project. For example, a user with admin role access to the parent project also has admin access to the subprojects. Granting access to users View the existing permissions assigned to a project: View the existing roles: Grant the user account user1 access to the private-cloud project: Re-run this command using the --inherited flag. As a result, user1 also has access to the private-cloud subprojects, which have inherited the role assignment: Review the result of the permissions update: The user1 user has inherited access to the qa and dev projects. In addition, because the --inherited flag was applied to the parent project, user1 also receives access to any subprojects that are created later. Removing access from users Explicit and inherited permissions must be separately removed. Remove a user from an explicitly assigned role: Review the result of the change. Notice that the inherited permissions are still present: Remove the inherited permissions: Review the result of the change. The inherited permissions have been removed, and the resulting output is now empty: 6.6.3. Reseller project overview With the Reseller project, the goal is to have a hierarchy of domains; these domains will eventually allow you to consider reselling portions of the cloud, with a subdomain representing a fully-enabled cloud. This work has been split into phases, with phase 1 described below: Phase 1 of reseller Reseller (phase 1) is an extension of Hierarchical Multitenancy (HMT), described here: Creating hierarchical projects and sub-projects . Previously, keystone domains were originally intended to be containers that stored users and projects, with their own table in the database back-end. As a result, domains are now no longer stored in their own table, and have been merged into the project table: A domain is now a type of project, distinguished by the is_domain flag. 
A domain represents a top-level project in the project hierarchy: domains are roots in the project hierarchy APIs have been updated to create and retrieve domains using the projects subpath: Create a new domain by creating a project with the is_domain flag set to true List projects that are domains: get projects including the is_domain query parameter. 6.7. Project security management Security groups are sets of IP filter rules that can be assigned to project instances, and which define networking access to the instance. Security groups are project specific; project members can edit the default rules for their security group and add new rule sets. All projects have a default security group that is applied to any instance that has no other defined security group. Unless you change the default values, this security group denies all incoming traffic and allows only outgoing traffic from your instance. You can apply a security group directly to an instance during instance creation, or to a port on the running instance. Note You cannot apply a role-based access control (RBAC)-shared security group directly to an instance during instance creation. To apply an RBAC-shared security group to an instance you must first create the port, apply the shared security group to that port, and then assign that port to the instance. See Adding a security group to a port . Do not delete the default security group without creating groups that allow required egress. For example, if your instances use DHCP and metadata, your instance requires security group rules that allow egress to the DHCP server and metadata agent. 6.7.1. Creating a security group Create a security group so that you can configure security rules. For example, you can enable ICMP traffic, or disable HTTP requests. Procedure In the dashboard, select Project > Compute > Access & Security . On the Security Groups tab, click Create Security Group . Enter a name and description for the group, and click Create Security Group . 6.7.2. Adding a security group rule By default, rules for a new group only provide outgoing access. You must add new rules to provide additional access. Procedure In the dashboard, select Project > Compute > Access & Security . On the Security Groups tab, click Manage Rules for the security group that you want to edit. Click Add Rule to add a new rule. Specify the rule values, and click Add . The following rule fields are required: Rule Rule type. If you specify a rule template (for example, 'SSH'), its fields are automatically filled in: TCP: Typically used to exchange data between systems, and for end-user communication. UDP: Typically used to exchange data between systems, particularly at the application level. ICMP: Typically used by network devices, such as routers, to send error or monitoring messages. Direction Ingress (inbound) or Egress (outbound). Open Port For TCP or UDP rules, the Port or Port Range (single port or range of ports) to open: For a range of ports, enter port values in the From Port and To Port fields. For a single port, enter the port value in the Port field. Type The type for ICMP rules; must be in the range '-1:255'. Code The code for ICMP rules; must be in the range '-1:255'. Remote The traffic source for this rule: CIDR (Classless Inter-Domain Routing): IP address block, which limits access to IPs within the block. Enter the CIDR in the Source field. Security Group: Source group that enables any instance in the group to access any other group instance. 6.7.3. 
Deleting a security group rule Delete security group rules that you no longer require. Procedure In the dashboard, select Project > Compute > Access & Security . On the Security Groups tab, click Manage Rules for the security group. Select the security group rule, and click Delete Rule . Click Delete Rule again. Note You cannot undo the delete action. 6.7.4. Deleting a security group Delete security groups that you no longer require. Procedure In the dashboard, select Project > Compute > Access & Security . On the Security Groups tab, select the group, and click Delete Security Groups . Click Delete Security Groups . Note You cannot undo the delete action.
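The procedures above use the dashboard. For reference, project creation and quota updates can also be performed with the openstack command-line client; the following is a minimal sketch, and the project name, description, and quota values are illustrative placeholders rather than values taken from this chapter.
# Create a project with a description (name and description are examples)
openstack project create --description "Example project" demo-project
# Adjust compute quotas for the project (values are examples)
openstack quota set --instances 10 --cores 20 --ram 51200 demo-project
# Review the effective quotas for the project
openstack quota show demo-project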
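As noted in the project security management section above, a shared security group must be applied to a port and the port then assigned to the instance. The following is a minimal command-line sketch of that workflow; the group, network, port, image, and flavor names are placeholders, not values defined in this guide.
# Create a security group and allow inbound SSH from an example range
openstack security group create example-sg
openstack security group rule create --protocol tcp --dst-port 22 --remote-ip 192.0.2.0/24 example-sg
# Create a port on the project network and attach the security group to it
openstack port create --network example-net --security-group example-sg example-port
# Boot the instance with that port so the security group applies to it
openstack server create --flavor m1.small --image example-image --port example-port example-instance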
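As mentioned in the project hierarchies section, you can use the Identity API to view the hierarchy. A minimal sketch of listing the projects that act as domains, using the is_domain query parameter, follows; the endpoint, port, and token variable are placeholders for your own environment.
# List projects that are domains, using the is_domain query parameter
curl -s -H "X-Auth-Token: $OS_TOKEN" \
     "https://keystone.example.com:5000/v3/projects?is_domain=true"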
[ "openstack domain create corp +-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | | | enabled | True | | id | 69436408fdcb44ab9e111691f8e9216d | | name | corp | +-------------+----------------------------------+", "openstack project create private-cloud --domain corp +-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | | | domain_id | 69436408fdcb44ab9e111691f8e9216d | | enabled | True | | id | c50d5cf4fe2e4929b98af5abdec3fd64 | | is_domain | False | | name | private-cloud | | parent_id | 69436408fdcb44ab9e111691f8e9216d | +-------------+----------------------------------+", "openstack project create dev --parent private-cloud --domain corp +-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | | | domain_id | 69436408fdcb44ab9e111691f8e9216d | | enabled | True | | id | 11fccd8369824baa9fc87cf01023fd87 | | is_domain | False | | name | dev | | parent_id | c50d5cf4fe2e4929b98af5abdec3fd64 | +-------------+----------------------------------+", "openstack project create qa --parent private-cloud --domain corp +-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | | | domain_id | 69436408fdcb44ab9e111691f8e9216d | | enabled | True | | id | b4f1d6f59ddf413fa040f062a0234871 | | is_domain | False | | name | qa | | parent_id | c50d5cf4fe2e4929b98af5abdec3fd64 | +-------------+----------------------------------+", "openstack role assignment list --project private-cloud", "openstack role list +----------------------------------+-----------------+ | ID | Name | +----------------------------------+-----------------+ | 01d92614cd224a589bdf3b171afc5488 | admin | | 034e4620ed3d45969dfe8992af001514 | member | | 0aa377a807df4149b0a8c69b9560b106 | ResellerAdmin | | 9369f2bf754443f199c6d6b96479b1fa | heat_stack_user | | cfea5760d9c948e7b362abc1d06e557f | reader | | d5cb454559e44b47aaa8821df4e11af1 | swiftoperator | | ef3d3f510a474d6c860b4098ad658a29 | service | +----------------------------------+-----------------+", "openstack role add --user user1 --user-domain corp --project private-cloud member", "openstack role add --user user1 --user-domain corp --project private-cloud member --inherited", "openstack role assignment list --effective --user user1 --user-domain corp +----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+ | Role | User | Group | Project | Domain | Inherited | +----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+ | 034e4620ed3d45969dfe8992af001514 | 10b5b34df21d485ca044433818d134be | | c50d5cf4fe2e4929b98af5abdec3fd64 | | False | | 034e4620ed3d45969dfe8992af001514 | 10b5b34df21d485ca044433818d134be | | 11fccd8369824baa9fc87cf01023fd87 | | True | | 034e4620ed3d45969dfe8992af001514 | 10b5b34df21d485ca044433818d134be | | b4f1d6f59ddf413fa040f062a0234871 | | True | +----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+", "openstack role remove --user user1 --project private-cloud member", "openstack role assignment list --effective --user user1 --user-domain corp 
+----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+ | Role | User | Group | Project | Domain | Inherited | +----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+ | 034e4620ed3d45969dfe8992af001514 | 10b5b34df21d485ca044433818d134be | | 11fccd8369824baa9fc87cf01023fd87 | | True | | 034e4620ed3d45969dfe8992af001514 | 10b5b34df21d485ca044433818d134be | | b4f1d6f59ddf413fa040f062a0234871 | | True | +----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+", "openstack role remove --user user1 --project private-cloud member --inherited", "openstack role assignment list --effective --user user1 --user-domain corp" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/managing_openstack_identity_resources/assembly_managing-projects
Director Installation and Usage
Director Installation and Usage Red Hat OpenStack Platform 17.0 An end-to-end scenario on using Red Hat OpenStack Platform director to create an OpenStack cloud OpenStack Documentation Team [email protected]
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/director_installation_and_usage/index
Chapter 1. Setting up and configuring a BIND DNS server
Chapter 1. Setting up and configuring a BIND DNS server BIND is a feature-rich DNS server that is fully compliant with the Internet Engineering Task Force (IETF) DNS standards and draft standards. For example, administrators frequently use BIND as: Caching DNS server in the local network Authoritative DNS server for zones Secondary server to provide high availability for zones 1.1. Considerations about protecting BIND with SELinux or running it in a change-root environment To secure a BIND installation, you can: Run the named service without a change-root environment. In this case, SELinux in enforcing mode prevents exploitation of known BIND security vulnerabilities. By default, Red Hat Enterprise Linux uses SELinux in enforcing mode. Important Running BIND on RHEL with SELinux in enforcing mode is more secure than running BIND in a change-root environment. Run the named-chroot service in a change-root environment. Using the change-root feature, administrators can define a root directory for a process and its sub-processes that is different from the / directory. When you start the named-chroot service, BIND switches its root directory to /var/named/chroot/ . The service then uses mount --bind commands to make the files and directories listed in /etc/named-chroot.files available in /var/named/chroot/ , and the process has no access to files outside of /var/named/chroot/ . If you decide to use BIND: In normal mode, use the named service. In a change-root environment, use the named-chroot service. This additionally requires that you install the bind-chroot package. Additional resources The Red Hat SELinux BIND security profile section in the named(8) man page on your system 1.2. The BIND Administrator Reference Manual The comprehensive BIND Administrator Reference Manual, which is included in the bind package, provides: Configuration examples Documentation on advanced features A configuration reference Security considerations To display the BIND Administrator Reference Manual on a host that has the bind package installed, open the /usr/share/doc/bind/Bv9ARM.html file in a browser. 1.3. Configuring BIND as a caching DNS server By default, the BIND DNS server resolves and caches successful and failed lookups. The service then answers subsequent requests for the same records from its cache, which significantly improves the speed of DNS lookups. Prerequisites The IP address of the server is static. Procedure Install the bind and bind-utils packages: These packages provide BIND 9.11. If you require BIND 9.16, install the bind9.16 and bind9.16-utils packages instead. If you want to run BIND in a change-root environment, install the bind-chroot package: Note that running BIND on a host with SELinux in enforcing mode, which is the default, is more secure. Edit the /etc/named.conf file, and make the following changes in the options statement: Update the listen-on and listen-on-v6 statements to specify on which IPv4 and IPv6 interfaces BIND should listen: Update the allow-query statement to configure from which IP addresses and ranges clients can query this DNS server: Add an allow-recursion statement to define from which IP addresses and ranges BIND accepts recursive queries: Warning Do not allow recursion on public IP addresses of the server. Otherwise, the server can become part of large-scale DNS amplification attacks. By default, BIND resolves queries by recursively querying from the root servers to an authoritative DNS server.
Alternatively, you can configure BIND to forward queries to other DNS servers, such as the ones of your provider. In this case, add a forwarders statement with the list of IP addresses of the DNS servers that BIND should forward queries to: As a fall-back behavior, BIND resolves queries recursively if the forwarder servers do not respond. To disable this behavior, add a forward only; statement. Verify the syntax of the /etc/named.conf file: If the command displays no output, the syntax is correct. Update the firewalld rules to allow incoming DNS traffic: Start and enable BIND: If you want to run BIND in a change-root environment, use the systemctl enable --now named-chroot command to enable and start the service. Verification Use the newly set up DNS server to resolve a domain: This example assumes that BIND runs on the same host and responds to queries on the localhost interface. After querying a record for the first time, BIND adds the entry to its cache. Repeat the query: Because of the cached entry, further requests for the same record are significantly faster until the entry expires. steps Configure the clients in your network to use this DNS server. If a DHCP server provides the DNS server setting to the clients, update the DHCP server's configuration accordingly. Additional resources Considerations about protecting BIND with SELinux or running it in a change-root environment named.conf(5) man page on your system /usr/share/doc/bind/sample/etc/named.conf The BIND Administrator Reference Manual 1.4. Configuring logging on a BIND DNS server The configuration in the default /etc/named.conf file, as provided by the bind package, uses the default_debug channel and logs messages to the /var/named/data/named.run file. The default_debug channel only logs entries when the server's debug level is non-zero. Using different channels and categories, you can configure BIND to write different events with a defined severity to separate files. Prerequisites BIND is already configured, for example, as a caching name server. The named or named-chroot service is running. Procedure Edit the /etc/named.conf file, and add category and channel phrases to the logging statement, for example: With this example configuration, BIND logs messages related to zone transfers to /var/named/log/transfer.log . BIND creates up to 10 versions of the log file and rotates them if they reach a maximum size of 50 MB. The category phrase defines to which channels BIND sends messages of a category. The channel phrase defines the destination of log messages including the number of versions, the maximum file size, and the severity level BIND should log to a channel. Additional settings, such as enabling logging the time stamp, category, and severity of an event are optional, but useful for debugging purposes. Create the log directory if it does not exist, and grant write permissions to the named user on this directory: Verify the syntax of the /etc/named.conf file: If the command displays no output, the syntax is correct. Restart BIND: If you run BIND in a change-root environment, use the systemctl restart named-chroot command to restart the service. Verification Display the content of the log file: Additional resources named.conf(5) man page on your system The BIND Administrator Reference Manual 1.5. Writing BIND ACLs Controlling access to certain features of BIND can prevent unauthorized access and attacks, such as denial of service (DoS). BIND access control list ( acl ) statements are lists of IP addresses and ranges. 
Each ACL has a nickname that you can use in several statements, such as allow-query , to refer to the specified IP addresses and ranges. Warning BIND uses only the first matching entry in an ACL. For example, if you define an ACL { 192.0.2/24; !192.0.2.1; } and the host with IP address 192.0.2.1 connects, access is granted even if the second entry excludes this address. BIND has the following built-in ACLs: none : Matches no hosts. any : Matches all hosts. localhost : Matches the loopback addresses 127.0.0.1 and ::1 , as well as the IP addresses of all interfaces on the server that runs BIND. localnets : Matches the loopback addresses 127.0.0.1 and ::1 , as well as all subnets the server that runs BIND is directly connected to. Prerequisites BIND is already configured, for example, as a caching name server. The named or named-chroot service is running. Procedure Edit the /etc/named.conf file and make the following changes: Add acl statements to the file. For example, to create an ACL named internal-networks for 127.0.0.1 , 192.0.2.0/24 , and 2001:db8:1::/64 , enter: Use the ACL's nickname in statements that support them, for example: Verify the syntax of the /etc/named.conf file: If the command displays no output, the syntax is correct. Reload BIND: If you run BIND in a change-root environment, use the systemctl reload named-chroot command to reload the service. Verification Execute an action that triggers a feature which uses the configured ACL. For example, the ACL in this procedure allows only recursive queries from the defined IP addresses. In this case, enter the following command on a host that is not within the ACL's definition to attempt resolving an external domain: If the command returns no output, BIND denied access, and the ACL works. For a verbose output on the client, use the command without +short option: Additional resources The Access control lists section in the The BIND Administrator Reference Manual . 1.6. Configuring zones on a BIND DNS server A DNS zone is a database with resource records for a specific sub-tree in the domain space. For example, if you are responsible for the example.com domain, you can set up a zone for it in BIND. As a result, clients can, resolve www.example.com to the IP address configured in this zone. 1.6.1. The SOA record in zone files The start of authority (SOA) record is a required record in a DNS zone. This record is important, for example, if multiple DNS servers are authoritative for a zone but also to DNS resolvers. A SOA record in BIND has the following syntax: For better readability, administrators typically split the record in zone files into multiple lines with comments that start with a semicolon ( ; ). Note that, if you split a SOA record, parentheses keep the record together: Important Note the trailing dot at the end of the fully-qualified domain names (FQDNs). FQDNs consist of multiple domain labels, separated by dots. Because the DNS root has an empty label, FQDNs end with a dot. Therefore, BIND appends the zone name to names without a trailing dot. A hostname without a trailing dot, for example, ns1.example.com would be expanded to ns1.example.com.example.com. , which is not the correct address of the primary name server. These are the fields in a SOA record: name : The name of the zone, the so-called origin . If you set this field to @ , BIND expands it to the zone name defined in /etc/named.conf . class : In SOA records, you must set this field always to Internet ( IN ). 
type : In SOA records, you must set this field always to SOA . mname (master name): The hostname of the primary name server of this zone. rname (responsible name): The email address of who is responsible for this zone. Note that the format is different. You must replace the at sign ( @ ) with a dot ( . ). serial : The version number of this zone file. Secondary name servers only update their copies of the zone if the serial number on the primary server is higher. The format can be any numeric value. A commonly-used format is <year><month><day><two-digit-number> . With this format, you can, theoretically, change the zone file up to a hundred times per day. refresh : The amount of time secondary servers should wait before checking the primary server if the zone was updated. retry : The amount of time after that a secondary server retries to query the primary server after a failed attempt. expire : The amount of time after that a secondary server stops querying the primary server, if all attempts failed. minimum : RFC 2308 changed the meaning of this field to the negative caching time. Compliant resolvers use it to determine how long to cache NXDOMAIN name errors. Note A numeric value in the refresh , retry , expire , and minimum fields define a time in seconds. However, for better readability, use time suffixes, such as m for minute, h for hours, and d for days. For example, 3h stands for 3 hours. Additional resources RFC 1035 : Domain names - implementation and specification RFC 1034 : Domain names - concepts and facilities RFC 2308 : Negative caching of DNS queries (DNS cache) 1.6.2. Setting up a forward zone on a BIND primary server Forward zones map names to IP addresses and other information. For example, if you are responsible for the domain example.com , you can set up a forward zone in BIND to resolve names, such as www.example.com . Prerequisites BIND is already configured, for example, as a caching name server. The named or named-chroot service is running. Procedure Add a zone definition to the /etc/named.conf file: These settings define: This server as the primary server ( type master ) for the example.com zone. The /var/named/example.com.zone file is the zone file. If you set a relative path, as in this example, this path is relative to the directory you set in directory in the options statement. Any host can query this zone. Alternatively, specify IP ranges or BIND access control list (ACL) nicknames to limit the access. No host can transfer the zone. Allow zone transfers only when you set up secondary servers and only for the IP addresses of the secondary servers. Verify the syntax of the /etc/named.conf file: If the command displays no output, the syntax is correct. Create the /var/named/example.com.zone file, for example, with the following content: This zone file: Sets the default time-to-live (TTL) value for resource records to 8 hours. Without a time suffix, such as h for hour, BIND interprets the value as seconds. Contains the required SOA resource record with details about the zone. Sets ns1.example.com as an authoritative DNS server for this zone. To be functional, a zone requires at least one name server ( NS ) record. However, to be compliant with RFC 1912, you require at least two name servers. Sets mail.example.com as the mail exchanger ( MX ) of the example.com domain. The numeric value in front of the host name is the priority of the record. Entries with a lower value have a higher priority. 
Sets the IPv4 and IPv6 addresses of www.example.com , mail.example.com , and ns1.example.com . Set secure permissions on the zone file that allow only the named group to read it: Verify the syntax of the /var/named/example.com.zone file: Reload BIND: If you run BIND in a change-root environment, use the systemctl reload named-chroot command to reload the service. Verification Query different records from the example.com zone, and verify that the output matches the records you have configured in the zone file: This example assumes that BIND runs on the same host and responds to queries on the localhost interface. Additional resources The SOA record in zone files Writing BIND ACLs The BIND Administrator Reference Manual RFC 1912 - Common DNS operational and configuration errors 1.6.3. Setting up a reverse zone on a BIND primary server Reverse zones map IP addresses to names. For example, if you are responsible for IP range 192.0.2.0/24 , you can set up a reverse zone in BIND to resolve IP addresses from this range to hostnames. Note If you create a reverse zone for whole classful networks, name the zone accordingly. For example, for the class C network 192.0.2.0/24 , the name of the zone is 2.0.192.in-addr.arpa . If you want to create a reverse zone for a different network size, for example 192.0.2.0/28 , the name of the zone is 28-2.0.192.in-addr.arpa . Prerequisites BIND is already configured, for example, as a caching name server. The named or named-chroot service is running. Procedure Add a zone definition to the /etc/named.conf file: These settings define: This server as the primary server ( type master ) for the 2.0.192.in-addr.arpa reverse zone. The /var/named/2.0.192.in-addr.arpa.zone file is the zone file. If you set a relative path, as in this example, this path is relative to the directory you set in directory in the options statement. Any host can query this zone. Alternatively, specify IP ranges or BIND access control list (ACL) nicknames to limit the access. No host can transfer the zone. Allow zone transfers only when you set up secondary servers and only for the IP addresses of the secondary servers. Verify the syntax of the /etc/named.conf file: If the command displays no output, the syntax is correct. Create the /var/named/2.0.192.in-addr.arpa.zone file, for example, with the following content: This zone file: Sets the default time-to-live (TTL) value for resource records to 8 hours. Without a time suffix, such as h for hour, BIND interprets the value as seconds. Contains the required SOA resource record with details about the zone. Sets ns1.example.com as an authoritative DNS server for this reverse zone. To be functional, a zone requires at least one name server ( NS ) record. However, to be compliant with RFC 1912, you require at least two name servers. Sets the pointer ( PTR ) record for the 192.0.2.1 and 192.0.2.30 addresses. Set secure permissions on the zone file that only allow the named group to read it: Verify the syntax of the /var/named/2.0.192.in-addr.arpa.zone file: Reload BIND: If you run BIND in a change-root environment, use the systemctl reload named-chroot command to reload the service. Verification Query different records from the reverse zone, and verify that the output matches the records you have configured in the zone file: This example assumes that BIND runs on the same host and responds to queries on the localhost interface. 
Additional resources The SOA record in zone files Writing BIND ACLs The BIND Administrator Reference Manual RFC 1912 - Common DNS operational and configuration errors 1.6.4. Updating a BIND zone file In certain situations, for example if an IP address of a server changes, you must update a zone file. If multiple DNS servers are responsible for a zone, perform this procedure only on the primary server. Other DNS servers that store a copy of the zone will receive the update through a zone transfer. Prerequisites The zone is configured. The named or named-chroot service is running. Procedure Optional: Identify the path to the zone file in the /etc/named.conf file: You find the path to the zone file in the file statement in the zone's definition. A relative path is relative to the directory set in directory in the options statement. Edit the zone file: Make the required changes. Increment the serial number in the start of authority (SOA) record. Important If the serial number is equal to or lower than the value, secondary servers will not update their copy of the zone. Verify the syntax of the zone file: Reload BIND: If you run BIND in a change-root environment, use the systemctl reload named-chroot command to reload the service. Verification Query the record you have added, modified, or removed, for example: This example assumes that BIND runs on the same host and responds to queries on the localhost interface. Additional resources The SOA record in zone files Setting up a forward zone on a BIND primary server Setting up a reverse zone on a BIND primary server 1.6.5. DNSSEC zone signing using the automated key generation and zone maintenance features You can sign zones with domain name system security extensions (DNSSEC) to ensure authentication and data integrity. Such zones contain additional resource records. Clients can use them to verify the authenticity of the zone information. If you enable the DNSSEC policy feature for a zone, BIND performs the following actions automatically: Creates the keys Signs the zone Maintains the zone, including re-signing and periodically replacing the keys. Important To enable external DNS servers to verify the authenticity of a zone, you must add the public key of the zone to the parent zone. Contact your domain provider or registry for further details on how to accomplish this. This procedure uses the built-in default DNSSEC policy in BIND. This policy uses single ECDSAP256SHA key signatures. Alternatively, create your own policy to use custom keys, algorithms, and timings. Prerequisites BIND 9.16 or later is installed. To meet this requirement, install the bind9.16 package instead of bind . The zone for which you want to enable DNSSEC is configured. The named or named-chroot service is running. The server synchronizes the time with a time server. An accurate system time is important for DNSSEC validation. Procedure Edit the /etc/named.conf file, and add dnssec-policy default; to the zone for which you want to enable DNSSEC: Reload BIND: If you run BIND in a change-root environment, use the systemctl reload named-chroot command to reload the service. BIND stores the public key in the /var/named/K <zone_name> .+ <algorithm> + <key_ID> .key file. Use this file to display the public key of the zone in the format that the parent zone requires: DS record format: DNSKEY format: Request to add the public key of the zone to the parent zone. Contact your domain provider or registry for further details on how to accomplish this. 
Verification Query your own DNS server for a record from the zone for which you enabled DNSSEC signing: This example assumes that BIND runs on the same host and responds to queries on the localhost interface. After the public key has been added to the parent zone and propagated to other servers, verify that the server sets the authenticated data ( ad ) flag on queries to the signed zone: Additional resources Setting up a forward zone on a BIND primary server Setting up a reverse zone on a BIND primary server 1.7. Configuring zone transfers among BIND DNS servers Zone transfers ensure that all DNS servers that have a copy of the zone use up-to-date data. Prerequisites On the future primary server, the zone for which you want to set up zone transfers is already configured. On the future secondary server, BIND is already configured, for example, as a caching name server. On both servers, the named or named-chroot service is running. Procedure On the existing primary server: Create a shared key, and append it to the /etc/named.conf file: This command displays the output of the tsig-keygen command and automatically appends it to /etc/named.conf . You will require the output of the command later on the secondary server as well. Edit the zone definition in the /etc/named.conf file: In the allow-transfer statement, define that servers must provide the key specified in the example-transfer-key statement to transfer a zone: Alternatively, use BIND access control list (ACL) nicknames in the allow-transfer statement. By default, after a zone has been updated, BIND notifies all name servers which have a name server ( NS ) record in this zone. If you do not plan to add an NS record for the secondary server to the zone, you can, configure that BIND notifies this server anyway. For that, add the also-notify statement with the IP addresses of this secondary server to the zone: Verify the syntax of the /etc/named.conf file: If the command displays no output, the syntax is correct. Reload BIND: If you run BIND in a change-root environment, use the systemctl reload named-chroot command to reload the service. On the future secondary server: Edit the /etc/named.conf file as follows: Add the same key definition as on the primary server: Add the zone definition to the /etc/named.conf file: These settings state: This server is a secondary server ( type slave ) for the example.com zone. The /var/named/slaves/example.com.zone file is the zone file. If you set a relative path, as in this example, this path is relative to the directory you set in directory in the options statement. To separate zone files for which this server is secondary from primary ones, you can store them, for example, in the /var/named/slaves/ directory. Any host can query this zone. Alternatively, specify IP ranges or ACL nicknames to limit the access. No host can transfer the zone from this server. The IP addresses of the primary server of this zone are 192.0.2.1 and 2001:db8:1::2 . Alternatively, you can specify ACL nicknames. This secondary server will use the key named example-transfer-key to authenticate to the primary server. Verify the syntax of the /etc/named.conf file: Reload BIND: If you run BIND in a change-root environment, use the systemctl reload named-chroot command to reload the service. Optional: Modify the zone file on the primary server and add an NS record for the new secondary server. 
Verification On the secondary server: Display the systemd journal entries of the named service: If you run BIND in a change-root environment, use the journalctl -u named-chroot command to display the journal entries. Verify that BIND created the zone file: Note that, by default, secondary servers store zone files in a binary raw format. Query a record of the transferred zone from the secondary server: This example assumes that the secondary server you set up in this procedure listens on IP address 192.0.2.2 . Additional resources Setting up a forward zone on a BIND primary server Setting up a reverse zone on a BIND primary server Writing BIND ACLs Updating a BIND zone file 1.8. Configuring response policy zones in BIND to override DNS records Using DNS blocking and filtering, administrators can rewrite a DNS response to block access to certain domains or hosts. In BIND, response policy zones (RPZs) provide this feature. You can configure different actions for blocked entries, such as returning an NXDOMAIN error or not responding to the query. If you have multiple DNS servers in your environment, use this procedure to configure the RPZ on the primary server, and later configure zone transfers to make the RPZ available on your secondary servers. Prerequisites BIND is already configured, for example, as a caching name server. The named or named-chroot service is running. Procedure Edit the /etc/named.conf file, and make the following changes: Add a response-policy definition to the options statement: You can set a custom name for the RPZ in the zone statement in response-policy . However, you must use the same name in the zone definition in the step. Add a zone definition for the RPZ you set in the step: These settings state: This server is the primary server ( type master ) for the RPZ named rpz.local . The /var/named/rpz.local file is the zone file. If you set a relative path, as in this example, this path is relative to the directory you set in directory in the options statement. Any hosts defined in allow-query can query this RPZ. Alternatively, specify IP ranges or BIND access control list (ACL) nicknames to limit the access. No host can transfer the zone. Allow zone transfers only when you set up secondary servers and only for the IP addresses of the secondary servers. Verify the syntax of the /etc/named.conf file: If the command displays no output, the syntax is correct. Create the /var/named/rpz.local file, for example, with the following content: This zone file: Sets the default time-to-live (TTL) value for resource records to 10 minutes. Without a time suffix, such as h for hour, BIND interprets the value as seconds. Contains the required start of authority (SOA) resource record with details about the zone. Sets ns1.example.com as an authoritative DNS server for this zone. To be functional, a zone requires at least one name server ( NS ) record. However, to be compliant with RFC 1912, you require at least two name servers. Return an NXDOMAIN error for queries to example.org and hosts in this domain. Drop queries to example.net and hosts in this domain. For a full list of actions and examples, see IETF draft: DNS Response Policy Zones (RPZ) . Verify the syntax of the /var/named/rpz.local file: Reload BIND: If you run BIND in a change-root environment, use the systemctl reload named-chroot command to reload the service. 
Verification Attempt to resolve a host in example.org , which is configured in the RPZ to return an NXDOMAIN error: This example assumes that BIND runs on the same host and responds to queries on the localhost interface. Attempt to resolve a host in the example.net domain, which is configured in the RPZ to drop queries: Additional resources IETF draft: DNS Response Policy Zones (RPZ) 1.9. BIND migration from RHEL 7 to RHEL 8 To migrate BIND from RHEL 7 to RHEL 8, you must adjust the BIND configuration in the following ways: Remove the dnssec-lookaside auto configuration option. BIND now listens on any configured IPv6 address by default because the default value of the listen-on-v6 configuration option has been changed from none to any . Multiple zones cannot share the same zone file when updates to the zone are allowed. If you need to use the same file in multiple zone definitions, ensure that allow-update is set to none , do not use a non-empty update-policy , and do not enable inline-signing ; otherwise, use an in-view clause to share the zone. Updated command-line options, default behavior, and output formats: The number of UDP listeners employed per interface has been changed to be a function of the number of processors. You can override this value by using the -U argument of named . The XML format used in the statistics-channel has been changed. The rndc flushtree option now flushes DNSSEC validation failures as well as specific name records. You must use the /etc/named.root.key file instead of the /etc/named.iscdlv.key file. The /etc/named.iscdlv.key file is no longer available. The querylog format has been changed to include the memory address of the client object, which can be helpful for debugging. The named and dig utilities now send a DNS COOKIE (RFC 7873) by default, which might be blocked by a restrictive firewall or intrusion detection system. You can change this behavior by using the send-cookie configuration option. The dig utility can display Extended DNS Errors (EDE, RFC 8914) in text format. 1.10. Recording DNS queries by using dnstap As a network administrator, you can record Domain Name System (DNS) details to analyze DNS traffic patterns, monitor DNS server performance, and troubleshoot DNS issues. If you want an advanced way to monitor and log details of incoming name queries, use the dnstap interface, which records messages sent by the named service. You can capture and record DNS queries to collect information about websites or IP addresses. Prerequisites The bind-9.11.26-2 package or a later version is installed. Warning If you already have a BIND version installed and running, adding a new version of BIND will overwrite the existing version. Procedure Enable dnstap and set the target file by editing the options block in the /etc/named.conf file: To specify which types of DNS traffic you want to log, add dnstap filters to the dnstap block in the /etc/named.conf file. You can use the following filters: auth - Authoritative zone response or answer. client - Internal client query or answer. forwarder - Forwarded query or the response from it. resolver - Iterative resolution query or response. update - Dynamic zone update requests. all - Any of the above options. query or response - If you do not specify a query or a response keyword, dnstap records both. Note The dnstap filter contains multiple definitions delimited by a ; in the dnstap {} block with the following syntax: dnstap { ( all | auth | client | forwarder | resolver | update ) [ ( query | response ) ]; ...
}; To apply your changes, restart the named service: Configure a periodic rollout for active logs In the following example, the cron scheduler runs the content of the user-edited script once a day. The roll option with the value 3 specifies that dnstap can create up to three backup log files. The value 3 overrides the version parameter of the dnstap-output variable, and limits the number of backup log files to three. Additionally, the binary log file is moved to another directory and renamed, and it never reaches the .2 suffix, even if three backup log files already exist. You can skip this step if automatic rolling of binary logs based on size limit is sufficient. Handle and analyze logs in a human-readable format by using the dnstap-read utility: In the following example, the dnstap-read utility prints the output in the YAML file format.
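Because BIND uses only the first matching entry in an ACL, as the warning in the ACL section above explains, place negated entries before the broader ranges they are meant to exclude. A short named.conf sketch using the same example addresses:
# Wrong: 192.0.2.1 matches the /24 entry first, so the exclusion never applies
acl broken-acl { 192.0.2.0/24; !192.0.2.1; };
# Correct: list the exclusion first so it matches before the broader range
acl fixed-acl { !192.0.2.1; 192.0.2.0/24; };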
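Several procedures in this chapter follow the same pattern: verify the configuration, verify the zone file, and then reload BIND. As an optional convenience, and assuming the example.com zone file path used above, the checks can be chained so that the service is reloaded only when both succeed:
# Reload BIND only if the configuration and the zone file pass the syntax checks
named-checkconf \
  && named-checkzone example.com /var/named/example.com.zone \
  && systemctl reload named
# In a change-root environment, use 'systemctl reload named-chroot' instead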
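To confirm that the transfer key from the zone transfer section works before configuring the secondary server, you can request a full zone transfer with dig and the same TSIG key. This sketch reuses the example key name and secret generated earlier in this chapter; substitute your own values.
# Request an authenticated AXFR from the primary server using the TSIG key
dig @192.0.2.1 example.com AXFR -y hmac-sha256:example-transfer-key:q7ANbnyliDMuvWgnKOxMLi313JGcTZB5ydMW5CyUGXQ=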
[ "yum install bind bind-utils", "yum install bind-chroot", "listen-on port 53 { 127.0.0.1; 192.0.2.1; }; listen-on-v6 port 53 { ::1; 2001:db8:1::1; };", "allow-query { localhost; 192.0.2.0/24; 2001:db8:1::/64; };", "allow-recursion { localhost; 192.0.2.0/24; 2001:db8:1::/64; };", "forwarders { 198.51.100.1; 203.0.113.5; };", "named-checkconf", "firewall-cmd --permanent --add-service=dns firewall-cmd --reload", "systemctl enable --now named", "dig @ localhost www.example.org www.example.org. 86400 IN A 198.51.100.34 ;; Query time: 917 msec", "dig @ localhost www.example.org www.example.org. 85332 IN A 198.51.100.34 ;; Query time: 1 msec", "logging { category notify { zone_transfer_log; }; category xfer-in { zone_transfer_log; }; category xfer-out { zone_transfer_log; }; channel zone_transfer_log { file \" /var/named/log/transfer.log \" versions 10 size 50m ; print-time yes; print-category yes; print-severity yes; severity info; }; };", "mkdir /var/named/log/ chown named:named /var/named/log/ chmod 700 /var/named/log/", "named-checkconf", "systemctl restart named", "cat /var/named/log/transfer.log 06-Jul-2022 15:08:51.261 xfer-out: info: client @0x7fecbc0b0700 192.0.2.2#36121/key example-transfer-key (example.com): transfer of 'example.com/IN': AXFR started: TSIG example-transfer-key (serial 2022070603) 06-Jul-2022 15:08:51.261 xfer-out: info: client @0x7fecbc0b0700 192.0.2.2#36121/key example-transfer-key (example.com): transfer of 'example.com/IN': AXFR ended", "acl internal-networks { 127.0.0.1; 192.0.2.0/24; 2001:db8:1::/64; }; acl dmz-networks { 198.51.100.0/24; 2001:db8:2::/64; };", "allow-query { internal-networks; dmz-networks; }; allow-recursion { internal-networks; };", "named-checkconf", "systemctl reload named", "dig +short @ 192.0.2.1 www.example.com", "dig @ 192.0.2.1 www.example.com ;; WARNING: recursion requested but not available", "name class type mname rname serial refresh retry expire minimum", "@ IN SOA ns1.example.com. hostmaster.example.com. ( 2022070601 ; serial number 1d ; refresh period 3h ; retry period 3d ; expire time 3h ) ; minimum TTL", "zone \" example.com \" { type master; file \" example.com.zone \"; allow-query { any; }; allow-transfer { none; }; };", "named-checkconf", "USDTTL 8h @ IN SOA ns1.example.com. hostmaster.example.com. ( 2022070601 ; serial number 1d ; refresh period 3h ; retry period 3d ; expire time 3h ) ; minimum TTL IN NS ns1.example.com. IN MX 10 mail.example.com. www IN A 192.0.2.30 www IN AAAA 2001:db8:1::30 ns1 IN A 192.0.2.1 ns1 IN AAAA 2001:db8:1::1 mail IN A 192.0.2.20 mail IN AAAA 2001:db8:1::20", "chown root:named /var/named/ example.com.zone chmod 640 /var/named/ example.com.zone", "named-checkzone example.com /var/named/example.com.zone zone example.com/IN : loaded serial 2022070601 OK", "systemctl reload named", "dig +short @ localhost AAAA www.example.com 2001:db8:1::30 dig +short @ localhost NS example.com ns1.example.com. dig +short @ localhost A ns1.example.com 192.0.2.1", "zone \" 2.0.192.in-addr.arpa \" { type master; file \" 2.0.192.in-addr.arpa.zone \"; allow-query { any; }; allow-transfer { none; }; };", "named-checkconf", "USDTTL 8h @ IN SOA ns1.example.com. hostmaster.example.com. ( 2022070601 ; serial number 1d ; refresh period 3h ; retry period 3d ; expire time 3h ) ; minimum TTL IN NS ns1.example.com. 1 IN PTR ns1.example.com. 
30 IN PTR www.example.com.", "chown root:named /var/named/ 2.0.192.in-addr.arpa.zone chmod 640 /var/named/ 2.0.192.in-addr.arpa.zone", "named-checkzone 2.0.192.in-addr.arpa /var/named/2.0.192.in-addr.arpa.zone zone 2.0.192.in-addr.arpa/IN : loaded serial 2022070601 OK", "systemctl reload named", "dig +short @ localhost -x 192.0.2.1 ns1.example.com. dig +short @ localhost -x 192.0.2.30 www.example.com.", "options { directory \" /var/named \"; } zone \" example.com \" { file \" example.com.zone \"; };", "named-checkzone example.com /var/named/example.com.zone zone example.com/IN : loaded serial 2022062802 OK", "systemctl reload named", "dig +short @ localhost A ns2.example.com 192.0.2.2", "zone \" example.com \" { dnssec-policy default; };", "systemctl reload named", "dnssec-dsfromkey /var/named/K example.com.+013+61141 .key example.com. IN DS 61141 13 2 3E184188CF6D2521EDFDC3F07CFEE8D0195AACBD85E68BAE0620F638B4B1B027", "grep DNSKEY /var/named/K example.com.+013+61141.key example.com. 3600 IN DNSKEY 257 3 13 sjzT3jNEp120aSO4mPEHHSkReHUf7AABNnT8hNRTzD5cKMQSjDJin2I3 5CaKVcWO1pm+HltxUEt+X9dfp8OZkg==", "dig +dnssec +short @ localhost A www.example.com 192.0.2.30 A 13 3 28800 20220718081258 20220705120353 61141 example.com. e7Cfh6GuOBMAWsgsHSVTPh+JJSOI/Y6zctzIuqIU1JqEgOOAfL/Qz474 M0sgi54m1Kmnr2ANBKJN9uvOs5eXYw==", "dig @ localhost example.com +dnssec ;; flags: qr rd ra ad ; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1", "tsig-keygen example-transfer-key | tee -a /etc/named.conf key \" example-transfer-key \" { algorithm hmac-sha256; secret \" q7ANbnyliDMuvWgnKOxMLi313JGcTZB5ydMW5CyUGXQ= \"; };", "zone \" example.com \" { allow-transfer { key example-transfer-key; }; };", "zone \" example.com \" { also-notify { 192.0.2.2; 2001:db8:1::2; }; };", "named-checkconf", "systemctl reload named", "key \" example-transfer-key \" { algorithm hmac-sha256; secret \" q7ANbnyliDMuvWgnKOxMLi313JGcTZB5ydMW5CyUGXQ= \"; };", "zone \" example.com \" { type slave; file \" slaves/example.com.zone \"; allow-query { any; }; allow-transfer { none; }; masters { 192.0.2.1 key example-transfer-key; 2001:db8:1::1 key example-transfer-key; }; };", "named-checkconf", "systemctl reload named", "journalctl -u named Jul 06 15:08:51 ns2.example.com named[2024]: zone example.com/IN: Transfer started. Jul 06 15:08:51 ns2.example.com named[2024]: transfer of 'example.com/IN' from 192.0.2.1#53: connected using 192.0.2.2#45803 Jul 06 15:08:51 ns2.example.com named[2024]: zone example.com/IN: transferred serial 2022070101 Jul 06 15:08:51 ns2.example.com named[2024]: transfer of 'example.com/IN' from 192.0.2.1#53: Transfer status: success Jul 06 15:08:51 ns2.example.com named[2024]: transfer of 'example.com/IN' from 192.0.2.1#53: Transfer completed: 1 messages, 29 records, 2002 bytes, 0.003 secs (667333 bytes/sec)", "ls -l /var/named/slaves/ total 4 -rw-r--r--. 1 named named 2736 Jul 6 15:08 example.com.zone", "dig +short @ 192.0.2.2 AAAA www.example.com 2001:db8:1::30", "options { response-policy { zone \" rpz.local \"; }; }", "zone \"rpz.local\" { type master; file \"rpz.local\"; allow-query { localhost; 192.0.2.0/24; 2001:db8:1::/64; }; allow-transfer { none; }; };", "named-checkconf", "USDTTL 10m @ IN SOA ns1.example.com. hostmaster.example.com. ( 2022070601 ; serial number 1h ; refresh period 1m ; retry period 3d ; expire time 1m ) ; minimum TTL IN NS ns1.example.com. example.org IN CNAME . *.example.org IN CNAME . example.net IN CNAME rpz-drop. 
*.example.net IN CNAME rpz-drop.", "named-checkzone rpz.local /var/named/rpz.local zone rpz.local/IN : loaded serial 2022070601 OK", "systemctl reload named", "dig @localhost www.example.org ;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN , id: 30286", "dig @localhost www.example.net ;; connection timed out; no servers could be reached", "options { dnstap { all; }; # Configure filter dnstap-output file \"/var/named/data/dnstap.bin\"; }; end of options", "systemctl restart named.service", "Example: sudoedit /etc/cron.daily/dnstap #!/bin/sh rndc dnstap -roll 3 mv /var/named/data/dnstap.bin.1 /var/log/named/dnstap/dnstap-USD(date -I).bin use dnstap-read to analyze saved logs sudo chmod a+x /etc/cron.daily/dnstap", "Example: dnstap-read -y [file-name]" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_networking_infrastructure_services/assembly_setting-up-and-configuring-a-bind-dns-server_networking-infrastructure-services
4.211. openswan
4.211. openswan 4.211.1. RHBA-2011:1761 - openswan bug fix and enhancement update An updated openswan package that fixes several bugs and adds one enhancement is now available for Red Hat Enterprise Linux 6. Openswan is a free implementation of IPsec (Internet Protocol Security) and IKE (Internet Key Exchange) for Linux. The openswan package contains the daemons and user space tools for setting up Openswan. It supports the NETKEY/XFRM IPsec kernel stack that exists in the default Linux kernel. Openswan 2.6.x also supports IKEv2 (RFC4306). Bug Fixes BZ# 703473 Openswan did not handle protocol and port configuration correctly if the ports were defined and the host was defined with its hostname instead of its IP address. This update solves this issue, and Openswan now correctly sets up policies with the correct protocol and port under such circumstances. BZ# 703985 Prior to this update, very large security label strings received from a peer were being truncated. The truncated string was then still used. However, this truncated string could turn out to be a valid string, leading to an incorrect policy. Additionally, erroneous queuing of on-demand requests of setting up an IPsec connection was discovered in the IKEv2 (Internet Key Exchange) code. Although not harmful, it was not the intended design. This update fixes both of these bugs and Openswan now handles the IKE setup correctly. BZ# 704548 Previously, Openswan failed to set up AH (Authentication Header) mode security associations (SAs). This was because Openswan was erroneously processing the AH mode as if it was the ESP (Encrypted Secure Payload) mode and was expecting an encryption key. This update fixes this bug and it is now possible to set up AH mode SAs properly. BZ# 711975 IPsec connections over a loopback interface did not work properly when a specific port was configured. This was because incomplete IPsec policies were being set up, leading to connection failures. This update fixes this bug and complete policies are now established correctly. BZ# 737975 Openswan failed to support retrieving Certificate Revocation Lists (CRLs) from HTTP or LDAP CRL Distribution Points (CDPs) because the flags for enabling CRL functionality were disabled on compilation. With this update, the flags have been enabled and the CRL functionality is available as expected. BZ# 737976 Openswan failed to discover some certificates. This happened because the README.x509 file contained incorrect information on the directories to be scanned for certification files and some directories failed to be scanned. With this update, the file has been modified to provide accurate information. BZ# 738385 The Network Manager padlock icon was not cleared after a VPN connection terminated unexpectedly. This update fixes the bug and the padlock icon is cleared when a VPN connection is terminated as expected. BZ# 742632 Openswan sent wrong IKEv2 (Internet Key Exchange) ICMP (Internet Control Message Protocol) selectors to an IPsec destination. This happened due to an incorrect conversion of the host to network byte order. This update fixes this bug and Openswan now sends correct ICMP selectors. BZ# 749605 The Pluto daemon terminated unexpectedly with a segmentation fault after an IP address had been removed from one end of an established IPsec tunnel. This occurred if the other end of the tunnel attempted to reuse the particular IP address to create a new tunnel as the tunnel failed to close properly. 
With this update, such tunnel is closed properly and the problem no longer occurs. Enhancement BZ# 737973 On run, the "ipsec barf" and "ipsec verify" commands load new kernel modules, which influences the system configuration. This update adds the "iptable-save" command, which uses only iptables and does not load kernel modules. Users are advised to upgrade to this updated openswan package, which fixes these bugs and adds the enhancement. 4.211.2. RHBA-2012:0339 - openswan bug fix update An updated openswan package that fixes various bugs is now available for Red Hat Enterprise Linux 6. Openswan is a free implementation of Internet Protocol Security (IPsec) and Internet Key Exchange (IKE). IPsec uses strong cryptography to provide both authentication and encryption services. These services allow you to build secure tunnels through untrusted networks. Openswan supports the NETKEY/XFRM IPsec kernel stack that exists in the default Linux kernel. Openswan 2.6.x also supports IKEv2 (RFC4306). Bug Fixes BZ# 786434 The Openswan IKEv2 implementation did not correctly process an IKE_SA_INIT message containing an INVALID_KE_PAYLOAD Notify Payload. With this fix, Openswan now sends the INVALID_KE_PAYLOAD notify message back to the peer so that IKE_SA_INIT can restart with the correct KE payload. BZ# 786435 Previously, Openswan sometimes generated a KE payload that was 1 byte shorter than specified by the Diffie-Hellman algorithm. Consequently, IKE renegotiation failed at random intervals. An error message in the following format was logged: This update checks the length of the generated key and if it is shorter than required, leading zero bytes are added. All users of openswan are advised to upgrade to this updated package, which fixes these bugs. Note that the NSS library package needs to be version 3.13 or later for the KE payload and IKE renegotiation issues to be fully resolved. 4.211.3. RHBA-2012:0541 - openswan bug fix update Updated openswan packages that fix multiple bugs are now available for Red Hat Enterprise Linux 6. Openswan is a free implementation of IPsec (Internet Protocol Security) and IKE (Internet Key Exchange) for Linux. The openswan package contains the daemons and user space tools for setting up Openswan. It supports the NETKEY/XFRM IPsec kernel stack that exists in the default Linux kernel. Openswan 2.6 and later also supports IKEv2 (Internet Key Exchange Protocol Version 2), which is defined in RFC5996. Bug Fixes BZ# 813192 Openswan incorrectly processed traffic selector messages proposed by the responder (the endpoint responding to an initiated exchange) if the traffic selectors were confined to a subset of the initially proposed traffic selectors. As consequence, Openswan set up CHILD security associations (SAs) incorrectly. With this update, Openswan initiates a new connection for the reduced set of traffic selectors, and sets up IKE CHILD SAs accordingly. BZ# 813194 Openswan incorrectly processed traffic selector messages proposed by the initiator (the endpoint which started an exchange) if the traffic selectors were confined to a subset of the initially proposed traffic selectors. As a consequence, Openswan set up CHILD SAs incorrectly. With this update, Openswan initiates a new connection for the reduced set of traffic selectors, and sets up IKE CHILD SAs accordingly. 
BZ# 813355 When processing an IKE_AUTH exchange and the RESERVED field of the IKE_AUTH request or response messages was modified, Openswan did not ignore the field as expected according to the IKEv2 RFC5996 specification. Consequently, the IKE_AUTH messages were processed as erroneous messages by Openswan and the IKE_AUTH exchange failed. With this update, Openswan has been modified to ignore reserved fields as expected and IKE_AUTH exchanges succeed in this scenario. BZ# 813356 When processing an IKE_SA_INIT exchange and the RESERVED field of the IKE_SA_INIT request or response messages was modified, Openswan did not ignore the field as expected according to the IKEv2 RFC5996 specification. Consequently, IKE_SA_INIT messages with reserved fields set were processed as erroneous messages by Openswan and the IKE_SA_INIT exchange failed. With this update, Openswan has been modified to ignore reserved fields as expected and IKE_SA_INIT exchanges succeed in this scenario. BZ# 813357 Previously, Openswan did not behave in accordance with the IKEv2 RFC5996 specification and ignored IKE_AUTH messages that contained an unrecognized Notify payload. This resulted in IKE SAs being set up successfully. With this update, Openswan processes any unrecognized Notify payload as an error and IKE SA setup fails as expected. BZ# 813360 When processing an INFORMATIONAL exchange, the responder previously did not send an INFORMATIONAL response message as expected in reaction to the INFORMATIONAL request message sent by the initiator. As a consequence, the INFORMATIONAL exchange failed. This update corrects Openswan so that the responder now sends an INFORMATIONAL response message after every INFORMATIONAL request message received, and the INFORMATIONAL exchange succeeds as expected in this scenario. BZ# 813362 When processing an INFORMATIONAL exchange with a Delete payload, the responder previously did not send an INFORMATIONAL response message as expected in reaction to the INFORMATIONAL request message sent by the initiator. As a consequence, the INFORMATIONAL exchange failed and the initiator did not delete IKE SAs. This updates corrects Openswan so that the responder now sends an INFORMATIONAL response message and the initiator deletes IKE SAs as expected in this scenario. BZ# 813364 When the responder received an INFORMATIONAL request with a Delete payload for a CHILD SA, Openswan did not process the request correctly and did not send the INFORMATIONAL response message to the initiator as expected according to the RFC5996 specification. Consequently, the responder was not aware of the request and only the initiator's CHILD SA was deleted. With this update, Openswan sends the response message as expected and the CHILD SA is deleted properly on both endpoints. BZ# 813366 Previously, Openswan did not respond to INFORMATIONAL requests with no payloads that are used for dead-peer detection. Consequently, the initiator considered the responder to be a dead peer and deleted the respective IKE SAs. This update modifies Openswan so that an empty INFORMATIONAL response message is now sent to the initiator as expected, and the initiator no longer incorrectly deletes IKE SAs in this scenario. BZ# 813372 When processing an INFORMATIONAL exchange and the RESERVED field of the INFORMATIONAL request or response messages was modified, Openswan did not ignore the field as expected according to the IKEv2 RFC5996 specification. 
Consequently, the INFORMATIONAL messages were processed as erroneous by Openswan, and the INFORMATIONAL exchange failed. With this update, Openswan has been modified to ignore reserved fields as expected and INFORMATIONAL exchanges succeed in this scenario. BZ# 813378 When the initiator received an INFORMATIONAL request with a Delete payload for an IKE SA, Openswan did not process the request correctly and did not send the INFORMATIONAL response message to the responder as expected according to the RFC5996 specification. Consequently, the initiator was not aware of the request and only the responder's IKE SA was deleted. With this update, Openswan sends the response message as expected and the IKE SA is deleted properly on both endpoints. BZ# 813379 IKEv2 requires each IKE message to have a sequence number for matching a request and response when re-transmitting the message during the IKE exchange. Previously, Openswan incremented sequence numbers incorrectly so that IKE messages were processed in the wrong order. As a consequence, any messages sent by the responder were not processed correctly and any subsequent exchange failed. This update modifies Openswan to increment sequence numbers in accordance with the RFC5996 specification so that IKE messages are matched correctly and exchanges succeed as expected in this scenario. BZ# 813565 Openswan did not ignore the minor version number of the IKE_SA_INIT request messages as required by the RFC5996 specification. Consequently, if the minor version number of the request was higher than the minor version number of the IKE protocol used by the receiving peer, Openswan processed the IKE_SA_INIT messages as erroneous and the IKE_SA_INIT exchange failed. With this update, Openswan has been modified to ignore the Minor Version fields of the IKE_SA_INIT requests as expected and the IKE_SA_INIT exchange succeeds in this scenario. BZ# 814600 Older versions of kernel required the output length of the HMAC hash function to be truncated to 96 bits therefore Openswan previously worked with 96-bit truncation length when using the HMAC-SHA2-256 algorithm. However, newer kernels require the 128-bit HMAC truncation length, which is as per the RFC4868 specification. Consequently, this difference could cause incompatible SAs to be set on IKE endpoints due to one endpoint using 96-bit and the other 128-bit output length of the hash function. This update modifies the underlying code so that Openswan now complies with RFC4868 and adds support for the new kernel configuration parameter, sha2_truncbug. If the "sha2_truncbug" parameter is set to "yes", Openswan now passes the correct key length to the kernel, which ensures interoperability between older and newer kernels. All users of openswan are advised to upgrade to these updated packages, which fix these bugs. 4.211.4. RHBA-2013:1160 - openswan bug fix update Updated openswan packages that fix one bug are now available for Red Hat Enterprise Linux 6 Extended Update Support. Openswan is a free implementation of Internet Protocol Security (IPsec) and Internet Key Exchange (IKE). IPsec uses strong cryptography to provide both authentication and encryption services. These services allow you to build secure tunnels through untrusted networks. Bug Fix BZ# 983451 The openswan package for Internet Protocol Security (IPsec) contains two diagnostic commands, "ipsec barf" and "ipsec look", that can cause the iptables kernel modules for NAT and IP connection tracking to be loaded. 
On very busy systems, loading such kernel modules can result in severely degraded performance or lead to a crash when the kernel runs out of resources. With this update, the diagnostic commands do not cause loading of the NAT and IP connection tracking modules. This update does not affect systems that already use IP connection tracking or NAT as the iptables and ip6tables services will already have loaded these kernel modules. Users of openswan are advised to upgrade to these updated packages, which fix this bug.
[ "next payload type of ISAKMP Identification Payload has an unknown value" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/openswan
19.4. Known Issues
19.4. Known Issues SELinux MLS policy is not supported with kernel version 4.14 SELinux Multi-Level Security (MLS) Policy denies unknown classes and permissions, and kernel version 4.14 in the kernel-alt packages recognizes the map permission, which is not defined in any policy. Consequently, every command on a system with active MLS policy and SELinux in enforcing mode terminates with the Segmentation fault error. Many SELinux denial warnings occur on systems with active MLS policy and SELinux in permissive mode. The combination of SELinux MLS policy with kernel version 4.14 is not supported. ipmitool communicates with BMC only when IPMI_SI=no on Cavium ThunderX When starting ipmi.service with the systemctl command, the default configuration attempts to load the ipmi_si driver. On Cavium ThunderX systems, which do not have an IPMI SI device, ipmi.service incorrectly removes the ipmi_devintf driver. Consequently, the ipmitool utility is not able to communicate with the Baseboard Management Controller (BMC). To work around this problem, edit the /etc/sysconfig/ipmi file and set the IPMI_SI variable as follows (see the sketch after this section): Then reboot the operating system if necessary. As a result, the correct drivers are loaded and ipmitool can communicate with the BMC through the /dev/ipmi0 device. (BZ#1448181) Putting SATA ALPM devices into low power mode does not work correctly When using the following commands to enable and disable low power mode for Serial Advanced Technology Attachment (SATA) devices using the Aggressive Link Power Management (ALPM) power management protocol on the 64-bit ARM systems, SATA does not work correctly: Consequently, SATA failures stop all disk I/O, and users have to reboot the operating system to fix it. To work around this problem, use one of the following options: Do not put the system into the powersave profile Check with your hardware vendor for firmware updates that might fix the bug with ALPM (BZ#1430391) Setting tuned to network-latency causes system hang on ARM If the tuned profile is set to network-latency on the 64-bit ARM systems, the operating system becomes unresponsive, and the kernel prints a backtrace to the serial console. To work around this problem, do not set the tuned profile to network-latency . (BZ#1503121) modprobe succeeds to load kernel modules with incorrect parameters When attempting to load a kernel module with an incorrect parameter using the modprobe command, the incorrect parameter is ignored, and the module loads as expected on Red Hat Enterprise Linux 7 for ARM and for IBM Power LE (POWER9). Note that this is a different behavior compared to Red Hat Enterprise Linux for traditional architectures, such as AMD64 and Intel 64, IBM Z and IBM Power Systems. On these systems, modprobe exits with an error, and the module with an incorrect parameter does not load in the described situation. On all architectures, an error message is recorded in the dmesg output. (BZ#1449439)
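The following is a minimal, untested sketch of the ipmitool workaround for Cavium ThunderX systems described above. It assumes a systemd-managed ipmi.service and that editing /etc/sysconfig/ipmi as root is sufficient; on some systems a full reboot may still be required, as noted above, and the ipmitool mc info check is only an illustrative way to confirm BMC connectivity.
# Set IPMI_SI=no so that ipmi.service stops trying to load the ipmi_si driver (append the line if it is missing).
grep -q '^IPMI_SI=' /etc/sysconfig/ipmi && sed -i 's/^IPMI_SI=.*/IPMI_SI=no/' /etc/sysconfig/ipmi || echo 'IPMI_SI=no' >> /etc/sysconfig/ipmi
# Restart the service, or reboot the operating system, so that the correct drivers are loaded.
systemctl restart ipmi.service
# Confirm that ipmitool can reach the BMC through /dev/ipmi0.
ipmitool mc info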
[ "IPMI_SI=no", "tuned-adm profile powersave", "tuned-adm profile performance" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.5_release_notes/arm-known-issues
6.2. Creating an ext3 File System
6.2. Creating an ext3 File System After installation, it is sometimes necessary to create a new ext3 file system. For example, if you add a new disk drive to the system, you may want to partition the drive and use the ext3 file system. The steps for creating an ext3 file system are as follows: Create the partition using parted or fdisk . Format the partition with the ext3 file system using mkfs . Label the partition using e2label . Create the mount point. Add the partition to the /etc/fstab file.
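A condensed shell sketch of these steps follows. It assumes the new drive appears as /dev/sdb, that a single partition /dev/sdb1 is created, that the label and mount point are /data, and that your version of parted accepts the unit syntax shown; adjust device names, sizes, and options for your system.
parted /dev/sdb mklabel msdos                     # create a new partition table (destroys existing data on the disk)
parted /dev/sdb mkpart primary ext3 1MiB 100%     # create one partition spanning the disk
mkfs.ext3 /dev/sdb1                               # format the partition with the ext3 file system
e2label /dev/sdb1 /data                           # label the partition
mkdir -p /data                                    # create the mount point
echo "LABEL=/data  /data  ext3  defaults  1 2" >> /etc/fstab   # add the partition to /etc/fstab
mount /data                                       # mount it using the new fstab entry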
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/the_ext3_file_system-creating_an_ext3_file_system
Chapter 3. Automatically scaling pods with the Custom Metrics Autoscaler Operator
Chapter 3. Automatically scaling pods with the Custom Metrics Autoscaler Operator 3.1. Release notes 3.1.1. Custom Metrics Autoscaler Operator release notes The release notes for the Custom Metrics Autoscaler Operator for Red Hat OpenShift describe new features and enhancements, deprecated features, and known issues. The Custom Metrics Autoscaler Operator uses the Kubernetes-based Event Driven Autoscaler (KEDA) and is built on top of the Red Hat OpenShift Service on AWS horizontal pod autoscaler (HPA). Note The Custom Metrics Autoscaler Operator for Red Hat OpenShift is provided as an installable component, with a distinct release cycle from the core Red Hat OpenShift Service on AWS. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility. 3.1.1.1. Supported versions The following table defines the Custom Metrics Autoscaler Operator versions for each Red Hat OpenShift Service on AWS version.
Version | Red Hat OpenShift Service on AWS version | General availability
2.14.1 | 4.16 | General availability
2.14.1 | 4.15 | General availability
2.14.1 | 4.14 | General availability
2.14.1 | 4.13 | General availability
2.14.1 | 4.12 | General availability
3.1.1.2. Custom Metrics Autoscaler Operator 2.14.1-467 release notes This release of the Custom Metrics Autoscaler Operator 2.14.1-467 provides a CVE and a bug fix for running the Operator in a Red Hat OpenShift Service on AWS cluster. The following advisory is available for the RHSA-2024:7348 . Important Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of Kubernetes-based Event Driven Autoscaler (KEDA). 3.1.1.2.1. Bug fixes Previously, the root file system of the Custom Metrics Autoscaler Operator pod was writable, which is unnecessary and could present security issues. This update makes the pod root file system read-only, which addresses the potential security issue. ( OCPBUGS-37989 ) 3.1.2. Release notes for past releases of the Custom Metrics Autoscaler Operator The following release notes are for previous versions of the Custom Metrics Autoscaler Operator. For the current version, see Custom Metrics Autoscaler Operator release notes . 3.1.2.1. Custom Metrics Autoscaler Operator 2.14.1-454 release notes This release of the Custom Metrics Autoscaler Operator 2.14.1-454 provides a CVE, a new feature, and bug fixes for running the Operator in a Red Hat OpenShift Service on AWS cluster. The following advisory is available for the RHBA-2024:5865 . Important Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of Kubernetes-based Event Driven Autoscaler (KEDA). 3.1.2.1.1. New features and enhancements 3.1.2.1.1.1. Support for the Cron trigger with the Custom Metrics Autoscaler Operator The Custom Metrics Autoscaler Operator can now use the Cron trigger to scale pods based on a time-based schedule. When your specified time frame starts, the Custom Metrics Autoscaler Operator scales pods to your desired amount. When the time frame ends, the Operator scales the pods back down to the previous level. For more information, see Understanding the Cron trigger . 3.1.2.1.2. Bug fixes Previously, if you made changes to audit configuration parameters in the KedaController custom resource, the keda-metrics-server-audit-policy config map would not get updated. 
As a consequence, you could not change the audit configuration parameters after the initial deployment of the Custom Metrics Autoscaler. With this fix, changes to the audit configuration now render properly in the config map, allowing you to change the audit configuration any time after installation. ( OCPBUGS-32521 ) 3.1.2.2. Custom Metrics Autoscaler Operator 2.13.1 release notes This release of the Custom Metrics Autoscaler Operator 2.13.1-421 provides a new feature and a bug fix for running the Operator in an Red Hat OpenShift Service on AWS cluster. The following advisory is available for the RHBA-2024:4837 . Important Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of Kubernetes-based Event Driven Autoscaler (KEDA). 3.1.2.2.1. New features and enhancements 3.1.2.2.1.1. Support for custom certificates with the Custom Metrics Autoscaler Operator The Custom Metrics Autoscaler Operator can now use custom service CA certificates to connect securely to TLS-enabled metrics sources, such as an external Kafka cluster or an external Prometheus service. By default, the Operator uses automatically-generated service certificates to connect to on-cluster services only. There is a new field in the KedaController object that allows you to load custom server CA certificates for connecting to external services by using config maps. For more information, see Custom CA certificates for the Custom Metrics Autoscaler . 3.1.2.2.2. Bug fixes Previously, the custom-metrics-autoscaler and custom-metrics-autoscaler-adapter images were missing time zone information. As a consequence, scaled objects with cron triggers failed to work because the controllers were unable to find time zone information. With this fix, the image builds are updated to include time zone information. As a result, scaled objects containing cron triggers now function properly. Scaled objects containing cron triggers are currently not supported for the custom metrics autoscaler. ( OCPBUGS-34018 ) 3.1.2.3. Custom Metrics Autoscaler Operator 2.12.1-394 release notes This release of the Custom Metrics Autoscaler Operator 2.12.1-394 provides a bug fix for running the Operator in an Red Hat OpenShift Service on AWS cluster. The following advisory is available for the RHSA-2024:2901 . Important Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of Kubernetes-based Event Driven Autoscaler (KEDA). 3.1.2.3.1. Bug fixes Previously, the protojson.Unmarshal function entered into an infinite loop when unmarshaling certain forms of invalid JSON. This condition could occur when unmarshaling into a message that contains a google.protobuf.Any value or when the UnmarshalOptions.DiscardUnknown option is set. This release fixes this issue. ( OCPBUGS-30305 ) Previously, when parsing a multipart form, either explicitly with the Request.ParseMultipartForm method or implicitly with the Request.FormValue , Request.PostFormValue , or Request.FormFile method, the limits on the total size of the parsed form were not applied to the memory consumed. This could cause memory exhaustion. With this fix, the parsing process now correctly limits the maximum size of form lines while reading a single form line. 
( OCPBUGS-30360 ) Previously, when following an HTTP redirect to a domain that is not on a matching subdomain or on an exact match of the initial domain, an HTTP client would not forward sensitive headers, such as Authorization or Cookie . For example, a redirect from example.com to www.example.com would forward the Authorization header, but a redirect to www.example.org would not forward the header. This release fixes this issue. ( OCPBUGS-30365 ) Previously, verifying a certificate chain that contains a certificate with an unknown public key algorithm caused the certificate verification process to panic. This condition affected all crypto and Transport Layer Security (TLS) clients and servers that set the Config.ClientAuth parameter to the VerifyClientCertIfGiven or RequireAndVerifyClientCert value. The default behavior is for TLS servers to not verify client certificates. This release fixes this issue. ( OCPBUGS-30370 ) Previously, if errors returned from the MarshalJSON method contained user-controlled data, an attacker could have used the data to break the contextual auto-escaping behavior of the HTML template package. This condition would allow for subsequent actions to inject unexpected content into the templates. This release fixes this issue. ( OCPBUGS-30397 ) Previously, the net/http and golang.org/x/net/http2 Go packages did not limit the number of CONTINUATION frames for an HTTP/2 request. This condition could result in excessive CPU consumption. This release fixes this issue. ( OCPBUGS-30894 ) 3.1.2.4. Custom Metrics Autoscaler Operator 2.12.1-384 release notes This release of the Custom Metrics Autoscaler Operator 2.12.1-384 provides a bug fix for running the Operator in an Red Hat OpenShift Service on AWS cluster. The following advisory is available for the RHBA-2024:2043 . Important Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of KEDA. 3.1.2.4.1. Bug fixes Previously, the custom-metrics-autoscaler and custom-metrics-autoscaler-adapter images were missing time zone information. As a consequence, scaled objects with cron triggers failed to work because the controllers were unable to find time zone information. With this fix, the image builds are updated to include time zone information. As a result, scaled objects containing cron triggers now function properly. ( OCPBUGS-32395 ) 3.1.2.5. Custom Metrics Autoscaler Operator 2.12.1-376 release notes This release of the Custom Metrics Autoscaler Operator 2.12.1-376 provides security updates and bug fixes for running the Operator in an Red Hat OpenShift Service on AWS cluster. The following advisory is available for the RHSA-2024:1812 . Important Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of KEDA. 3.1.2.5.1. Bug fixes Previously, if invalid values such as nonexistent namespaces were specified in scaled object metadata, the underlying scaler clients would not free, or close, their client descriptors, resulting in a slow memory leak. This fix properly closes the underlying client descriptors when there are errors, preventing memory from leaking. ( OCPBUGS-30145 ) Previously the ServiceMonitor custom resource (CR) for the keda-metrics-apiserver pod was not functioning, because the CR referenced an incorrect metrics port name of http . 
This fix corrects the ServiceMonitor CR to reference the proper port name of metrics . As a result, the Service Monitor functions properly. ( OCPBUGS-25806 ) 3.1.2.6. Custom Metrics Autoscaler Operator 2.11.2-322 release notes This release of the Custom Metrics Autoscaler Operator 2.11.2-322 provides security updates and bug fixes for running the Operator in a Red Hat OpenShift Service on AWS cluster. The following advisory is available for the RHSA-2023:6144 . Important Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of KEDA. 3.1.2.6.1. Bug fixes Because the Custom Metrics Autoscaler Operator version 2.11.2-311 was released without a required volume mount in the Operator deployment, the Custom Metrics Autoscaler Operator pod would restart every 15 minutes. This fix adds the required volume mount to the Operator deployment. As a result, the Operator no longer restarts every 15 minutes. ( OCPBUGS-22361 ) 3.1.2.7. Custom Metrics Autoscaler Operator 2.11.2-311 release notes This release of the Custom Metrics Autoscaler Operator 2.11.2-311 provides new features and bug fixes for running the Operator in a Red Hat OpenShift Service on AWS cluster. The components of the Custom Metrics Autoscaler Operator 2.11.2-311 were released in RHBA-2023:5981 . Important Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of KEDA. 3.1.2.7.1. New features and enhancements 3.1.2.7.1.1. Red Hat OpenShift Service on AWS (ROSA) and OpenShift Dedicated are now supported The Custom Metrics Autoscaler Operator 2.11.2-311 can be installed on OpenShift ROSA and OpenShift Dedicated managed clusters. Previous versions of the Custom Metrics Autoscaler Operator could be installed only in the openshift-keda namespace. This prevented the Operator from being installed on OpenShift ROSA and OpenShift Dedicated clusters. This version of Custom Metrics Autoscaler allows installation into other namespaces such as openshift-operators or keda , enabling installation into ROSA and Dedicated clusters. 3.1.2.7.2. Bug fixes Previously, if the Custom Metrics Autoscaler Operator was installed and configured, but not in use, the OpenShift CLI reported the couldn't get resource list for external.metrics.k8s.io/v1beta1: Got empty response for: external.metrics.k8s.io/v1beta1 error after any oc command was entered. The message, although harmless, could have caused confusion. With this fix, the Got empty response for: external.metrics... error no longer appears inappropriately. ( OCPBUGS-15779 ) Previously, any annotation or label change to objects managed by the Custom Metrics Autoscaler was reverted by the Custom Metrics Autoscaler Operator any time the Keda Controller was modified, for example after a configuration change. This caused continuous changing of labels in your objects. The Custom Metrics Autoscaler now uses its own annotation to manage labels and annotations, and annotations or labels are no longer inappropriately reverted. ( OCPBUGS-15590 ) 3.1.2.8. Custom Metrics Autoscaler Operator 2.10.1-267 release notes This release of the Custom Metrics Autoscaler Operator 2.10.1-267 provides new features and bug fixes for running the Operator in a Red Hat OpenShift Service on AWS cluster. The components of the Custom Metrics Autoscaler Operator 2.10.1-267 were released in RHBA-2023:4089 . 
Important Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of KEDA. 3.1.2.8.1. Bug fixes Previously, the custom-metrics-autoscaler and custom-metrics-autoscaler-adapter images did not contain time zone information. Because of this, scaled objects with cron triggers failed to work because the controllers were unable to find time zone information. With this fix, the image builds now include time zone information. As a result, scaled objects containing cron triggers now function properly. ( OCPBUGS-15264 ) Previously, the Custom Metrics Autoscaler Operator would attempt to take ownership of all managed objects, including objects in other namespaces and cluster-scoped objects. Because of this, the Custom Metrics Autoscaler Operator was unable to create the role binding for reading the credentials necessary to be an API server. This caused errors in the kube-system namespace. With this fix, the Custom Metrics Autoscaler Operator skips adding the ownerReference field to any object in another namespace or any cluster-scoped object. As a result, the role binding is now created without any errors. ( OCPBUGS-15038 ) Previously, the Custom Metrics Autoscaler Operator added an ownerReferences field to the openshift-keda namespace. While this did not cause functionality problems, the presence of this field could have caused confusion for cluster administrators. With this fix, the Custom Metrics Autoscaler Operator does not add the ownerReference field to the openshift-keda namespace. As a result, the openshift-keda namespace no longer has a superfluous ownerReference field. ( OCPBUGS-15293 ) Previously, if you used a Prometheus trigger configured with an authentication method other than pod identity, and the podIdentity parameter was set to none , the trigger would fail to scale. With this fix, the Custom Metrics Autoscaler for OpenShift now properly handles the none pod identity provider type. As a result, a Prometheus trigger configured with an authentication method other than pod identity, and with the podIdentity parameter set to none , now properly scales. ( OCPBUGS-15274 ) 3.1.2.9. Custom Metrics Autoscaler Operator 2.10.1 release notes This release of the Custom Metrics Autoscaler Operator 2.10.1 provides new features and bug fixes for running the Operator in a Red Hat OpenShift Service on AWS cluster. The components of the Custom Metrics Autoscaler Operator 2.10.1 were released in RHEA-2023:3199 . Important Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of KEDA. 3.1.2.9.1. New features and enhancements 3.1.2.9.1.1. Custom Metrics Autoscaler Operator general availability The Custom Metrics Autoscaler Operator is now generally available as of Custom Metrics Autoscaler Operator version 2.10.1. Important Scaling by using a scaled job is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 3.1.2.9.1.2. 
Performance metrics You can now use the Prometheus Query Language (PromQL) to query metrics on the Custom Metrics Autoscaler Operator. 3.1.2.9.1.3. Pausing the custom metrics autoscaling for scaled objects You can now pause the autoscaling of a scaled object, as needed, and resume autoscaling when ready. 3.1.2.9.1.4. Replica fall back for scaled objects You can now specify the number of replicas to fall back to if a scaled object fails to get metrics from the source. 3.1.2.9.1.5. Customizable HPA naming for scaled objects You can now specify a custom name for the horizontal pod autoscaler in scaled objects. 3.1.2.9.1.6. Activation and scaling thresholds Because the horizontal pod autoscaler (HPA) cannot scale to or from 0 replicas, the Custom Metrics Autoscaler Operator does that scaling, after which the HPA performs the scaling. You can now specify when the HPA takes over autoscaling, based on the number of replicas. This allows for more flexibility with your scaling policies. 3.1.2.10. Custom Metrics Autoscaler Operator 2.8.2-174 release notes This release of the Custom Metrics Autoscaler Operator 2.8.2-174 provides new features and bug fixes for running the Operator in a Red Hat OpenShift Service on AWS cluster. The components of the Custom Metrics Autoscaler Operator 2.8.2-174 were released in RHEA-2023:1683 . Important The Custom Metrics Autoscaler Operator version 2.8.2-174 is a Technology Preview feature. 3.1.2.10.1. New features and enhancements 3.1.2.10.1.1. Operator upgrade support You can now upgrade from a prior version of the Custom Metrics Autoscaler Operator. See "Changing the update channel for an Operator" in the "Additional resources" for information on upgrading an Operator. 3.1.2.10.1.2. must-gather support You can now collect data about the Custom Metrics Autoscaler Operator and its components by using the Red Hat OpenShift Service on AWS must-gather tool. Currently, the process for using the must-gather tool with the Custom Metrics Autoscaler is different than for other operators. See "Gathering debugging data" in the "Additional resources" for more information. 3.1.2.11. Custom Metrics Autoscaler Operator 2.8.2 release notes This release of the Custom Metrics Autoscaler Operator 2.8.2 provides new features and bug fixes for running the Operator in a Red Hat OpenShift Service on AWS cluster. The components of the Custom Metrics Autoscaler Operator 2.8.2 were released in RHSA-2023:1042 . Important The Custom Metrics Autoscaler Operator version 2.8.2 is a Technology Preview feature. 3.1.2.11.1. New features and enhancements 3.1.2.11.1.1. Audit Logging You can now gather and view audit logs for the Custom Metrics Autoscaler Operator and its associated components. Audit logs are security-relevant chronological sets of records that document the sequence of activities that have affected the system by individual users, administrators, or other components of the system. 3.1.2.11.1.2. Scale applications based on Apache Kafka metrics You can now use the KEDA Apache Kafka trigger/scaler to scale deployments based on an Apache Kafka topic. 3.1.2.11.1.3. Scale applications based on CPU metrics You can now use the KEDA CPU trigger/scaler to scale deployments based on CPU metrics. 3.1.2.11.1.4. Scale applications based on memory metrics You can now use the KEDA memory trigger/scaler to scale deployments based on memory metrics. 3.2. 
Custom Metrics Autoscaler Operator overview As a developer, you can use the Custom Metrics Autoscaler Operator for Red Hat OpenShift to specify how Red Hat OpenShift Service on AWS should automatically increase or decrease the number of pods for a deployment, stateful set, custom resource, or job based on custom metrics that are not based only on CPU or memory. The Custom Metrics Autoscaler Operator is an optional Operator, based on the Kubernetes Event Driven Autoscaler (KEDA), that allows workloads to be scaled using additional metrics sources other than pod metrics. The custom metrics autoscaler currently supports only the Prometheus, CPU, memory, and Apache Kafka metrics. The Custom Metrics Autoscaler Operator scales your pods up and down based on custom, external metrics from specific applications. Your other applications continue to use other scaling methods. You configure triggers , also known as scalers, which are the source of events and metrics that the custom metrics autoscaler uses to determine how to scale. The custom metrics autoscaler uses a metrics API to convert the external metrics to a form that Red Hat OpenShift Service on AWS can use. The custom metrics autoscaler creates a horizontal pod autoscaler (HPA) that performs the actual scaling. To use the custom metrics autoscaler, you create a ScaledObject or ScaledJob object for a workload, which is a custom resource (CR) that defines the scaling metadata. You specify the deployment or job to scale, the source of the metrics to scale on (trigger), and other parameters such as the minimum and maximum replica counts allowed. Note You can create only one scaled object or scaled job for each workload that you want to scale. Also, you cannot use a scaled object or scaled job and the horizontal pod autoscaler (HPA) on the same workload. The custom metrics autoscaler, unlike the HPA, can scale to zero. If you set the minReplicaCount value in the custom metrics autoscaler CR to 0 , the custom metrics autoscaler scales the workload down from 1 to 0 replicas or up from 0 replicas to 1. This is known as the activation phase . After scaling up to 1 replica, the HPA takes control of the scaling. This is known as the scaling phase . Some triggers allow you to change the number of replicas that are scaled by the custom metrics autoscaler. In all cases, the parameter to configure the activation phase always uses the same phrase, prefixed with activation . For example, if the threshold parameter configures scaling, activationThreshold would configure activation. Configuring the activation and scaling phases allows you more flexibility with your scaling policies. For example, you can configure a higher activation phase to prevent scaling up or down if the metric is particularly low. The activation value takes priority over the scaling value if the two lead to different decisions. For example, if the threshold is set to 10 and the activationThreshold is 50 , and the metric reports 40 , the scaler is not active and the pods are scaled to zero even if the HPA requires 4 instances. Figure 3.1. Custom metrics autoscaler workflow You create or modify a scaled object custom resource for a workload on a cluster. The object contains the scaling configuration for that workload. Prior to accepting the new object, the OpenShift API server sends it to the custom metrics autoscaler admission webhooks process to ensure that the object is valid. If validation succeeds, the API server persists the object. 
The custom metrics autoscaler controller watches for new or modified scaled objects. When the OpenShift API server notifies the controller of a change, the controller monitors any external trigger sources, also known as data sources, that are specified in the object for changes to the metrics data. One or more scalers request scaling data from the external trigger source. For example, for a Kafka trigger type, the controller uses the Kafka scaler to communicate with a Kafka instance to obtain the data requested by the trigger. The controller creates a horizontal pod autoscaler object for the scaled object. As a result, the Horizontal Pod Autoscaler (HPA) Operator starts monitoring the scaling data associated with the trigger. The HPA requests scaling data from the cluster OpenShift API server endpoint. The OpenShift API server endpoint is served by the custom metrics autoscaler metrics adapter. When the metrics adapter receives a request for custom metrics, it uses a gRPC connection to the controller to request the most recent trigger data received from the scaler. The HPA makes scaling decisions based upon the data received from the metrics adapter and scales the workload up or down by increasing or decreasing the replicas. As it operates, a workload can affect the scaling metrics. For example, if a workload is scaled up to handle work in a Kafka queue, the queue size decreases after the workload processes all the work. As a result, the workload is scaled down. If the metrics are in a range specified by the minReplicaCount value, the custom metrics autoscaler controller disables all scaling, and leaves the replica count at a fixed level. If the metrics exceed that range, the custom metrics autoscaler controller enables scaling and allows the HPA to scale the workload. While scaling is disabled, the HPA does not take any action. 3.2.1. Custom CA certificates for the Custom Metrics Autoscaler By default, the Custom Metrics Autoscaler Operator uses automatically-generated service CA certificates to connect to on-cluster services. If you want to use off-cluster services that require custom CA certificates, you can add the required certificates to a config map. Then, add the config map to the KedaController custom resource as described in Installing the custom metrics autoscaler . The Operator loads those certificates on start-up and registers them as trusted. The config maps can contain one or more certificate files that contain one or more PEM-encoded CA certificates. Or, you can use separate config maps for each certificate file. Note If you later update the config map to add additional certificates, you must restart the keda-operator-* pod for the changes to take effect. 3.3. Installing the custom metrics autoscaler You can use the Red Hat OpenShift Service on AWS web console to install the Custom Metrics Autoscaler Operator. The installation creates the following five CRDs: ClusterTriggerAuthentication KedaController ScaledJob ScaledObject TriggerAuthentication 3.3.1. Installing the custom metrics autoscaler You can use the following procedure to install the Custom Metrics Autoscaler Operator. Prerequisites You have access to the cluster as a user with the cluster-admin role. Remove any previously-installed Technology Preview versions of the Custom Metrics Autoscaler Operator. Remove any versions of the community-based KEDA. 
Also, remove the KEDA 1.x custom resource definitions by running the following commands: USD oc delete crd scaledobjects.keda.k8s.io USD oc delete crd triggerauthentications.keda.k8s.io Ensure that the keda namespace exists. If not, you must manually create the keda namespace. Optional: If you need the Custom Metrics Autoscaler Operator to connect to off-cluster services, such as an external Kafka cluster or an external Prometheus service, put any required service CA certificates into a config map. The config map must exist in the same namespace where the Operator is installed. For example: USD oc create configmap -n openshift-keda thanos-cert --from-file=ca-cert.pem Procedure In the Red Hat OpenShift Service on AWS web console, click Operators OperatorHub . Choose Custom Metrics Autoscaler from the list of available Operators, and click Install . On the Install Operator page, ensure that the A specific namespace on the cluster option is selected for Installation Mode . For Installed Namespace , click Select a namespace . Click Select Project : If the keda namespace exists, select keda from the list. If the keda namespace does not exist: Select Create Project to open the Create Project window. In the Name field, enter keda . In the Display Name field, enter a descriptive name, such as keda . Optional: In the Description field, add a description for the namespace. Click Create . Click Install . Verify the installation by listing the Custom Metrics Autoscaler Operator components: Navigate to Workloads Pods . Select the keda project from the drop-down menu and verify that the custom-metrics-autoscaler-operator-* pod is running. Navigate to Workloads Deployments to verify that the custom-metrics-autoscaler-operator deployment is running. Optional: Verify the installation in the OpenShift CLI using the following command: USD oc get all -n keda The output appears similar to the following: Example output NAME READY STATUS RESTARTS AGE pod/custom-metrics-autoscaler-operator-5fd8d9ffd8-xt4xp 1/1 Running 0 18m NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/custom-metrics-autoscaler-operator 1/1 1 1 18m NAME DESIRED CURRENT READY AGE replicaset.apps/custom-metrics-autoscaler-operator-5fd8d9ffd8 1 1 1 18m Install the KedaController custom resource, which creates the required CRDs: In the Red Hat OpenShift Service on AWS web console, click Operators Installed Operators . Click Custom Metrics Autoscaler . On the Operator Details page, click the KedaController tab. On the KedaController tab, click Create KedaController and edit the file. kind: KedaController apiVersion: keda.sh/v1alpha1 metadata: name: keda namespace: keda spec: watchNamespace: '' 1 operator: logLevel: info 2 logEncoder: console 3 caConfigMaps: 4 - thanos-cert - kafka-cert metricsServer: logLevel: '0' 5 auditConfig: 6 logFormat: "json" logOutputVolumeClaim: "persistentVolumeClaimName" policy: rules: - level: Metadata omitStages: ["RequestReceived"] omitManagedFields: false lifetime: maxAge: "2" maxBackup: "1" maxSize: "50" serviceAccount: {} 1 Specifies a single namespace in which the Custom Metrics Autoscaler Operator should scale applications. Leave it empty to scale applications in all namespaces. The field should contain a single namespace or be empty. The default value is empty. 2 Specifies the level of verbosity for the Custom Metrics Autoscaler Operator log messages. The allowed values are debug , info , error . The default is info . 
3 Specifies the logging format for the Custom Metrics Autoscaler Operator log messages. The allowed values are console or json . The default is console . 4 Optional: Specifies one or more config maps with CA certificates, which the Custom Metrics Autoscaler Operator can use to connect securely to TLS-enabled metrics sources. 5 Specifies the logging level for the Custom Metrics Autoscaler Metrics Server. The allowed values are 0 for info and 4 for debug . The default is 0 . 6 Activates audit logging for the Custom Metrics Autoscaler Operator and specifies the audit policy to use, as described in the "Configuring audit logging" section. Click Create to create the KEDA controller. 3.4. Understanding custom metrics autoscaler triggers Triggers, also known as scalers, provide the metrics that the Custom Metrics Autoscaler Operator uses to scale your pods. The custom metrics autoscaler currently supports the Prometheus, CPU, memory, Apache Kafka, and cron triggers. You use a ScaledObject or ScaledJob custom resource to configure triggers for specific objects, as described in the sections that follow. You can configure a certificate authority to use with your scaled objects or for all scalers in the cluster . 3.4.1. Understanding the Prometheus trigger You can scale pods based on Prometheus metrics, which can use the installed Red Hat OpenShift Service on AWS monitoring or an external Prometheus server as the metrics source. See "Configuring the custom metrics autoscaler to use Red Hat OpenShift Service on AWS monitoring" for information on the configurations required to use the Red Hat OpenShift Service on AWS monitoring as a source for metrics. Note If Prometheus is collecting metrics from the application that the custom metrics autoscaler is scaling, do not set the minimum replicas to 0 in the custom resource. If there are no application pods, the custom metrics autoscaler does not have any metrics to scale on. Example scaled object with a Prometheus target apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: prom-scaledobject namespace: my-namespace spec: # ... triggers: - type: prometheus 1 metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 2 namespace: kedatest 3 metricName: http_requests_total 4 threshold: '5' 5 query: sum(rate(http_requests_total{job="test-app"}[1m])) 6 authModes: basic 7 cortexOrgID: my-org 8 ignoreNullValues: "false" 9 unsafeSsl: "false" 10 1 Specifies Prometheus as the trigger type. 2 Specifies the address of the Prometheus server. This example uses Red Hat OpenShift Service on AWS monitoring. 3 Optional: Specifies the namespace of the object you want to scale. This parameter is mandatory if using Red Hat OpenShift Service on AWS monitoring as a source for the metrics. 4 Specifies the name to identify the metric in the external.metrics.k8s.io API. If you are using more than one trigger, all metric names must be unique. 5 Specifies the value that triggers scaling. Must be specified as a quoted string value. 6 Specifies the Prometheus query to use. 7 Specifies the authentication method to use. Prometheus scalers support bearer authentication ( bearer ), basic authentication ( basic ), or TLS authentication ( tls ). You configure the specific authentication parameters in a trigger authentication, as discussed in a following section. As needed, you can also use a secret. 8 Optional: Passes the X-Scope-OrgID header to multi-tenant Cortex or Mimir storage for Prometheus. 
This parameter is required only with multi-tenant Prometheus storage, to indicate which data Prometheus should return. 9 Optional: Specifies how the trigger should proceed if the Prometheus target is lost. If true , the trigger continues to operate if the Prometheus target is lost. This is the default behavior. If false , the trigger returns an error if the Prometheus target is lost. 10 Optional: Specifies whether the certificate check should be skipped. For example, you might skip the check if you are running in a test environment and using self-signed certificates at the Prometheus endpoint. If false , the certificate check is performed. This is the default behavior. If true , the certificate check is not performed. Important Skipping the check is not recommended. 3.4.1.1. Configuring the custom metrics autoscaler to use Red Hat OpenShift Service on AWS monitoring You can use the installed Red Hat OpenShift Service on AWS Prometheus monitoring as a source for the metrics used by the custom metrics autoscaler. However, there are some additional configurations you must perform. For your scaled objects to be able to read the Red Hat OpenShift Service on AWS Prometheus metrics, you must use a trigger authentication or a cluster trigger authentication in order to provide the authentication information required. The following procedure differs depending on which trigger authentication method you use. For more information on trigger authentications, see "Understanding custom metrics autoscaler trigger authentications". Note These steps are not required for an external Prometheus source. You must perform the following tasks, as described in this section: Create a service account. Create a secret that generates a token for the service account. Create the trigger authentication. Create a role. Add that role to the service account. Reference the token in the trigger authentication object used by Prometheus. Prerequisites Red Hat OpenShift Service on AWS monitoring must be installed. Monitoring of user-defined workloads must be enabled in Red Hat OpenShift Service on AWS monitoring, as described in the Creating a user-defined workload monitoring config map section. The Custom Metrics Autoscaler Operator must be installed. Procedure Change to the appropriate project: USD oc project <project_name> 1 1 Specifies one of the following projects: If you are using a trigger authentication, specify the project with the object you want to scale. If you are using a cluster trigger authentication, specify the openshift-keda project. Create a service account and token, if your cluster does not have one: Create a service account object by using the following command: USD oc create serviceaccount thanos 1 1 Specifies the name of the service account. Create a secret YAML to generate a service account token: apiVersion: v1 kind: Secret metadata: name: thanos-token annotations: kubernetes.io/service-account.name: thanos 1 type: kubernetes.io/service-account-token 1 Specifies the name of the service account. Create the secret object by using the following command: USD oc create -f <file_name>.yaml Use the following command to locate the token assigned to the service account: USD oc describe serviceaccount thanos 1 1 Specifies the name of the service account. Example output Name: thanos Namespace: <namespace_name> Labels: <none> Annotations: <none> Image pull secrets: thanos-dockercfg-nnwgj Mountable secrets: thanos-dockercfg-nnwgj Tokens: thanos-token 1 Events: <none> 1 Use this token in the trigger authentication. 
Create a trigger authentication with the service account token: Create a YAML file similar to the following: apiVersion: keda.sh/v1alpha1 kind: <authentication_method> 1 metadata: name: keda-trigger-auth-prometheus spec: secretTargetRef: 2 - parameter: bearerToken 3 name: thanos-token 4 key: token 5 - parameter: ca name: thanos-token key: ca.crt 1 Specifies one of the following trigger authentication methods: If you are using a trigger authentication, specify TriggerAuthentication . This example configures a trigger authentication. If you are using a cluster trigger authentication, specify ClusterTriggerAuthentication . 2 Specifies that this object uses a secret for authorization. 3 Specifies the authentication parameter to supply by using the token. 4 Specifies the name of the token to use. 5 Specifies the key in the token to use with the specified parameter. Create the CR object: USD oc create -f <file-name>.yaml Create a role for reading Thanos metrics: Create a YAML file with the following parameters: apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: thanos-metrics-reader rules: - apiGroups: - "" resources: - pods verbs: - get - apiGroups: - metrics.k8s.io resources: - pods - nodes verbs: - get - list - watch Create the CR object: USD oc create -f <file-name>.yaml Create a role binding for reading Thanos metrics: Create a YAML file similar to the following: apiVersion: rbac.authorization.k8s.io/v1 kind: <binding_type> 1 metadata: name: thanos-metrics-reader 2 namespace: my-project 3 roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: thanos-metrics-reader subjects: - kind: ServiceAccount name: thanos 4 namespace: <namespace_name> 5 1 Specifies one of the following object types: If you are using a trigger authentication, specify RoleBinding . If you are using a cluster trigger authentication, specify ClusterRoleBinding . 2 Specifies the name of the role you created. 3 Specifies one of the following projects: If you are using a trigger authentication, specify the project with the object you want to scale. If you are using a cluster trigger authentication, specify the openshift-keda project. 4 Specifies the name of the service account to bind to the role. 5 Specifies the project where you previously created the service account. Create the CR object: USD oc create -f <file-name>.yaml You can now deploy a scaled object or scaled job to enable autoscaling for your application, as described in "Understanding how to add custom metrics autoscalers". To use Red Hat OpenShift Service on AWS monitoring as the source, in the trigger, or scaler, you must include the following parameters: triggers.type must be prometheus triggers.metadata.serverAddress must be https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 triggers.metadata.authModes must be bearer triggers.metadata.namespace must be set to the namespace of the object to scale triggers.authenticationRef must point to the trigger authentication resource specified in the step Additional resources Understanding custom metrics autoscaler trigger authentications 3.4.2. Understanding the CPU trigger You can scale pods based on CPU metrics. This trigger uses cluster metrics as the source for metrics. The custom metrics autoscaler scales the pods associated with an object to maintain the CPU usage that you specify. The autoscaler increases or decreases the number of replicas between the minimum and maximum numbers to maintain the specified CPU utilization across all pods. 
The CPU trigger considers the CPU utilization of the entire pod. If the pod has multiple containers, the CPU trigger considers the total CPU utilization of all containers in the pod. Note This trigger cannot be used with the ScaledJob custom resource. When using a CPU trigger to scale an object, the object does not scale to 0 , even if you are using multiple triggers. Example scaled object with a CPU target apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: cpu-scaledobject namespace: my-namespace spec: # ... triggers: - type: cpu 1 metricType: Utilization 2 metadata: value: '60' 3 minReplicaCount: 1 4 1 Specifies CPU as the trigger type. 2 Specifies the type of metric to use, either Utilization or AverageValue . 3 Specifies the value that triggers scaling. Must be specified as a quoted string value. When using Utilization , the target value is the average of the resource metrics across all relevant pods, represented as a percentage of the requested value of the resource for the pods. When using AverageValue , the target value is the average of the metrics across all relevant pods. 4 Specifies the minimum number of replicas when scaling down. For a CPU trigger, enter a value of 1 or greater, because the HPA cannot scale to zero if you are using only CPU metrics. 3.4.3. Understanding the memory trigger You can scale pods based on memory metrics. This trigger uses cluster metrics as the source for metrics. The custom metrics autoscaler scales the pods associated with an object to maintain the average memory usage that you specify. The autoscaler increases and decreases the number of replicas between the minimum and maximum numbers to maintain the specified memory utilization across all pods. The memory trigger considers the memory utilization of the entire pod. If the pod has multiple containers, the memory utilization is the sum of all of the containers. Note This trigger cannot be used with the ScaledJob custom resource. When using a memory trigger to scale an object, the object does not scale to 0 , even if you are using multiple triggers. Example scaled object with a memory target apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: memory-scaledobject namespace: my-namespace spec: # ... triggers: - type: memory 1 metricType: Utilization 2 metadata: value: '60' 3 containerName: api 4 1 Specifies memory as the trigger type. 2 Specifies the type of metric to use, either Utilization or AverageValue . 3 Specifies the value that triggers scaling. Must be specified as a quoted string value. When using Utilization , the target value is the average of the resource metrics across all relevant pods, represented as a percentage of the requested value of the resource for the pods. When using AverageValue , the target value is the average of the metrics across all relevant pods. 4 Optional: Specifies an individual container to scale, based on the memory utilization of only that container, rather than the entire pod. In this example, only the container named api is to be scaled. 3.4.4. Understanding the Kafka trigger You can scale pods based on an Apache Kafka topic or other services that support the Kafka protocol. The custom metrics autoscaler does not scale higher than the number of Kafka partitions, unless you set the allowIdleConsumers parameter to true in the scaled object or scaled job. Note If the number of consumer groups exceeds the number of partitions in a topic, the extra consumer groups remain idle. 
To avoid this, by default the number of replicas does not exceed: The number of partitions on a topic, if a topic is specified The number of partitions of all topics in the consumer group, if no topic is specified The maxReplicaCount specified in the scaled object or scaled job CR You can use the allowIdleConsumers parameter to disable these default behaviors. Example scaled object with a Kafka target apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: kafka-scaledobject namespace: my-namespace spec: # ... triggers: - type: kafka 1 metadata: topic: my-topic 2 bootstrapServers: my-cluster-kafka-bootstrap.openshift-operators.svc:9092 3 consumerGroup: my-group 4 lagThreshold: '10' 5 activationLagThreshold: '5' 6 offsetResetPolicy: latest 7 allowIdleConsumers: true 8 scaleToZeroOnInvalidOffset: false 9 excludePersistentLag: false 10 version: '1.0.0' 11 partitionLimitation: '1,2,10-20,31' 12 tls: enable 13 1 Specifies Kafka as the trigger type. 2 Specifies the name of the Kafka topic on which Kafka is processing the offset lag. 3 Specifies a comma-separated list of Kafka brokers to connect to. 4 Specifies the name of the Kafka consumer group used for checking the offset on the topic and processing the related lag. 5 Optional: Specifies the average target value that triggers scaling. Must be specified as a quoted string value. The default is 5 . 6 Optional: Specifies the target value for the activation phase. Must be specified as a quoted string value. 7 Optional: Specifies the Kafka offset reset policy for the Kafka consumer. The available values are: latest and earliest . The default is latest . 8 Optional: Specifies whether the number of Kafka replicas can exceed the number of partitions on a topic. If true , the number of Kafka replicas can exceed the number of partitions on a topic. This allows for idle Kafka consumers. If false , the number of Kafka replicas cannot exceed the number of partitions on a topic. This is the default. 9 Specifies how the trigger behaves when a Kafka partition does not have a valid offset. If true , the consumers are scaled to zero for that partition. If false , the scaler keeps a single consumer for that partition. This is the default. 10 Optional: Specifies whether the trigger includes or excludes partition lag for partitions whose current offset is the same as the current offset of the polling cycle. If true , the scaler excludes partition lag in these partitions. If false , the trigger includes all consumer lag in all partitions. This is the default. 11 Optional: Specifies the version of your Kafka brokers. Must be specified as a quoted string value. The default is 1.0.0 . 12 Optional: Specifies a comma-separated list of partition IDs to scope the scaling on. If set, only the listed IDs are considered when calculating lag. Must be specified as a quoted string value. The default is to consider all partitions. 13 Optional: Specifies whether to use TLS client authentication for Kafka. The default is disable . For information on configuring TLS, see "Understanding custom metrics autoscaler trigger authentications". 
The following example scales the pods associated with this scaled object from 0 to 100 from 6:00 AM to 6:30 PM India Standard Time. Example scaled object with a Cron trigger apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: cron-scaledobject namespace: default spec: scaleTargetRef: name: my-deployment minReplicaCount: 0 1 maxReplicaCount: 100 2 cooldownPeriod: 300 triggers: - type: cron 3 metadata: timezone: Asia/Kolkata 4 start: "0 6 * * *" 5 end: "30 18 * * *" 6 desiredReplicas: "100" 7 1 Specifies the minimum number of pods to scale down to at the end of the time frame. 2 Specifies the maximum number of replicas when scaling up. This value should be the same as desiredReplicas . The default is 100 . 3 Specifies a Cron trigger. 4 Specifies the timezone for the time frame. This value must be from the IANA Time Zone Database . 5 Specifies the start of the time frame. 6 Specifies the end of the time frame. 7 Specifies the number of pods to scale to between the start and end of the time frame. This value should be the same as maxReplicaCount . 3.5. Understanding custom metrics autoscaler trigger authentications A trigger authentication allows you to include authentication information in a scaled object or a scaled job that can be used by the associated containers. You can use trigger authentications to pass Red Hat OpenShift Service on AWS secrets, platform-native pod authentication mechanisms, environment variables, and so on. You define a TriggerAuthentication object in the same namespace as the object that you want to scale. That trigger authentication can be used only by objects in that namespace. Alternatively, to share credentials between objects in multiple namespaces, you can create a ClusterTriggerAuthentication object that can be used across all namespaces. Trigger authentications and cluster trigger authentication use the same configuration. However, a cluster trigger authentication requires an additional kind parameter in the authentication reference of the scaled object. Example secret for Basic authentication apiVersion: v1 kind: Secret metadata: name: my-basic-secret namespace: default data: username: "dXNlcm5hbWU=" 1 password: "cGFzc3dvcmQ=" 1 User name and password to supply to the trigger authentication. The values in a data stanza must be base-64 encoded. Example trigger authentication using a secret for Basic authentication kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: secret-triggerauthentication namespace: my-namespace 1 spec: secretTargetRef: 2 - parameter: username 3 name: my-basic-secret 4 key: username 5 - parameter: password name: my-basic-secret key: password 1 Specifies the namespace of the object you want to scale. 2 Specifies that this trigger authentication uses a secret for authorization when connecting to the metrics endpoint. 3 Specifies the authentication parameter to supply by using the secret. 4 Specifies the name of the secret to use. 5 Specifies the key in the secret to use with the specified parameter. Example cluster trigger authentication with a secret for Basic authentication kind: ClusterTriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: 1 name: secret-cluster-triggerauthentication spec: secretTargetRef: 2 - parameter: username 3 name: my-basic-secret 4 key: username 5 - parameter: password name: my-basic-secret key: password 1 Note that no namespace is used with a cluster trigger authentication. 
2 Specifies that this trigger authentication uses a secret for authorization when connecting to the metrics endpoint. 3 Specifies the authentication parameter to supply by using the secret. 4 Specifies the name of the secret to use. 5 Specifies the key in the secret to use with the specified parameter. Example secret with certificate authority (CA) details apiVersion: v1 kind: Secret metadata: name: my-secret namespace: my-namespace data: ca-cert.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0... 1 client-cert.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0... 2 client-key.pem: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0t... 1 Specifies the TLS CA Certificate for authentication of the metrics endpoint. The value must be base-64 encoded. 2 Specifies the TLS certificates and key for TLS client authentication. The values must be base-64 encoded. Example trigger authentication using a secret for CA details kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: secret-triggerauthentication namespace: my-namespace 1 spec: secretTargetRef: 2 - parameter: key 3 name: my-secret 4 key: client-key.pem 5 - parameter: ca 6 name: my-secret 7 key: ca-cert.pem 8 1 Specifies the namespace of the object you want to scale. 2 Specifies that this trigger authentication uses a secret for authorization when connecting to the metrics endpoint. 3 Specifies the type of authentication to use. 4 Specifies the name of the secret to use. 5 Specifies the key in the secret to use with the specified parameter. 6 Specifies the authentication parameter for a custom CA when connecting to the metrics endpoint. 7 Specifies the name of the secret to use. 8 Specifies the key in the secret to use with the specified parameter. Example secret with a bearer token apiVersion: v1 kind: Secret metadata: name: my-secret namespace: my-namespace data: bearerToken: "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXV" 1 1 Specifies a bearer token to use with bearer authentication. The value in a data stanza must be base-64 encoded. Example trigger authentication with a bearer token kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: token-triggerauthentication namespace: my-namespace 1 spec: secretTargetRef: 2 - parameter: bearerToken 3 name: my-secret 4 key: bearerToken 5 1 Specifies the namespace of the object you want to scale. 2 Specifies that this trigger authentication uses a secret for authorization when connecting to the metrics endpoint. 3 Specifies the type of authentication to use. 4 Specifies the name of the secret to use. 5 Specifies the key in the token to use with the specified parameter. Example trigger authentication with an environment variable kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: env-var-triggerauthentication namespace: my-namespace 1 spec: env: 2 - parameter: access_key 3 name: ACCESS_KEY 4 containerName: my-container 5 1 Specifies the namespace of the object you want to scale. 2 Specifies that this trigger authentication uses environment variables for authorization when connecting to the metrics endpoint. 3 Specify the parameter to set with this variable. 4 Specify the name of the environment variable. 5 Optional: Specify a container that requires authentication. The container must be in the same resource as referenced by scaleTargetRef in the scaled object. 
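For the environment variable method shown above, the named variable must actually exist on the specified container in the workload that the scaled object targets. The following Deployment snippet is an illustrative sketch only; the Deployment name, image, and Secret are hypothetical and are not taken from this documentation:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
  namespace: my-namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-container                          # matches containerName in the trigger authentication
        image: registry.example.com/my-app:latest   # hypothetical image
        env:
        - name: ACCESS_KEY                          # matches the name field in the trigger authentication
          valueFrom:
            secretKeyRef:
              name: my-credentials                  # hypothetical Secret that holds the credential
              key: access-key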
Example trigger authentication with pod authentication providers kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: pod-id-triggerauthentication namespace: my-namespace 1 spec: podIdentity: 2 provider: aws-eks 3 1 Specifies the namespace of the object you want to scale. 2 Specifies that this trigger authentication uses a platform-native pod authentication when connecting to the metrics endpoint. 3 Specifies a pod identity. Supported values are none , azure , gcp , aws-eks , or aws-kiam . The default is none . Additional resources For information about Red Hat OpenShift Service on AWS secrets, see Providing sensitive data to pods . 3.5.1. Using trigger authentications You use trigger authentications and cluster trigger authentications by using a custom resource to create the authentication, then add a reference to a scaled object or scaled job. Prerequisites The Custom Metrics Autoscaler Operator must be installed. If you are using a secret, the Secret object must exist, for example: Example secret apiVersion: v1 kind: Secret metadata: name: my-secret data: user-name: <base64_USER_NAME> password: <base64_USER_PASSWORD> Procedure Create the TriggerAuthentication or ClusterTriggerAuthentication object. Create a YAML file that defines the object: Example trigger authentication with a secret kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: prom-triggerauthentication namespace: my-namespace spec: secretTargetRef: - parameter: user-name name: my-secret key: USER_NAME - parameter: password name: my-secret key: USER_PASSWORD Create the TriggerAuthentication object: USD oc create -f <filename>.yaml Create or edit a ScaledObject YAML file that uses the trigger authentication: Create a YAML file that defines the object by running the following command: Example scaled object with a trigger authentication apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: scaledobject namespace: my-namespace spec: scaleTargetRef: name: example-deployment maxReplicaCount: 100 minReplicaCount: 0 pollingInterval: 30 triggers: - type: prometheus metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 namespace: kedatest # replace <NAMESPACE> metricName: http_requests_total threshold: '5' query: sum(rate(http_requests_total{job="test-app"}[1m])) authModes: "basic" authenticationRef: name: prom-triggerauthentication 1 kind: TriggerAuthentication 2 1 Specify the name of your trigger authentication object. 2 Specify TriggerAuthentication . TriggerAuthentication is the default. Example scaled object with a cluster trigger authentication apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: scaledobject namespace: my-namespace spec: scaleTargetRef: name: example-deployment maxReplicaCount: 100 minReplicaCount: 0 pollingInterval: 30 triggers: - type: prometheus metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 namespace: kedatest # replace <NAMESPACE> metricName: http_requests_total threshold: '5' query: sum(rate(http_requests_total{job="test-app"}[1m])) authModes: "basic" authenticationRef: name: prom-cluster-triggerauthentication 1 kind: ClusterTriggerAuthentication 2 1 Specify the name of your trigger authentication object. 2 Specify ClusterTriggerAuthentication . Create the scaled object by running the following command: USD oc apply -f <filename> 3.6. Pausing the custom metrics autoscaler for a scaled object You can pause and restart the autoscaling of a workload, as needed. 
For example, you might want to pause autoscaling before performing cluster maintenance or to avoid resource starvation by removing non-mission-critical workloads. 3.6.1. Pausing a custom metrics autoscaler You can pause the autoscaling of a scaled object by adding the autoscaling.keda.sh/paused-replicas annotation to the custom metrics autoscaler for that scaled object. The custom metrics autoscaler scales the replicas for that workload to the specified value and pauses autoscaling until the annotation is removed. apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: "4" # ... Procedure Use the following command to edit the ScaledObject CR for your workload: USD oc edit ScaledObject scaledobject Add the autoscaling.keda.sh/paused-replicas annotation with any value: apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: "4" 1 creationTimestamp: "2023-02-08T14:41:01Z" generation: 1 name: scaledobject namespace: my-project resourceVersion: '65729' uid: f5aec682-acdf-4232-a783-58b5b82f5dd0 1 Specifies that the Custom Metrics Autoscaler Operator is to scale the replicas to the specified value and stop autoscaling. 3.6.2. Restarting the custom metrics autoscaler for a scaled object You can restart a paused custom metrics autoscaler by removing the autoscaling.keda.sh/paused-replicas annotation for that ScaledObject . apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: "4" # ... Procedure Use the following command to edit the ScaledObject CR for your workload: USD oc edit ScaledObject scaledobject Remove the autoscaling.keda.sh/paused-replicas annotation. apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: "4" 1 creationTimestamp: "2023-02-08T14:41:01Z" generation: 1 name: scaledobject namespace: my-project resourceVersion: '65729' uid: f5aec682-acdf-4232-a783-58b5b82f5dd0 1 Remove this annotation to restart a paused custom metrics autoscaler. 3.7. Gathering audit logs You can gather audit logs, which are a security-relevant chronological set of records documenting the sequence of activities that have affected the system by individual users, administrators, or other components of the system. For example, audit logs can help you understand where an autoscaling request is coming from. This is key information when backends are getting overloaded by autoscaling requests made by user applications and you need to determine which is the troublesome application. 3.7.1. Configuring audit logging You can configure auditing for the Custom Metrics Autoscaler Operator by editing the KedaController custom resource. The logs are sent to an audit log file on a volume that is secured by using a persistent volume claim in the KedaController CR. Prerequisites The Custom Metrics Autoscaler Operator must be installed. Procedure Edit the KedaController custom resource to add the auditConfig stanza: kind: KedaController apiVersion: keda.sh/v1alpha1 metadata: name: keda namespace: keda spec: # ... metricsServer: # ... auditConfig: logFormat: "json" 1 logOutputVolumeClaim: "pvc-audit-log" 2 policy: rules: 3 - level: Metadata omitStages: "RequestReceived" 4 omitManagedFields: false 5 lifetime: 6 maxAge: "2" maxBackup: "1" maxSize: "50" 1 Specifies the output format of the audit log, either legacy or json . 2 Specifies an existing persistent volume claim for storing the log data. 
All requests coming to the API server are logged to this persistent volume claim. If you leave this field empty, the log data is sent to stdout. 3 Specifies which events should be recorded and what data they should include: None : Do not log events. Metadata : Log only the metadata for the request, such as user, timestamp, and so forth. Do not log the request text and the response text. This is the default. Request : Log only the metadata and the request text but not the response text. This option does not apply for non-resource requests. RequestResponse : Log event metadata, request text, and response text. This option does not apply for non-resource requests. 4 Specifies stages for which no event is created. 5 Specifies whether to omit the managed fields of the request and response bodies from being written to the API audit log, either true to omit the fields or false to include the fields. 6 Specifies the size and lifespan of the audit logs. maxAge : The maximum number of days to retain audit log files, based on the timestamp encoded in their filename. maxBackup : The maximum number of audit log files to retain. Set to 0 to retain all audit log files. maxSize : The maximum size in megabytes of an audit log file before it gets rotated. Verification View the audit log file directly: Obtain the name of the keda-metrics-apiserver-* pod: oc get pod -n keda Example output NAME READY STATUS RESTARTS AGE custom-metrics-autoscaler-operator-5cb44cd75d-9v4lv 1/1 Running 0 8m20s keda-metrics-apiserver-65c7cc44fd-rrl4r 1/1 Running 0 2m55s keda-operator-776cbb6768-zpj5b 1/1 Running 0 2m55s View the log data by using a command similar to the following: USD oc logs keda-metrics-apiserver-<hash>|grep -i metadata 1 1 Optional: You can use the grep command to specify the log level to display: Metadata , Request , RequestResponse . For example: USD oc logs keda-metrics-apiserver-65c7cc44fd-rrl4r|grep -i metadata Example output ... {"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"Metadata","auditID":"4c81d41b-3dab-4675-90ce-20b87ce24013","stage":"ResponseComplete","requestURI":"/healthz","verb":"get","user":{"username":"system:anonymous","groups":["system:unauthenticated"]},"sourceIPs":["10.131.0.1"],"userAgent":"kube-probe/1.28","responseStatus":{"metadata":{},"code":200},"requestReceivedTimestamp":"2023-02-16T13:00:03.554567Z","stageTimestamp":"2023-02-16T13:00:03.555032Z","annotations":{"authorization.k8s.io/decision":"allow","authorization.k8s.io/reason":""}} ... Alternatively, you can view a specific log: Use a command similar to the following to log into the keda-metrics-apiserver-* pod: USD oc rsh pod/keda-metrics-apiserver-<hash> -n keda For example: USD oc rsh pod/keda-metrics-apiserver-65c7cc44fd-rrl4r -n keda Change to the /var/audit-policy/ directory: sh-4.4USD cd /var/audit-policy/ List the available logs: sh-4.4USD ls Example output log-2023.02.17-14:50 policy.yaml View the log, as needed: sh-4.4USD cat <log_name>/<pvc_name>|grep -i <log_level> 1 1 Optional: You can use the grep command to specify the log level to display: Metadata , Request , RequestResponse . For example: sh-4.4USD cat log-2023.02.17-14:50/pvc-audit-log|grep -i Request Example output 3.8. Gathering debugging data When opening a support case, it is helpful to provide debugging information about your cluster to Red Hat Support. To help troubleshoot your issue, provide the following information: Data gathered using the must-gather tool. The unique cluster ID. 
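One way to look up the cluster ID, assuming you are logged in with the oc CLI, is to read it from the ClusterVersion object; this is a general OpenShift command and is not specific to the Custom Metrics Autoscaler Operator:
oc get clusterversion version -o jsonpath='{.spec.clusterID}{"\n"}'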
You can use the must-gather tool to collect data about the Custom Metrics Autoscaler Operator and its components, including the following items: The keda namespace and its child objects. The Custom Metric Autoscaler Operator installation objects. The Custom Metric Autoscaler Operator CRD objects. 3.8.1. Gathering debugging data The following command runs the must-gather tool for the Custom Metrics Autoscaler Operator: USD oc adm must-gather --image="USD(oc get packagemanifests openshift-custom-metrics-autoscaler-operator \ -n openshift-marketplace \ -o jsonpath='{.status.channels[?(@.name=="stable")].currentCSVDesc.annotations.containerImage}')" Note The standard Red Hat OpenShift Service on AWS must-gather command, oc adm must-gather , does not collect Custom Metrics Autoscaler Operator data. Prerequisites You are logged in to Red Hat OpenShift Service on AWS as a user with the dedicated-admin role. The Red Hat OpenShift Service on AWS CLI ( oc ) installed. Procedure Navigate to the directory where you want to store the must-gather data. Perform one of the following: To get only the Custom Metrics Autoscaler Operator must-gather data, use the following command: USD oc adm must-gather --image="USD(oc get packagemanifests openshift-custom-metrics-autoscaler-operator \ -n openshift-marketplace \ -o jsonpath='{.status.channels[?(@.name=="stable")].currentCSVDesc.annotations.containerImage}')" The custom image for the must-gather command is pulled directly from the Operator package manifests, so that it works on any cluster where the Custom Metric Autoscaler Operator is available. To gather the default must-gather data in addition to the Custom Metric Autoscaler Operator information: Use the following command to obtain the Custom Metrics Autoscaler Operator image and set it as an environment variable: USD IMAGE="USD(oc get packagemanifests openshift-custom-metrics-autoscaler-operator \ -n openshift-marketplace \ -o jsonpath='{.status.channels[?(@.name=="stable")].currentCSVDesc.annotations.containerImage}')" Use the oc adm must-gather with the Custom Metrics Autoscaler Operator image: USD oc adm must-gather --image-stream=openshift/must-gather --image=USD{IMAGE} Example 3.1. 
Example must-gather output for the Custom Metric Autoscaler └── keda ├── apps │ ├── daemonsets.yaml │ ├── deployments.yaml │ ├── replicasets.yaml │ └── statefulsets.yaml ├── apps.openshift.io │ └── deploymentconfigs.yaml ├── autoscaling │ └── horizontalpodautoscalers.yaml ├── batch │ ├── cronjobs.yaml │ └── jobs.yaml ├── build.openshift.io │ ├── buildconfigs.yaml │ └── builds.yaml ├── core │ ├── configmaps.yaml │ ├── endpoints.yaml │ ├── events.yaml │ ├── persistentvolumeclaims.yaml │ ├── pods.yaml │ ├── replicationcontrollers.yaml │ ├── secrets.yaml │ └── services.yaml ├── discovery.k8s.io │ └── endpointslices.yaml ├── image.openshift.io │ └── imagestreams.yaml ├── k8s.ovn.org │ ├── egressfirewalls.yaml │ └── egressqoses.yaml ├── keda.sh │ ├── kedacontrollers │ │ └── keda.yaml │ ├── scaledobjects │ │ └── example-scaledobject.yaml │ └── triggerauthentications │ └── example-triggerauthentication.yaml ├── monitoring.coreos.com │ └── servicemonitors.yaml ├── networking.k8s.io │ └── networkpolicies.yaml ├── keda.yaml ├── pods │ ├── custom-metrics-autoscaler-operator-58bd9f458-ptgwx │ │ ├── custom-metrics-autoscaler-operator │ │ │ └── custom-metrics-autoscaler-operator │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── .insecure.log │ │ │ └── .log │ │ └── custom-metrics-autoscaler-operator-58bd9f458-ptgwx.yaml │ ├── custom-metrics-autoscaler-operator-58bd9f458-thbsh │ │ └── custom-metrics-autoscaler-operator │ │ └── custom-metrics-autoscaler-operator │ │ └── logs │ ├── keda-metrics-apiserver-65c7cc44fd-6wq4g │ │ ├── keda-metrics-apiserver │ │ │ └── keda-metrics-apiserver │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── .insecure.log │ │ │ └── .log │ │ └── keda-metrics-apiserver-65c7cc44fd-6wq4g.yaml │ └── keda-operator-776cbb6768-fb6m5 │ ├── keda-operator │ │ └── keda-operator │ │ └── logs │ │ ├── current.log │ │ ├── .insecure.log │ │ └── .log │ └── keda-operator-776cbb6768-fb6m5.yaml ├── policy │ └── poddisruptionbudgets.yaml └── route.openshift.io └── routes.yaml Create a compressed file from the must-gather directory that was created in your working directory. For example, on a computer that uses a Linux operating system, run the following command: USD tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ 1 1 Replace must-gather-local.5421342344627712289/ with the actual directory name. Attach the compressed file to your support case on the Red Hat Customer Portal . 3.9. Viewing Operator metrics The Custom Metrics Autoscaler Operator exposes ready-to-use metrics that it pulls from the on-cluster monitoring component. You can query the metrics by using the Prometheus Query Language (PromQL) to analyze and diagnose issues. All metrics are reset when the controller pod restarts. 3.9.1. Accessing performance metrics You can access the metrics and run queries by using the Red Hat OpenShift Service on AWS web console. Procedure Select the Administrator perspective in the Red Hat OpenShift Service on AWS web console. Select Observe Metrics . To create a custom query, add your PromQL query to the Expression field. To add multiple queries, select Add Query . 3.9.1.1. Provided Operator metrics The Custom Metrics Autoscaler Operator exposes the following metrics, which you can view by using the Red Hat OpenShift Service on AWS web console. Table 3.1. Custom Metric Autoscaler Operator metrics Metric name Description keda_scaler_activity Whether the particular scaler is active or inactive. A value of 1 indicates the scaler is active; a value of 0 indicates the scaler is inactive. 
keda_scaler_metrics_value The current value for each scaler's metric, which is used by the Horizontal Pod Autoscaler (HPA) in computing the target average. keda_scaler_metrics_latency The latency of retrieving the current metric from each scaler. keda_scaler_errors The number of errors that have occurred for each scaler. keda_scaler_errors_total The total number of errors encountered for all scalers. keda_scaled_object_errors The number of errors that have occurred for each scaled object. keda_resource_totals The total number of Custom Metrics Autoscaler custom resources in each namespace for each custom resource type. keda_trigger_totals The total number of triggers by trigger type. Custom Metrics Autoscaler Admission webhook metrics The Custom Metrics Autoscaler Admission webhook also exposes the following Prometheus metrics. Metric name Description keda_scaled_object_validation_total The number of scaled object validations. keda_scaled_object_validation_errors The number of validation errors. 3.10. Understanding how to add custom metrics autoscalers To add a custom metrics autoscaler, create a ScaledObject custom resource for a deployment, stateful set, or custom resource. Create a ScaledJob custom resource for a job. You can create only one scaled object for each workload that you want to scale. Also, you cannot use a scaled object and the horizontal pod autoscaler (HPA) on the same workload. 3.10.1. Adding a custom metrics autoscaler to a workload You can create a custom metrics autoscaler for a workload that is created by a Deployment , StatefulSet , or custom resource object. Prerequisites The Custom Metrics Autoscaler Operator must be installed. If you use a custom metrics autoscaler for scaling based on CPU or memory: Your cluster administrator must have properly configured cluster metrics. You can use the oc describe PodMetrics <pod-name> command to determine if metrics are configured. If metrics are configured, the output appears similar to the following, with CPU and Memory displayed under Usage. USD oc describe PodMetrics openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Example output Name: openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Namespace: openshift-kube-scheduler Labels: <none> Annotations: <none> API Version: metrics.k8s.io/v1beta1 Containers: Name: wait-for-host-port Usage: Memory: 0 Name: scheduler Usage: Cpu: 8m Memory: 45440Ki Kind: PodMetrics Metadata: Creation Timestamp: 2019-05-23T18:47:56Z Self Link: /apis/metrics.k8s.io/v1beta1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Timestamp: 2019-05-23T18:47:56Z Window: 1m0s Events: <none> The pods associated with the object you want to scale must include specified memory and CPU limits. For example: Example pod spec apiVersion: v1 kind: Pod # ... spec: containers: - name: app image: images.my-company.example/app:v4 resources: limits: memory: "128Mi" cpu: "500m" # ... Procedure Create a YAML file similar to the following.
Only the name <2> , object name <4> , and object kind <5> are required: Example scaled object apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: "0" 1 name: scaledobject 2 namespace: my-namespace spec: scaleTargetRef: apiVersion: apps/v1 3 name: example-deployment 4 kind: Deployment 5 envSourceContainerName: .spec.template.spec.containers[0] 6 cooldownPeriod: 200 7 maxReplicaCount: 100 8 minReplicaCount: 0 9 metricsServer: 10 auditConfig: logFormat: "json" logOutputVolumeClaim: "persistentVolumeClaimName" policy: rules: - level: Metadata omitStages: "RequestReceived" omitManagedFields: false lifetime: maxAge: "2" maxBackup: "1" maxSize: "50" fallback: 11 failureThreshold: 3 replicas: 6 pollingInterval: 30 12 advanced: restoreToOriginalReplicaCount: false 13 horizontalPodAutoscalerConfig: name: keda-hpa-scale-down 14 behavior: 15 scaleDown: stabilizationWindowSeconds: 300 policies: - type: Percent value: 100 periodSeconds: 15 triggers: - type: prometheus 16 metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 namespace: kedatest metricName: http_requests_total threshold: '5' query: sum(rate(http_requests_total{job="test-app"}[1m])) authModes: basic authenticationRef: 17 name: prom-triggerauthentication kind: TriggerAuthentication 1 Optional: Specifies that the Custom Metrics Autoscaler Operator is to scale the replicas to the specified value and stop autoscaling, as described in the "Pausing the custom metrics autoscaler for a workload" section. 2 Specifies a name for this custom metrics autoscaler. 3 Optional: Specifies the API version of the target resource. The default is apps/v1 . 4 Specifies the name of the object that you want to scale. 5 Specifies the kind as Deployment , StatefulSet or CustomResource . 6 Optional: Specifies the name of the container in the target resource, from which the custom metrics autoscaler gets environment variables holding secrets and so forth. The default is .spec.template.spec.containers[0] . 7 Optional. Specifies the period in seconds to wait after the last trigger is reported before scaling the deployment back to 0 if the minReplicaCount is set to 0 . The default is 300 . 8 Optional: Specifies the maximum number of replicas when scaling up. The default is 100 . 9 Optional: Specifies the minimum number of replicas when scaling down. 10 Optional: Specifies the parameters for audit logs. as described in the "Configuring audit logging" section. 11 Optional: Specifies the number of replicas to fall back to if a scaler fails to get metrics from the source for the number of times defined by the failureThreshold parameter. For more information on fallback behavior, see the KEDA documentation . 12 Optional: Specifies the interval in seconds to check each trigger on. The default is 30 . 13 Optional: Specifies whether to scale back the target resource to the original replica count after the scaled object is deleted. The default is false , which keeps the replica count as it is when the scaled object is deleted. 14 Optional: Specifies a name for the horizontal pod autoscaler. The default is keda-hpa-{scaled-object-name} . 15 Optional: Specifies a scaling policy to use to control the rate to scale pods up or down, as described in the "Scaling policies" section. 16 Specifies the trigger to use as the basis for scaling, as described in the "Understanding the custom metrics autoscaler triggers" section. This example uses Red Hat OpenShift Service on AWS monitoring. 
17 Optional: Specifies a trigger authentication or a cluster trigger authentication. For more information, see Understanding the custom metrics autoscaler trigger authentication in the Additional resources section. Enter TriggerAuthentication to use a trigger authentication. This is the default. Enter ClusterTriggerAuthentication to use a cluster trigger authentication. Create the custom metrics autoscaler by running the following command: USD oc create -f <filename>.yaml Verification View the command output to verify that the custom metrics autoscaler was created: USD oc get scaledobject <scaled_object_name> Example output NAME SCALETARGETKIND SCALETARGETNAME MIN MAX TRIGGERS AUTHENTICATION READY ACTIVE FALLBACK AGE scaledobject apps/v1.Deployment example-deployment 0 50 prometheus prom-triggerauthentication True True True 17s Note the following fields in the output: TRIGGERS : Indicates the trigger, or scaler, that is being used. AUTHENTICATION : Indicates the name of any trigger authentication being used. READY : Indicates whether the scaled object is ready to start scaling: If True , the scaled object is ready. If False , the scaled object is not ready because of a problem in one or more of the objects you created. ACTIVE : Indicates whether scaling is taking place: If True , scaling is taking place. If False , scaling is not taking place because there are no metrics or there is a problem in one or more of the objects you created. FALLBACK : Indicates whether the custom metrics autoscaler is able to get metrics from the source. If False , the custom metrics autoscaler is getting metrics. If True , the custom metrics autoscaler is not getting metrics because there are no metrics or there is a problem in one or more of the objects you created. 3.10.2. Additional resources Understanding custom metrics autoscaler trigger authentications 3.11. Removing the Custom Metrics Autoscaler Operator You can remove the custom metrics autoscaler from your Red Hat OpenShift Service on AWS cluster. After removing the Custom Metrics Autoscaler Operator, remove other components associated with the Operator to avoid potential issues. Note Delete the KedaController custom resource (CR) first. If you do not delete the KedaController CR, Red Hat OpenShift Service on AWS can hang when you delete the keda project. If you delete the Custom Metrics Autoscaler Operator before deleting the CR, you are not able to delete the CR. 3.11.1. Uninstalling the Custom Metrics Autoscaler Operator Use the following procedure to remove the custom metrics autoscaler from your Red Hat OpenShift Service on AWS cluster. Prerequisites The Custom Metrics Autoscaler Operator must be installed. Procedure In the Red Hat OpenShift Service on AWS web console, click Operators Installed Operators . Switch to the keda project. Remove the KedaController custom resource. Find the CustomMetricsAutoscaler Operator and click the KedaController tab. Find the custom resource, and then click Delete KedaController . Click Uninstall . Remove the Custom Metrics Autoscaler Operator: Click Operators Installed Operators . Find the CustomMetricsAutoscaler Operator and click the Options menu and select Uninstall Operator . Click Uninstall .
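As an optional check that is not part of the documented procedure, you can confirm that the Operator workloads were removed before cleaning up the remaining components:
oc get pods -n keda
If the uninstall succeeded, no custom-metrics-autoscaler-operator, keda-metrics-apiserver, or keda-operator pods are listed.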
Optional: Use the OpenShift CLI to remove the custom metrics autoscaler components: Delete the custom metrics autoscaler CRDs: clustertriggerauthentications.keda.sh kedacontrollers.keda.sh scaledjobs.keda.sh scaledobjects.keda.sh triggerauthentications.keda.sh USD oc delete crd clustertriggerauthentications.keda.sh kedacontrollers.keda.sh scaledjobs.keda.sh scaledobjects.keda.sh triggerauthentications.keda.sh Deleting the CRDs removes the associated roles, cluster roles, and role bindings. However, there might be a few cluster roles that must be manually deleted. List any custom metrics autoscaler cluster roles: USD oc get clusterrole | grep keda.sh Delete the listed custom metrics autoscaler cluster roles. For example: USD oc delete clusterrole.keda.sh-v1alpha1-admin List any custom metrics autoscaler cluster role bindings: USD oc get clusterrolebinding | grep keda.sh Delete the listed custom metrics autoscaler cluster role bindings. For example: USD oc delete clusterrolebinding.keda.sh-v1alpha1-admin Delete the custom metrics autoscaler project: USD oc delete project keda Delete the Cluster Metric Autoscaler Operator: USD oc delete operator/openshift-custom-metrics-autoscaler-operator.keda
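As a final optional verification, not part of the documented procedure, confirm that the KEDA custom resource definitions and project are gone:
oc get crd | grep keda.sh
oc get project keda
The first command should return no output, and the second should report that the project is not found.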
[ "oc delete crd scaledobjects.keda.k8s.io", "oc delete crd triggerauthentications.keda.k8s.io", "oc create configmap -n openshift-keda thanos-cert --from-file=ca-cert.pem", "oc get all -n keda", "NAME READY STATUS RESTARTS AGE pod/custom-metrics-autoscaler-operator-5fd8d9ffd8-xt4xp 1/1 Running 0 18m NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/custom-metrics-autoscaler-operator 1/1 1 1 18m NAME DESIRED CURRENT READY AGE replicaset.apps/custom-metrics-autoscaler-operator-5fd8d9ffd8 1 1 1 18m", "kind: KedaController apiVersion: keda.sh/v1alpha1 metadata: name: keda namespace: keda spec: watchNamespace: '' 1 operator: logLevel: info 2 logEncoder: console 3 caConfigMaps: 4 - thanos-cert - kafka-cert metricsServer: logLevel: '0' 5 auditConfig: 6 logFormat: \"json\" logOutputVolumeClaim: \"persistentVolumeClaimName\" policy: rules: - level: Metadata omitStages: [\"RequestReceived\"] omitManagedFields: false lifetime: maxAge: \"2\" maxBackup: \"1\" maxSize: \"50\" serviceAccount: {}", "apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: prom-scaledobject namespace: my-namespace spec: triggers: - type: prometheus 1 metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 2 namespace: kedatest 3 metricName: http_requests_total 4 threshold: '5' 5 query: sum(rate(http_requests_total{job=\"test-app\"}[1m])) 6 authModes: basic 7 cortexOrgID: my-org 8 ignoreNullValues: \"false\" 9 unsafeSsl: \"false\" 10", "oc project <project_name> 1", "oc create serviceaccount thanos 1", "apiVersion: v1 kind: Secret metadata: name: thanos-token annotations: kubernetes.io/service-account.name: thanos 1 type: kubernetes.io/service-account-token", "oc create -f <file_name>.yaml", "oc describe serviceaccount thanos 1", "Name: thanos Namespace: <namespace_name> Labels: <none> Annotations: <none> Image pull secrets: thanos-dockercfg-nnwgj Mountable secrets: thanos-dockercfg-nnwgj Tokens: thanos-token 1 Events: <none>", "apiVersion: keda.sh/v1alpha1 kind: <authentication_method> 1 metadata: name: keda-trigger-auth-prometheus spec: secretTargetRef: 2 - parameter: bearerToken 3 name: thanos-token 4 key: token 5 - parameter: ca name: thanos-token key: ca.crt", "oc create -f <file-name>.yaml", "apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: thanos-metrics-reader rules: - apiGroups: - \"\" resources: - pods verbs: - get - apiGroups: - metrics.k8s.io resources: - pods - nodes verbs: - get - list - watch", "oc create -f <file-name>.yaml", "apiVersion: rbac.authorization.k8s.io/v1 kind: <binding_type> 1 metadata: name: thanos-metrics-reader 2 namespace: my-project 3 roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: thanos-metrics-reader subjects: - kind: ServiceAccount name: thanos 4 namespace: <namespace_name> 5", "oc create -f <file-name>.yaml", "apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: cpu-scaledobject namespace: my-namespace spec: triggers: - type: cpu 1 metricType: Utilization 2 metadata: value: '60' 3 minReplicaCount: 1 4", "apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: memory-scaledobject namespace: my-namespace spec: triggers: - type: memory 1 metricType: Utilization 2 metadata: value: '60' 3 containerName: api 4", "apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: kafka-scaledobject namespace: my-namespace spec: triggers: - type: kafka 1 metadata: topic: my-topic 2 bootstrapServers: my-cluster-kafka-bootstrap.openshift-operators.svc:9092 3 consumerGroup: my-group 4 lagThreshold: '10' 
5 activationLagThreshold: '5' 6 offsetResetPolicy: latest 7 allowIdleConsumers: true 8 scaleToZeroOnInvalidOffset: false 9 excludePersistentLag: false 10 version: '1.0.0' 11 partitionLimitation: '1,2,10-20,31' 12 tls: enable 13", "apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: cron-scaledobject namespace: default spec: scaleTargetRef: name: my-deployment minReplicaCount: 0 1 maxReplicaCount: 100 2 cooldownPeriod: 300 triggers: - type: cron 3 metadata: timezone: Asia/Kolkata 4 start: \"0 6 * * *\" 5 end: \"30 18 * * *\" 6 desiredReplicas: \"100\" 7", "apiVersion: v1 kind: Secret metadata: name: my-basic-secret namespace: default data: username: \"dXNlcm5hbWU=\" 1 password: \"cGFzc3dvcmQ=\"", "kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: secret-triggerauthentication namespace: my-namespace 1 spec: secretTargetRef: 2 - parameter: username 3 name: my-basic-secret 4 key: username 5 - parameter: password name: my-basic-secret key: password", "kind: ClusterTriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: 1 name: secret-cluster-triggerauthentication spec: secretTargetRef: 2 - parameter: username 3 name: my-basic-secret 4 key: username 5 - parameter: password name: my-basic-secret key: password", "apiVersion: v1 kind: Secret metadata: name: my-secret namespace: my-namespace data: ca-cert.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0... 1 client-cert.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0... 2 client-key.pem: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0t", "kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: secret-triggerauthentication namespace: my-namespace 1 spec: secretTargetRef: 2 - parameter: key 3 name: my-secret 4 key: client-key.pem 5 - parameter: ca 6 name: my-secret 7 key: ca-cert.pem 8", "apiVersion: v1 kind: Secret metadata: name: my-secret namespace: my-namespace data: bearerToken: \"eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXV\" 1", "kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: token-triggerauthentication namespace: my-namespace 1 spec: secretTargetRef: 2 - parameter: bearerToken 3 name: my-secret 4 key: bearerToken 5", "kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: env-var-triggerauthentication namespace: my-namespace 1 spec: env: 2 - parameter: access_key 3 name: ACCESS_KEY 4 containerName: my-container 5", "kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: pod-id-triggerauthentication namespace: my-namespace 1 spec: podIdentity: 2 provider: aws-eks 3", "apiVersion: v1 kind: Secret metadata: name: my-secret data: user-name: <base64_USER_NAME> password: <base64_USER_PASSWORD>", "kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: prom-triggerauthentication namespace: my-namespace spec: secretTargetRef: - parameter: user-name name: my-secret key: USER_NAME - parameter: password name: my-secret key: USER_PASSWORD", "oc create -f <filename>.yaml", "apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: scaledobject namespace: my-namespace spec: scaleTargetRef: name: example-deployment maxReplicaCount: 100 minReplicaCount: 0 pollingInterval: 30 triggers: - type: prometheus metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 namespace: kedatest # replace <NAMESPACE> metricName: http_requests_total threshold: '5' query: sum(rate(http_requests_total{job=\"test-app\"}[1m])) authModes: \"basic\" authenticationRef: name: prom-triggerauthentication 1 kind: TriggerAuthentication 2", "apiVersion: 
keda.sh/v1alpha1 kind: ScaledObject metadata: name: scaledobject namespace: my-namespace spec: scaleTargetRef: name: example-deployment maxReplicaCount: 100 minReplicaCount: 0 pollingInterval: 30 triggers: - type: prometheus metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 namespace: kedatest # replace <NAMESPACE> metricName: http_requests_total threshold: '5' query: sum(rate(http_requests_total{job=\"test-app\"}[1m])) authModes: \"basic\" authenticationRef: name: prom-cluster-triggerauthentication 1 kind: ClusterTriggerAuthentication 2", "oc apply -f <filename>", "apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: \"4\"", "oc edit ScaledObject scaledobject", "apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: \"4\" 1 creationTimestamp: \"2023-02-08T14:41:01Z\" generation: 1 name: scaledobject namespace: my-project resourceVersion: '65729' uid: f5aec682-acdf-4232-a783-58b5b82f5dd0", "apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: \"4\"", "oc edit ScaledObject scaledobject", "apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: \"4\" 1 creationTimestamp: \"2023-02-08T14:41:01Z\" generation: 1 name: scaledobject namespace: my-project resourceVersion: '65729' uid: f5aec682-acdf-4232-a783-58b5b82f5dd0", "kind: KedaController apiVersion: keda.sh/v1alpha1 metadata: name: keda namespace: keda spec: metricsServer: auditConfig: logFormat: \"json\" 1 logOutputVolumeClaim: \"pvc-audit-log\" 2 policy: rules: 3 - level: Metadata omitStages: \"RequestReceived\" 4 omitManagedFields: false 5 lifetime: 6 maxAge: \"2\" maxBackup: \"1\" maxSize: \"50\"", "get pod -n keda", "NAME READY STATUS RESTARTS AGE custom-metrics-autoscaler-operator-5cb44cd75d-9v4lv 1/1 Running 0 8m20s keda-metrics-apiserver-65c7cc44fd-rrl4r 1/1 Running 0 2m55s keda-operator-776cbb6768-zpj5b 1/1 Running 0 2m55s", "oc logs keda-metrics-apiserver-<hash>|grep -i metadata 1", "oc logs keda-metrics-apiserver-65c7cc44fd-rrl4r|grep -i metadata", "{\"kind\":\"Event\",\"apiVersion\":\"audit.k8s.io/v1\",\"level\":\"Metadata\",\"auditID\":\"4c81d41b-3dab-4675-90ce-20b87ce24013\",\"stage\":\"ResponseComplete\",\"requestURI\":\"/healthz\",\"verb\":\"get\",\"user\":{\"username\":\"system:anonymous\",\"groups\":[\"system:unauthenticated\"]},\"sourceIPs\":[\"10.131.0.1\"],\"userAgent\":\"kube-probe/1.28\",\"responseStatus\":{\"metadata\":{},\"code\":200},\"requestReceivedTimestamp\":\"2023-02-16T13:00:03.554567Z\",\"stageTimestamp\":\"2023-02-16T13:00:03.555032Z\",\"annotations\":{\"authorization.k8s.io/decision\":\"allow\",\"authorization.k8s.io/reason\":\"\"}}", "oc rsh pod/keda-metrics-apiserver-<hash> -n keda", "oc rsh pod/keda-metrics-apiserver-65c7cc44fd-rrl4r -n keda", "sh-4.4USD cd /var/audit-policy/", "sh-4.4USD ls", "log-2023.02.17-14:50 policy.yaml", "sh-4.4USD cat <log_name>/<pvc_name>|grep -i <log_level> 1", "sh-4.4USD cat log-2023.02.17-14:50/pvc-audit-log|grep -i Request", 
"{\"kind\":\"Event\",\"apiVersion\":\"audit.k8s.io/v1\",\"level\":\"Request\",\"auditID\":\"63e7f68c-04ec-4f4d-8749-bf1656572a41\",\"stage\":\"ResponseComplete\",\"requestURI\":\"/openapi/v2\",\"verb\":\"get\",\"user\":{\"username\":\"system:aggregator\",\"groups\":[\"system:authenticated\"]},\"sourceIPs\":[\"10.128.0.1\"],\"responseStatus\":{\"metadata\":{},\"code\":304},\"requestReceivedTimestamp\":\"2023-02-17T13:12:55.035478Z\",\"stageTimestamp\":\"2023-02-17T13:12:55.038346Z\",\"annotations\":{\"authorization.k8s.io/decision\":\"allow\",\"authorization.k8s.io/reason\":\"RBAC: allowed by ClusterRoleBinding \\\"system:discovery\\\" of ClusterRole \\\"system:discovery\\\" to Group \\\"system:authenticated\\\"\"}}", "oc adm must-gather --image=\"USD(oc get packagemanifests openshift-custom-metrics-autoscaler-operator -n openshift-marketplace -o jsonpath='{.status.channels[?(@.name==\"stable\")].currentCSVDesc.annotations.containerImage}')\"", "oc adm must-gather --image=\"USD(oc get packagemanifests openshift-custom-metrics-autoscaler-operator -n openshift-marketplace -o jsonpath='{.status.channels[?(@.name==\"stable\")].currentCSVDesc.annotations.containerImage}')\"", "IMAGE=\"USD(oc get packagemanifests openshift-custom-metrics-autoscaler-operator -n openshift-marketplace -o jsonpath='{.status.channels[?(@.name==\"stable\")].currentCSVDesc.annotations.containerImage}')\"", "oc adm must-gather --image-stream=openshift/must-gather --image=USD{IMAGE}", "└── keda ├── apps │ ├── daemonsets.yaml │ ├── deployments.yaml │ ├── replicasets.yaml │ └── statefulsets.yaml ├── apps.openshift.io │ └── deploymentconfigs.yaml ├── autoscaling │ └── horizontalpodautoscalers.yaml ├── batch │ ├── cronjobs.yaml │ └── jobs.yaml ├── build.openshift.io │ ├── buildconfigs.yaml │ └── builds.yaml ├── core │ ├── configmaps.yaml │ ├── endpoints.yaml │ ├── events.yaml │ ├── persistentvolumeclaims.yaml │ ├── pods.yaml │ ├── replicationcontrollers.yaml │ ├── secrets.yaml │ └── services.yaml ├── discovery.k8s.io │ └── endpointslices.yaml ├── image.openshift.io │ └── imagestreams.yaml ├── k8s.ovn.org │ ├── egressfirewalls.yaml │ └── egressqoses.yaml ├── keda.sh │ ├── kedacontrollers │ │ └── keda.yaml │ ├── scaledobjects │ │ └── example-scaledobject.yaml │ └── triggerauthentications │ └── example-triggerauthentication.yaml ├── monitoring.coreos.com │ └── servicemonitors.yaml ├── networking.k8s.io │ └── networkpolicies.yaml ├── keda.yaml ├── pods │ ├── custom-metrics-autoscaler-operator-58bd9f458-ptgwx │ │ ├── custom-metrics-autoscaler-operator │ │ │ └── custom-metrics-autoscaler-operator │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── previous.insecure.log │ │ │ └── previous.log │ │ └── custom-metrics-autoscaler-operator-58bd9f458-ptgwx.yaml │ ├── custom-metrics-autoscaler-operator-58bd9f458-thbsh │ │ └── custom-metrics-autoscaler-operator │ │ └── custom-metrics-autoscaler-operator │ │ └── logs │ ├── keda-metrics-apiserver-65c7cc44fd-6wq4g │ │ ├── keda-metrics-apiserver │ │ │ └── keda-metrics-apiserver │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── previous.insecure.log │ │ │ └── previous.log │ │ └── keda-metrics-apiserver-65c7cc44fd-6wq4g.yaml │ └── keda-operator-776cbb6768-fb6m5 │ ├── keda-operator │ │ └── keda-operator │ │ └── logs │ │ ├── current.log │ │ ├── previous.insecure.log │ │ └── previous.log │ └── keda-operator-776cbb6768-fb6m5.yaml ├── policy │ └── poddisruptionbudgets.yaml └── route.openshift.io └── routes.yaml", "tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ 1", "oc describe PodMetrics 
openshift-kube-scheduler-ip-10-0-135-131.ec2.internal", "Name: openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Namespace: openshift-kube-scheduler Labels: <none> Annotations: <none> API Version: metrics.k8s.io/v1beta1 Containers: Name: wait-for-host-port Usage: Memory: 0 Name: scheduler Usage: Cpu: 8m Memory: 45440Ki Kind: PodMetrics Metadata: Creation Timestamp: 2019-05-23T18:47:56Z Self Link: /apis/metrics.k8s.io/v1beta1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Timestamp: 2019-05-23T18:47:56Z Window: 1m0s Events: <none>", "apiVersion: v1 kind: Pod spec: containers: - name: app image: images.my-company.example/app:v4 resources: limits: memory: \"128Mi\" cpu: \"500m\"", "apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: \"0\" 1 name: scaledobject 2 namespace: my-namespace spec: scaleTargetRef: apiVersion: apps/v1 3 name: example-deployment 4 kind: Deployment 5 envSourceContainerName: .spec.template.spec.containers[0] 6 cooldownPeriod: 200 7 maxReplicaCount: 100 8 minReplicaCount: 0 9 metricsServer: 10 auditConfig: logFormat: \"json\" logOutputVolumeClaim: \"persistentVolumeClaimName\" policy: rules: - level: Metadata omitStages: \"RequestReceived\" omitManagedFields: false lifetime: maxAge: \"2\" maxBackup: \"1\" maxSize: \"50\" fallback: 11 failureThreshold: 3 replicas: 6 pollingInterval: 30 12 advanced: restoreToOriginalReplicaCount: false 13 horizontalPodAutoscalerConfig: name: keda-hpa-scale-down 14 behavior: 15 scaleDown: stabilizationWindowSeconds: 300 policies: - type: Percent value: 100 periodSeconds: 15 triggers: - type: prometheus 16 metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 namespace: kedatest metricName: http_requests_total threshold: '5' query: sum(rate(http_requests_total{job=\"test-app\"}[1m])) authModes: basic authenticationRef: 17 name: prom-triggerauthentication kind: TriggerAuthentication", "oc create -f <filename>.yaml", "oc get scaledobject <scaled_object_name>", "NAME SCALETARGETKIND SCALETARGETNAME MIN MAX TRIGGERS AUTHENTICATION READY ACTIVE FALLBACK AGE scaledobject apps/v1.Deployment example-deployment 0 50 prometheus prom-triggerauthentication True True True 17s", "oc delete crd clustertriggerauthentications.keda.sh kedacontrollers.keda.sh scaledjobs.keda.sh scaledobjects.keda.sh triggerauthentications.keda.sh", "oc get clusterrole | grep keda.sh", "oc delete clusterrole.keda.sh-v1alpha1-admin", "oc get clusterrolebinding | grep keda.sh", "oc delete clusterrolebinding.keda.sh-v1alpha1-admin", "oc delete project keda", "oc delete operator/openshift-custom-metrics-autoscaler-operator.keda" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/nodes/automatically-scaling-pods-with-the-custom-metrics-autoscaler-operator
Chapter 1. Introduction to Red Hat Cloud Instance policies
Chapter 1. Introduction to Red Hat Cloud Instance policies Use this guide to understand the technical and operational certification requirements for CCSP Partners who want to offer their own virtual and/or physical hardware with Red Hat products both on-demand or as-a-service to our joint Customers. 1.1. Audience As a Certified Cloud and Service Provider (CCSP) Partner who want to learn about the requirements of offering their own hardware. The Red Hat Cloud Instance Type certification presumes an advanced level of hardware and Red Hat product knowledge and skills. Important Red Hat product support is neither offered nor covered in the Red Hat Cloud Instance Type certification, but may be provided as part of your CCSP program membership or separately. 1.2. Create value for our joint customers The certification process includes a series of tests that provide your Red Hat customers: A consistent experience across cloud providers, A consistent experience across instances with similar features, and An assurance that the Instance Type is tested, verified and supported 1.3. Overview of program and product The Red Hat Cloud Instance Type Certification provides a formal means for you to work with Red Hat to establish an official support for your hardware. An Instance Type is required to include a set of compute, network, management, and storage features to establish a functioning system environment. You can select your specific feature requirements for an Instance Type according to the intended application or use cases at your own discretion. An Instance Type may also include one or more sizes if the features provided by the Instance are available in a variety of capacities such as, faster and slower networking, smaller and larger storage, more and less logical cores. During the certification process, Red Hat engineers follow the process described in Overview of test plan to create a test plan suitable for your Instance Type specification. The test plan defines the testing criteria required to achieve certification for the overall Instance Type including the sizes. 1.4. Certification prerequisites An active membership in the CCSP Program. If you are not already a member, visit Red Hat Connect for Business Partners to learn more and become a member. a support relationship with Red Hat. This can be fulfilled through: a custom support agreement or TSANet working knowledge of Red Hat Enterprise Linux (RHEL). Use of already certified or enabled virtual and physical hardware as presented to the Red Hat product. Certifications are currently available for: RHEL 8 and RHEL 9 versions. For the x86_64 Intel and AMD, ARM (by invitation), Power 8 & 9 Little Endian, and System Z platforms. Additional resources For more information about Technical Support Alliance Network, see TSANet web page . 1.5. Give feedback and get help We Need Feedback! If you find a typographical error in this guide, or if you think of a way to improve the certification program or documentation, we would appreciate hearing from you! Submit a report in the Red Hat Bugzilla system against the product Red Hat Certification Program . Fill in the Description field with your suggestion for improvement, try to be as specific as possible when describing it. If you find an error in the documentation include a link to the relevant part(s) of the documentation. Do You Need Help? If you experience difficulty with a procedure described in this documentation, visit the Red Hat Customer Portal . 
Through the customer portal, you can: Search or browse through technical support articles and solutions about Red Hat products Submit a support case to Red Hat Global Support Services (GSS) Access product documentation The Red Hat Cloud Instance Type Certification Workflow Guide assists you with the steps on Opening a Support Case. Questions During Certification During the certification process, you may need to ask or reply to a question about topics which affect a specific certification. These questions and responses of the certification entry are recorded in the Dialog Tab > Additional Comments . Important Personal emails are not a tracked support mechanism and do not include a Service Level Agreement. Please use the correct support procedure when asking questions.
null
https://docs.redhat.com/en/documentation/red_hat_certified_cloud_and_service_provider_certification/2025/html/red_hat_cloud_instance_type_policy_guide/assembly_introduction-cloud-instance-policy_instance-policy-guide
Chapter 18. Upgrading an overcloud on a Red Hat OpenShift Container Platform cluster with director Operator (16.2 to 17.1)
Chapter 18. Upgrading an overcloud on a Red Hat OpenShift Container Platform cluster with director Operator (16.2 to 17.1) You can upgrade your Red Hat OpenStack Platform (RHOSP) 16.2 overcloud to a RHOSP 17.1 overcloud with director Operator (OSPdO) by using the in-place framework for upgrades (FFU) workflow. To perform an upgrade, you must perform the following tasks: Prepare your environment for the upgrade. Update custom roles_data files to the composable services supported by RHOSP 17.1. Optional: Upgrade Red Hat Ceph Storage and adopt cephadm . Upgrade the overcloud nodes to run RHOSP 17.1 containers on RHEL 8. Upgrade the overcloud nodes to run RHOSP 17.1 containers on RHEL 9. Perform post-upgrade tasks. 18.1. Prerequisites You are using the latest version of OSPdO. The overcloud nodes are up-to-date with the latest RHOSP 16.2 version. For information about how to perform a minor update, see Performing a minor update of the RHOSP overcloud with director Operator . The overcloud nodes are running the latest RHEL 8 kernel. 18.2. Updating director Operator You must update your director Operator (OSPdO) to the latest 17.1 version before performing the overcloud upgrade. To update OSPdO, you must first delete and reinstall the current OSPdO. To delete OSPdO, you delete the OSPdO subscription and CSV. Procedure Check the current version of the director Operator in the currentCSV field: Delete the CSV for the director Operator in the target namespace: Replace <current_CSV> with the currentCSV value from step 1. Delete the subscription: Install the latest 17.1 director Operator. For information, see Installing director Operator . 18.3. Preparing your director Operator environment for upgrade You must prepare your director Operator (OSPdO) deployed Red Hat OpenStack Platform (RHOSP) environment for the upgrade to RHOSP 17.1. Procedure Set openStackRelease to 17.1 on the openstackcontrolplane CR: Retrieve the OSPdO ClusterServiceVersion ( csv ) CR: Delete all instances of the OpenStackConfigGenerator CR: If your deployment includes HCI, the adoption from ceph-ansible to cephadm must be performed using the RHOSP 17.1 on RHEL8 openstackclient image: If your deployment does not include HCI, or the cephadm adoption has already been completed, then switch to the 17.1 OSPdO default openstackclient image by removing the current imageURL from the openstackclient CR: If you have enabled fencing in the overcloud, you must temporarily disable fencing on one of the Controller nodes for the duration of the upgrade: 18.4. Updating composable services in custom roles_data files You must update your roles_data files to the supported Red Hat OpenStack Platform (RHOSP) 17.1 composable services. For more information, see Updating composable services in custom roles_data files in the Framework for Upgrades (16.2 to 17.1) guide. Procedure Remove the following services from all roles: Add the OS::TripleO::Services::GlanceApiInternal service to your Controller role. Update the OS::TripleO::Services::NovaLibvirt service on the Compute roles to OS::TripleO::Services::NovaLibvirtLegacy . If your environment includes Red Hat Ceph Storage, set the DeployedCeph parameter to false to enable director-managed cephadm deployments. If your network configuration templates include the following functions, you must manually convert your NIC templates to Jinja2 Ansible format before you upgrade the overcloud. 
The following functions are not supported with automatic conversion: For more information about manually converting your NIC templates, see Manually converting NIC templates to Jinja2 Ansible format in Installing and managing Red Hat OpenStack Platform with director . 18.5. Upgrading Red Hat Ceph Storage and adopting cephadm If your environment includes Red Hat Ceph Storage deployments, you must upgrade your deployment to Red Hat Ceph Storage 5. With an upgrade to version 5, cephadm now manages Red Hat Ceph Storage instead of ceph-ansible . Procedure Create an Ansible playbook file named ceph-admin-user-playbook.yaml to create a ceph-admin user on the overcloud nodes. Add the following configuration to the ceph-admin-user-playbook.yaml file: Copy the playbook to the openstackclient container: Run the playbook on the openstackclient container: Update the Red Hat Ceph Storage container image parameters in the containers-prepare-parameter.yaml file for the version of Red Hat Ceph Storage that your deployment uses: Replace <ceph_image_file> with the name of the image file for the version of Red Hat Ceph Storage that your deployment uses: Red Hat Ceph Storage 5: rhceph-5-rhel8 Replace <grafana_image_file> with the name of the image file for the version of Red Hat Ceph Storage that your deployment uses: Red Hat Ceph Storage 5: rhceph-5-dashboard-rhel8 If your deployment includes HCI, update the CephAnsibleRepo parameter in compute-hci.yaml to "rhelosp-ceph-5-tools". Create an environment file named upgrade.yaml and add the following configuration to it: Create a new OpenStackConfigGenerator CR named ceph-upgrade that includes the updated environment file and tripleo-tarball ConfigMaps. Create a file named openstack-ceph-upgrade.yaml on your workstation to define an OpenStackDeploy CR for the upgrade from Red Hat Ceph Storage 4 to 5: Save the openstack-ceph-upgrade.yaml file. Create the OpenStackDeploy resource: Wait for the deployment to finish. Create a file named openstack-ceph-upgrade-packages.yaml on your workstation to define an OpenStackDeploy CR that upgrades the Red Hat Ceph Storage packages: Save the openstack-ceph-upgrade-packages.yaml file. Create the OpenStackDeploy resource: Wait for the deployment to finish. Create a file named openstack-ceph-upgrade-to-cephadm.yaml on your workstation to define an OpenStackDeploy CR that runs the cephadm adoption: Save the openstack-ceph-upgrade-to-cephadm.yaml file. Create the OpenStackDeploy resource: Wait for the deployment to finish. Update the openstackclient image to the RHEL9 container image by removing the current imageURL from the openstackclient CR: 18.6. Upgrading the overcloud to RHOSP17.1 on RHEL8 To upgrade the overcloud nodes to run RHOSP 17.1 containers on RHEL 8 you must update the container preparation file, which is the file that contains the ContainerImagePrepare parameter. You use this file to define the rules for obtaining container images for the overcloud. You must update your container preparation file for both RHEL 8 and RHEL 9 hosts: RHEL 9 hosts: All containers are based on RHEL9. RHEL 8 hosts: All containers are based on RHEL9 except for libvirt and collectd . The libvirt and collectd containers must use the same base as the host. You must then generate a new OpenStackConfigGenerator CR before deploying the updates. Procedure Open the container preparation file, containers-prepare-parameter.yaml , and check that it obtains the correct image versions. 
Add the ContainerImagePrepareRhel8 parameter to containers-prepare-parameter.yaml : Create an environment file named upgrade.yaml . Add the following configuration to the upgrade.yaml file: Create an environment file named disable_compute_service_check.yaml . Add the following configuration to the disable_compute_service_check.yaml file: If your deployment includes HCI, update the Red Hat Ceph Storage and HCI parameters from ceph-ansible values in RHOSP 16.2 to cephadm values in RHOSP 17.1. For more information, see Custom environment file for configuring Hyperconverged Infrastructure (HCI) storage in director Operator . Create a file named openstack-configgen-upgrade.yaml on your workstation that defines a new OpenStackConfigGenerator CR named "upgrade": Create a file named openstack-upgrade.yaml on your workstation to create an OpenStackDeploy CR for the overcloud upgrade: Save the openstack-upgrade.yaml file. Create the OpenStackDeploy resource: Wait for the deployment to finish. The overcloud nodes are now running 17.1 containers on RHEL 8. 18.7. Upgrading the overcloud to RHEL 9 To upgrade the overcloud nodes to run RHOSP 17.1 containers on RHEL 9, you must update the container preparation file, which is the file that contains the ContainerImagePrepare parameter. You use this file to define the rules for obtaining container images for the overcloud. You must then generate a new OpenStackConfigGenerator CR before deploying the updates. Procedure Open the container preparation file, containers-prepare-parameter.yaml , and check that it obtains the correct image versions. Remove the following role specific overrides from the containers-prepare-parameter.yaml file: Open the roles_data.yaml file and replace OS::TripleO::Services::NovaLibvirtLegacy with OS::TripleO::Services::NovaLibvirt . Create an environment file named skip_rhel_release.yaml , and add the following configuration: Create an environment file named system_upgrade.yaml and add the following configuration: For more information on the recommended Leapp parameters, see Upgrade parameters in the Framework for upgrades (16.2 to 17.1) guide. Create a new OpenStackConfigGenerator CR named system-upgrade that includes the updated heat environment and tripleo tarball ConfigMaps. Create a file named openstack-controller0-upgrade.yaml on your workstation to define an OpenStackDeploy CR for the first controller node: Save the openstack-controller0-upgrade.yaml file. Create the OpenStackDeploy resource to run the system upgrade on Controller 0: Wait for the deployment to finish. Create a file named openstack-controller1-upgrade.yaml on your workstation to define an OpenStackDeploy CR for the second controller node: Save the openstack-controller1-upgrade.yaml file. Create the OpenStackDeploy resource to run the system upgrade on Controller 1: Wait for the deployment to finish. Create a file named openstack-controller2-upgrade.yaml on your workstation to define an OpenStackDeploy CR for the third controller node: Save the openstack-controller2-upgrade.yaml file. Create the OpenStackDeploy resource to run the system upgrade on Controller 2: Wait for the deployment to finish. Create a file named openstack-computes-upgrade.yaml on your workstation to define an OpenStackDeploy CR that upgrades all Compute nodes: Save the openstack-computes-upgrade.yaml file. Create the OpenStackDeploy resource to run the system upgrade on the Compute nodes: Wait for the deployment to finish. 18.8. 
Performing post-upgrade tasks After the overcloud upgrade completes successfully, you must perform some post-upgrade tasks to finish the process. Procedure Update the baseImageUrl parameter to a RHEL 9.2 guest image in your OpenStackProvisionServer CR and OpenStackBaremetalSet CR. Re-enable fencing on the controllers: Perform any other post-upgrade actions relevant to your environment. For more information, see Performing post-upgrade actions in the Framework for upgrades (16.2 to 17.1) guide.
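The commands below are a minimal verification sketch rather than part of the official procedure. They assume the default openstack namespace, an OpenStackDeploy CR named system-upgrade-computes, and a Controller node reachable as controller-0.ctlplane; substitute the names used in your environment.

oc get openstackdeploy -n openstack
oc get openstackdeploy system-upgrade-computes -n openstack -o yaml
oc rsh -n openstack openstackclient ssh controller-0.ctlplane "cat /etc/redhat-release"

The first two commands show whether each upgrade OpenStackDeploy CR has finished, and the last confirms that a node reports the expected RHEL release after the Leapp upgrade. If a deploy CR reports a failure, the Ansible output of the corresponding deploy pod in the openstack namespace (viewed with oc logs) shows the failing task.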
[ "oc get subscription osp-director-operator.openstack -n openstack -o yaml | grep currentCSV", "oc delete clusterserviceversion <current_CSV> -n openstack", "oc delete subscription osp-director-operator.openstack -n openstack", "oc patch openstackcontrolplane -n openstack overcloud --type=json -p=\"[{'op': 'replace', 'path': '/spec/openStackRelease', 'value': '17.1'}]\"", "oc get csv -n openstack", "oc delete -n openstack openstackconfiggenerator --all", "oc patch openstackclient -n openstack openstackclient --type=json -p=\"[{'op': 'replace', 'path': '/spec/imageURL', 'value': 'registry.redhat.io/rhosp-rhel8/openstack-tripleoclient:17.1'}]\"", "oc patch openstackclient -n openstack openstackclient --type=json -p=\"[{'op': 'remove', 'path': '/spec/imageURL'}]\"", "oc rsh -n openstack openstackclient ssh controller-0.ctlplane \"sudo pcs property set stonith-enabled=false\"", "`OS::TripleO::Services::CinderBackendDellEMCXTREMIOISCSI` `OS::TripleO::Services::CinderBackendDellPs` `OS::TripleO::Services::CinderBackendVRTSHyperScale` `OS::TripleO::Services::Ec2Api` `OS::TripleO::Services::Fluentd` `OS::TripleO::Services::FluentdAlt` `OS::TripleO::Services::Keepalived` `OS::TripleO::Services::MistralApi` `OS::TripleO::Services::MistralEngine` `OS::TripleO::Services::MistralEventEngine` `OS::TripleO::Services::MistralExecutor` `OS::TripleO::Services::NeutronLbaasv2Agent` `OS::TripleO::Services::NeutronLbaasv2Api` `OS::TripleO::Services::NeutronML2FujitsuCfab` `OS::TripleO::Services::NeutronML2FujitsuFossw` `OS::TripleO::Services::NeutronSriovHostConfig` `OS::TripleO::Services::NovaConsoleauth` `OS::TripleO::Services::Ntp` `OS::TripleO::Services::OpenDaylightApi` `OS::TripleO::Services::OpenDaylightOvs` `OS::TripleO::Services::OpenShift::GlusterFS` `OS::TripleO::Services::OpenShift::Infra` `OS::TripleO::Services::OpenShift::Master` `OS::TripleO::Services::OpenShift::Worker` `OS::TripleO::Services::PankoApi` `OS::TripleO::Services::Rear` `OS::TripleO::Services::SaharaApi` `OS::TripleO::Services::SaharaEngine` `OS::TripleO::Services::SensuClient` `OS::TripleO::Services::SensuClientAlt` `OS::TripleO::Services::SkydiveAgent` `OS::TripleO::Services::SkydiveAnalyzer` `OS::TripleO::Services::Tacker` `OS::TripleO::Services::TripleoUI` `OS::TripleO::Services::UndercloudMinionMessaging` `OS::TripleO::Services::UndercloudUpgradeEphemeralHeat` `OS::TripleO::Services::Zaqar`", "'get_file' 'get_resource' 'digest' 'repeat' 'resource_facade' 'str_replace' 'str_replace_strict' 'str_split' 'map_merge' 'map_replace' 'yaql' 'equals' 'if' 'not' 'and' 'or' 'filter' 'make_url' 'contains'", "- hosts: localhost gather_facts: false tasks: - name: set ssh key path facts set_fact: private_key: \"{{ lookup('env', 'HOME') }}/.ssh/{{ tripleo_admin_user }}-id_rsa\" public_key: \"{{ lookup('env', 'HOME') }}/.ssh/{{ tripleo_admin_user }}-id_rsa.pub\" run_once: true - name: stat private key stat: path: \"{{ private_key }}\" register: private_key_stat - name: create private key if it does not exist shell: \"ssh-keygen -t rsa -q -N '' -f {{ private_key }}\" no_log: true when: - not private_key_stat.stat.exists - name: stat public key stat: path: \"{{ public_key }}\" register: public_key_stat - name: create public key if it does not exist shell: \"ssh-keygen -y -f {{ private_key }} > {{ public_key }}\" when: - not public_key_stat.stat.exists - hosts: overcloud gather_facts: false become: true pre_tasks: - name: Get local private key slurp: src: \"{{ hostvars['localhost']['private_key'] }}\" register: private_key_get delegate_to: 
localhost no_log: true - name: Get local public key slurp: src: \"{{ hostvars['localhost']['public_key'] }}\" register: public_key_get delegate_to: localhost roles: - role: tripleo_create_admin tripleo_admin_user: \"{{ tripleo_admin_user }}\" tripleo_admin_pubkey: \"{{ public_key_get['content'] | b64decode }}\" tripleo_admin_prikey: \"{{ private_key_get['content'] | b64decode }}\" no_log: true", "oc cp -n openstack ceph-admin-user-playbook.yml openstackclient:/home/cloud-admin/ceph-admin-user-playbook.yml", "oc rsh -n openstack openstackclient ansible-playbook -i /home/cloud-admin/ctlplane-ansible-inventory -e tripleo_admin_user=ceph-admin -e distribute_private_key=true /home/cloud-admin/ceph-admin-user-playbook.yml", "ceph_namespace: registry.redhat.io/rhceph ceph_image: <ceph_image_file> ceph_tag: latest ceph_grafana_image: <grafana_image_file> ceph_grafana_namespace: registry.redhat.io/rhceph ceph_grafana_tag: latest", "parameter_defaults: UpgradeInitCommand: | sudo subscription-manager repos --disable * if USD( grep -q 9.2 /etc/os-release ) then sudo subscription-manager repos --enable=rhel-9-for-x86_64-baseos-eus-rpms --enable=rhel-9-for-x86_64-appstream-eus-rpms --enable=rhel-9-for-x86_64-highavailability-eus-rpms --enable=openstack-17.1-for-rhel-9-x86_64-rpms --enable=fast-datapath-for-rhel-9-x86_64-rpms sudo podman ps | grep -q ceph && subscription-manager repos --enable=rhceph-5-tools-for-rhel-9-x86_64-rpms sudo subscription-manager release --set=9.2 else sudo subscription-manager repos --enable=rhel-8-for-x86_64-baseos-tus-rpms --enable=rhel-8-for-x86_64-appstream-tus-rpms --enable=rhel-8-for-x86_64-highavailability-tus-rpms --enable=openstack-17.1-for-rhel-8-x86_64-rpms --enable=fast-datapath-for-rhel-8-x86_64-rpms sudo podman ps | grep -q ceph && subscription-manager repos --enable=rhceph-5-tools-for-rhel-8-x86_64-rpms sudo subscription-manager release --set=8.4 fi sudo dnf -y install cephadm", "apiVersion: osp-director.openstack.org/v1beta1 kind: OpenStackDeploy metadata: name: ceph-upgrade spec: configVersion: <config_version> configGenerator: ceph-upgrade mode: externalUpgrade advancedSettings: skipTags: - ceph_health - opendev-validation - ceph_ansible_remote_tmp tags: - ceph - facts", "oc create -f openstack-ceph-upgrade.yaml -n openstack", "apiVersion: osp-director.openstack.org/v1beta1 kind: OpenStackDeploy metadata: name: ceph-upgrade-packages spec: configVersion: <config_version> configGenerator: ceph-upgrade mode: upgrade advancedSettings: limit: ceph_osd,ceph_mon,Undercloud playbook: - upgrade_steps_playbook.yaml skipTags: - ceph_health - opendev-validation - ceph_ansible_remote_tmp tags: - setup_packages", "oc create -f openstack-ceph-upgrade-packages.yaml -n openstack", "apiVersion: osp-director.openstack.org/v1beta1 kind: OpenStackDeploy metadata: name: ceph-upgrade-to-cephadm spec: configVersion: <config_version> configGenerator: ceph-upgrade mode: externalUpgrade advancedSettings: skipTags: - ceph_health - opendev-validation - ceph_ansible_remote_tmp tags: - cephadm_adopt", "oc create -f openstack-ceph-upgrade-to-cephadm.yaml -n openstack", "oc patch openstackclient -n openstack openstackclient --type=json -p=\"[{'op': 'remove', 'path': '/spec/imageURL'}]\"", "parameter_defaults: #default container image configuration for RHEL 9 hosts ContainerImagePrepare: - push_destination: false set: &container_image_prepare_rhel9_contents tag: 17.1.2 name_prefix: openstack- namespace: registry.redhat.io/rhosp-rhel9 ceph_namespace: registry.redhat.io/rhceph ceph_image: 
rhceph-5-rhel8 ceph_tag: latest ceph_alertmanager_image: ose-prometheus-alertmanager ceph_alertmanager_namespace: registry.redhat.io/openshift4 ceph_alertmanager_tag: v4.10 ceph_grafana_image: rhceph-5-dashboard-rhel8 ceph_grafana_namespace: registry.redhat.io/rhceph ceph_grafana_tag: latest ceph_node_exporter_image: ose-prometheus-node-exporter ceph_node_exporter_namespace: registry.redhat.io/openshift4 ceph_node_exporter_tag: v4.10 ceph_prometheus_image: ose-prometheus ceph_prometheus_namespace: registry.redhat.io/openshift4 ceph_prometheus_tag: v4.10 # RHEL8 hosts pin the collectd and libvirt containers to rhosp-rhel8 # To apply the following configuration, reference the followingparameter # in the role specific parameters below: <Role>ContainerImagePrepare ContainerImagePrepareRhel8: &container_image_prepare_rhel8 - push_destination: false set: *container_image_prepare_rhel9_contents excludes: - collectd - nova-libvirt - push_destination: false set: tag: 17.1.2 name_prefix: openstack- namespace: registry.redhat.io/rhosp-rhel8 ceph_namespace: registry.redhat.io/rhceph ceph_image: rhceph-5-rhel8 ceph_tag: latest ceph_alertmanager_image: ose-prometheus-alertmanager ceph_alertmanager_namespace: registry.redhat.io/openshift4 ceph_alertmanager_tag: v4.10 ceph_grafana_image: rhceph-5-dashboard-rhel8 ceph_grafana_namespace: registry.redhat.io/rhceph ceph_grafana_tag: latest ceph_node_exporter_image: ose-prometheus-node-exporter ceph_node_exporter_namespace: registry.redhat.io/openshift4 ceph_node_exporter_tag: v4.10 ceph_prometheus_image: ose-prometheus ceph_prometheus_namespace: registry.redhat.io/openshift4 ceph_prometheus_tag: v4.10 includes: - collectd - nova-libvirt Initially all hosts are RHEL 8 so set the role specific container image prepare parameter to the RHEL 8 configuration ControllerContainerImagePrepare: *container_image_prepare_rhel8 ComputeContainerImagePrepare: *container_image_prepare_rhel8", "parameter_defaults: UpgradeInitCommand: | sudo subscription-manager repos --disable * if USD( grep -q 9.2 /etc/os-release ) then sudo subscription-manager repos --enable=rhel-9.2-for-x86_64-baseos-eus-rpms --enable=rhel-9.2-for-x86_64-appstream-eus-rpms --enable=rhel-9.2-for-x86_64-highavailability-eus-rpms --enable=openstack-17.1-for-rhel-9-x86_64-rpms --enable=fast-datapath-for-rhel-9-x86_64-rpms else sudo subscription-manager repos --enable=rhel-8-for-x86_64-baseos-eus-rpms --enable=rhel-8-for-x86_64-appstream-eus-rpms --enable=rhel-8-for-x86_64-highavailability-eus-rpms --enable=openstack-17.1-for-rhel-8-x86_64-rpms --enable=fast-datapath-for-rhel-8-x86_64-rpms fi", "parameter_defaults: ExtraConfig: nova::workarounds::disable_compute_service_check_for_ffu: true parameter_merge_strategies: ExtraConfig: merge", "apiVersion: osp-director.openstack.org/v1beta1 kind: OpenStackConfigGenerator metadata: name: \"upgrade\" namespace: openstack spec: enableFencing: False gitSecret: git-secret heatEnvs: - ssl/tls-endpoints-public-dns.yaml - ssl/enable-tls.yaml - nova-hw-machine-type-upgrade.yaml - lifecycle/upgrade-prepare.yaml heatEnvConfigMap: heat-env-config-upgrade tarballConfigMap: tripleo-tarball-config-upgrade", "apiVersion: osp-director.openstack.org/v1beta1 kind: OpenStackDeploy metadata: name: upgrade spec: configVersion: <config_version> configGenerator: upgrade mode: upgrade", "oc create -f openstack-upgrade.yaml -n openstack", "ControllerContainerImagePrepare: *container_image_prepare_rhel8 ComputeContainerImagePrepare: *container_image_prepare_rhel8", "parameter_defaults: 
SkipRhelEnforcement: false", "parameter_defaults: NICsPrefixesToUdev: ['en'] UpgradeLeappDevelSkip: \"LEAPP_UNSUPPORTED=1 LEAPP_DEVEL_SKIP_CHECK_OS_RELEASE=1 LEAPP_NO_NETWORK_RENAMING=1 LEAPP_DEVEL_TARGET_RELEASE=9.2\" UpgradeLeappDebug: false UpgradeLeappEnabled: true LeappActorsToRemove: ['checkifcfg','persistentnetnamesdisable','checkinstalledkernels','biosdevname'] LeappRepoInitCommand: | sudo subscription-manager repos --disable=* subscription-manager repos --enable rhel-8-for-x86_64-baseos-tus-rpms --enable rhel-8-for-x86_64-appstream-tus-rpms --enable openstack-17.1-for-rhel-8-x86_64-rpms subscription-manager release --set=8.4 UpgradeLeappCommandOptions: \"--enablerepo=rhel-9-for-x86_64-baseos-eus-rpms --enablerepo=rhel-9-for-x86_64-appstream-eus-rpms --enablerepo=rhel-9-for-x86_64-highavailability-eus-rpms --enablerepo=openstack-17.1-for-rhel-9-x86_64-rpms --enablerepo=fast-datapath-for-rhel-9-x86_64-rpms\" LeappInitCommand: | sudo subscription-manager repos --disable=* sudo subscription-manager repos --enable=rhel-9-for-x86_64-baseos-eus-rpms --enable=rhel-9-for-x86_64-appstream-eus-rpms --enable=rhel-9-for-x86_64-highavailability-eus-rpms --enable=openstack-17.1-for-rhel-9-x86_64-rpms --enable=fast-datapath-for-rhel-9-x86_64-rpms leapp answer --add --section check_vdo.confirm=True dnf -y remove irb", "apiVersion: osp-director.openstack.org/v1beta1 kind: OpenStackDeploy metadata: name: system-upgrade-controller-0 spec: configVersion: <config_version> configGenerator: system-upgrade mode: upgrade advancedSettings: limit: Controller[0] tags: - system_upgrade", "oc create -f openstack-controller0-upgrade.yaml -n openstack", "apiVersion: osp-director.openstack.org/v1beta1 kind: OpenStackDeploy metadata: name: system-upgrade-controller-1 spec: configVersion: <config_version> configGenerator: system-upgrade mode: upgrade advancedSettings: limit: Controller[1] tags: - system_upgrade", "oc create -f openstack-controller1-upgrade.yaml -n openstack", "apiVersion: osp-director.openstack.org/v1beta1 kind: OpenStackDeploy metadata: name: system-upgrade-controller-2 spec: configVersion: <config_version> configGenerator: system-upgrade mode: upgrade advancedSettings: limit: Controller[2] tags: - system_upgrade", "oc create -f openstack-controller2-upgrade.yaml -n openstack", "apiVersion: osp-director.openstack.org/v1beta1 kind: OpenStackDeploy metadata: name: system-upgrade-computes spec: configVersion: <config_version> configGenerator: system-upgrade mode: upgrade advancedSettings: limit: Compute tags: - system_upgrade", "oc create -f openstack-computes-upgrade.yaml -n openstack", "oc rsh -n openstack openstackclient ssh controller-0.ctlplane \"sudo pcs property set stonith-enabled=true\"" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/deploying_an_overcloud_in_a_red_hat_openshift_container_platform_cluster_with_director_operator/assembly_upgrading-an-overcloud-on-rhocp-with-director-operator-162-171
Chapter 6. Bug fixes
Chapter 6. Bug fixes This section describes notable bug fixes introduced in Red Hat OpenShift Data Foundation 4.9. Multicloud Object Gateway storage class deleted during uninstall Previously, Multicloud Object Gateway (MCG) storage class which was deployed as a part of OpenShift Data Foundation deployment was not deleted during uninstall. With this update, the Multicloud Object Gateway (MCG) storage class gets removed while uninstalling OpenShift Data Foundation. ( BZ#1892709 ) OpenShift Container Platform alert when OpenShift Container Storage quorum lost Previously, CephMonQuorumAtRisk alert was fired when mon quorum was about to be lost, but there was no alert triggered after losing the quorum. This resulted in no notification being sent when the mon quorum was completely lost. With this release, a new alert, CephMonQuorumLost is introduced. This alert is triggered when you have only one node left and a single mon is running on it. However, at this point the cluster will be in unrecoverable state and the alert serves as a notification of the issue. (BZ#1944513) Reduce the mon_data_avail_warn from 30 % to 15% Previously, the mon_data_avail_warn alert was triggered when the mon store was less than 30% and it did not match the threshold value of OpenShift Container Platform's garbage collector for images, which is 15%. With this release, you will see the alert when the available storage at the mon store location is less than 15% and not less than 30%. (BZ#1964055) OSD pods do not log anything if the initial deployment is OpenShift Container Storage 4.4 Previously, object storage daemon (OSD) logs were not generated when OpenShift Container Storage 4.4 was deployed. With this update, the OSD logs are generated correctly. (BZ#1974343) Multicloud Object Gateway was not able to initialize in a fresh deployment Previously, after the internal database change from MongoDB to PostgreSQL, duplicate entities that should be unique could be added to the database (MongoDB prevented duplicate entities earlier) due to which Multicloud Object Gateway (MCG) was not working. With this release, duplicate entities are prevented. (BZ#1975645) PVC is restored when using two different backend paths for the encrypted parent Previously, when restoring a persistent volume claim (PVC) from a volume snapshot into a different storage class with a different encryption KMSID , the restored PVC went into the Bound state and the restored PVC failed to get attached to a Pod. This was because the encryption passphrase was being copied with the parent PVC's storage class encryption KMSID config . With this release, the restored PVC's encryption passphrase is copied with the correct encryption KMSID config from the destination storage class. Hence, the PVC is successfully restored into a storage class with a different encryption KMSID than its parent PVC. (BZ#1975730) Deletion of data is allowed when the storage cluster is full Previously, when the storage cluster was full, the Ceph Manager hung on checking pool permissions while reading the configuration file. The Ceph Metadata Server (MDS) did not allow write operations to occur when the Ceph OSD was full, resulting in an ENOSPACE error. When the storage cluster hit full ratio, users could not delete data to free space using the Ceph Manager and ceph-volume plugin. With this release, the new FULL feature is introduced. This feature gives the Ceph Manager FULL capability, and bypasses the Ceph OSD full check. 
Additionally, the client_check_pool_permission option can be disabled. With the Ceph Manager having FULL capabilities, the MDS no longer blocks Ceph Manager calls. This allows the Ceph Manager to free up space by deleting subvolumes and snapshots when a storage cluster is full. (BZ#1978769) Keys are completely destroyed in Vault after deleting encrypted persistent volume claims (PVCs) while using the kv-v2 secret engine HashiCorp Vault added a feature for the key-value store v2 where deletion of the stored keys makes it possible to recover the contents in case the metadata of the deleted key is not removed in a separate step. When using key-value v2 storage for secrets in HashiCorp Vault, deletion of volumes did not remove the metadata of the encryption passphrase from the KMS. With this update, the keys in HashiCorp Vault is completely destroyed by default when a PVC is deleted. You can set the new configuration option VAULT_DESTROY_KEYS to false to enable the behavior. In that case, the metadata of the keys will be kept in HashiCorp Vault so that recovery of the encryption passphrase of the removed PVC is possible. ( BZ#1979244 ) Multicloud Object Gateway object bucket creation is going to Pending Phase Previously, after the internal database change from MongoDB to PostgreSQL, duplicate entries that should be unique could be added to the database (MongoDB prevented duplicate entries earlier). As a result, creation of new resources such as buckets, backing stores, and so on failed. With this release, duplicate entries are prevented. (BZ#1980299) Deletion of CephBlockPool gets stuck and blocks the creation of new pools Previously, in a Multus enabled cluster, the Rook Operator did not have access to the object storage daemon (OSD) network as it did not have the network annotations. As a result, the rbd type commands during a pool cleanup would hang because the OSDs could not be contacted. With this release, the operator proxies the rbd command through a sidecar container in the mgr pod and runs successfully during the pool cleanup. ( BZ#1983756 ) Standalone Multicloud Object Gateway failing to connect Previously, the Multicloud Object Gateway (MCG) CR was not updated properly because of the change in the internal DB from MongoDB to PostgreSQL. This caused issues in certain flows. As a result, MCG components were not able to communicate with one another and MCG failures occurred on upgrade. With this release, MCG CR issue is fixed. ( BZ#1984284 ) Monitoring spec is getting reset in CephCluster resource in external mode Previously, when OpenShift Container Storage was upgraded, the monitoring endpoints would get reset in external CephCluster's monitoring spec. This was not an expected behavior and was due to the way monitoring endpoints were passed to the CephCluster. With this update, the way endpoints are passed is changed. Before the CephCluster is created, the endpoints are accessed directly from the JSON secret, rook-ceph-external-cluster-details and the CephCluster spec is updated. As a result, the monitoring endpoint specs in the CephCluster is updated properly with appropriate values even after the OpenShift Container Storage upgrade. ( BZ#1984735 ) CrashLoopBackOff state of noobaa-db-pg-0 pod when enabling hugepages Previously, enabling hugepages on OpenShift Container Platform cluster caused the Multicloud Object Gateway (MCG) database pod to go into a CrashLoopBackOff state. This was due to wrong initialization of PostgreSQL. 
With this release, MCG database pod's initialization of PostgreSQL is fixed. ( BZ#1995271 ) Multicloud Object Gateway unable to create new object bucket claims Previously, performance degradation when working against the Multicloud Object Gateway (MCG) DB caused back pressure on all the MCG components which resulted in failure to execute flows within the system such as configuration flows and I/O flows. With this update, the most time consuming queries are fixed, the DB is cleared quickly, and no back pressure is created. ( BZ#1998680 ) Buckets fail during creation because of an issue with checking attached resources Previously, because of a problem in checking resources attached to a bucket during its creation, the bucket would fail to be created. The conditions in the resource validation during bucket creation have been fixed, and the buckets are created as expected. ( BZ#2000588 ) NooBaa Operator still checks for noobaa-db service after upgrading Previously, when OpenShift Container Storage was updated from version 4.6, there was a need to retain the old and the new noobaa-db StatefulSets for migration purposes. The code still supports both the names of sets. A failure message was generated on the old noobaa-db StatefulSet due to a small issue in the code which caused the operator to check the status of the old noobaa-db StatefulSet even though it was no longer relevant. With this update, the operator stops checking the status of the old noobaa-db StatefulSet. (BZ#2008821) Changes to the config maps of the Multicloud Object Gateway (MCG) DB pod does not get reconciled after upgrade Previously, changes to the config maps of the MCG DB pod did not apply after an upgrade. The flow has been fixed to properly take the variables from the config maps for the DB pod. (BZ#2012930)
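If you want to confirm the Multicloud Object Gateway fixes described above on a running cluster, a basic status check of the NooBaa resources is usually enough. The following sketch assumes the default openshift-storage namespace used by OpenShift Data Foundation:

oc get pods -n openshift-storage | grep noobaa
oc get noobaa -n openshift-storage

The database pod (noobaa-db-pg-0) should be in the Running state rather than CrashLoopBackOff, and the NooBaa resource should report a Ready phase.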
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/4.9_release_notes/bug_fixes
Chapter 1. Overview
Chapter 1. Overview AMQ C++ is a library for developing messaging applications. It enables you to write C++ applications that send and receive AMQP messages. AMQ C++ is part of AMQ Clients, a suite of messaging libraries supporting multiple languages and platforms. For an overview of the clients, see AMQ Clients Overview . For information about this release, see AMQ Clients 2.10 Release Notes . AMQ C++ is based on the Proton API from Apache Qpid . For detailed API documentation, see the AMQ C++ API reference . 1.1. Key features An event-driven API that simplifies integration with existing applications SSL/TLS for secure communication Flexible SASL authentication Automatic reconnect and failover Seamless conversion between AMQP and language-native data types Access to all the features and capabilities of AMQP 1.0 1.2. Supported standards and protocols AMQ C++ supports the following industry-recognized standards and network protocols: Version 1.0 of the Advanced Message Queueing Protocol (AMQP) Versions 1.0, 1.1, 1.2, and 1.3 of the Transport Layer Security (TLS) protocol, the successor to SSL Simple Authentication and Security Layer (SASL) mechanisms supported by Cyrus SASL , including ANONYMOUS, PLAIN, SCRAM, EXTERNAL, and GSSAPI (Kerberos) Modern TCP with IPv6 1.3. Supported configurations Refer to Red Hat AMQ 7 Supported Configurations on the Red Hat Customer Portal for current information regarding AMQ C++ supported configurations. 1.4. Terms and concepts This section introduces the core API entities and describes how they operate together. Table 1.1. API terms Entity Description Container A top-level container of connections. Connection A channel for communication between two peers on a network. It contains sessions. Session A context for sending and receiving messages. It contains senders and receivers. Sender A channel for sending messages to a target. It has a target. Receiver A channel for receiving messages from a source. It has a source. Source A named point of origin for messages. Target A named destination for messages. Message An application-specific piece of information. Delivery A message transfer. AMQ C++ sends and receives messages . Messages are transferred between connected peers over senders and receivers . Senders and receivers are established over sessions . Sessions are established over connections . Connections are established between two uniquely identified containers . Though a connection can have multiple sessions, often this is not needed. The API allows you to ignore sessions unless you require them. A sending peer creates a sender to send messages. The sender has a target that identifies a queue or topic at the remote peer. A receiving peer creates a receiver to receive messages. The receiver has a source that identifies a queue or topic at the remote peer. The sending of a message is called a delivery . The message is the content sent, including all metadata such as headers and annotations. The delivery is the protocol exchange associated with the transfer of that content. To indicate that a delivery is complete, either the sender or the receiver settles it. When the other side learns that it has been settled, it will no longer communicate about that delivery. The receiver can also indicate whether it accepts or rejects the message. 1.5. Document conventions The sudo command In this document, sudo is used for any command that requires root privileges. Exercise caution when using sudo because any changes can affect the entire system. 
For more information about sudo , see Using the sudo command . File paths In this document, all file paths are valid for Linux, UNIX, and similar operating systems (for example, /home/andrea ). On Microsoft Windows, you must use the equivalent Windows paths (for example, C:\Users\andrea ). Variable text This document contains code blocks with variables that you must replace with values specific to your environment. Variable text is enclosed in arrow braces and styled as italic monospace. For example, in the following command, replace <project-dir> with the value for your environment: USD cd <project-dir>
[ "cd <project-dir>" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_the_amq_cpp_client/overview
7.8. Using NSCD with SSSD
7.8. Using NSCD with SSSD SSSD is not designed to be used with the NSCD daemon. Even though SSSD does not directly conflict with NSCD, using both services can result in unexpected behavior, especially with how long entries are cached. The most common evidence of a problem is conflicts with NFS. When using Network Manager to manage network connections, it may take several minutes for the network interface to come up. During this time, various services attempt to start. If these services start before the network is up and the DNS servers are available, these services fail to identify the forward or reverse DNS entries they need. These services will read an incorrect or possibly empty resolv.conf file. This file is typically only read once, and so any changes made to this file are not automatically applied. This can cause NFS locking to fail on the machine where the NSCD service is running, unless that service is manually restarted. To avoid potential conflicts and performance issues, do not run NSCD and SSSD services simultaneously on the same system.
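As a quick check, you can verify whether the NSCD service is present and running before relying on SSSD, and stop it if it is. The following commands are a sketch for a systemd-based system such as Red Hat Enterprise Linux 7, where the service unit is named nscd:

systemctl status nscd
systemctl stop nscd
systemctl disable nscd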
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/system-level_authentication_guide/usingnscd-sssd
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code and documentation. We are beginning with these four terms: master, slave, blacklist, and whitelist. Due to the enormity of this endeavor, these changes will be gradually implemented over upcoming releases. For more details on making our language more inclusive, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_certified_cloud_and_service_provider_certification/2025/html/red_hat_certified_cloud_and_service_provider_certification_policy_guide/con-conscious-language-message
probe::nfs.proc.open
probe::nfs.proc.open
Name
probe::nfs.proc.open - NFS client allocates file read/write context information
Synopsis
nfs.proc.open
Values
flag - file flag
filename - file name
version - NFS version (the function is used for all NFS versions)
prot - transfer protocol
mode - file mode
server_ip - IP address of the server
Description
Allocate file read/write context information
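For illustration only, the probe can be exercised from the command line to trace NFS opens as they happen. This sketch assumes that filename is a string value and flag is numeric, which may vary between kernel and tapset versions; adjust the format specifiers if needed:

stap -e 'probe nfs.proc.open { printf("%s opened %s (flags %d)\n", execname(), filename, flag) }'

Run the one-liner as root while an NFS mount is being accessed; each matching call prints the process name, the file name, and the open flags.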
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-nfs-proc-open
Chapter 5. RHEL 8.3.0 release
Chapter 5. RHEL 8.3.0 release 5.1. New features This part describes new features and major enhancements introduced in Red Hat Enterprise Linux 8.3. 5.1.1. Installer and image creation Anaconda rebased to version 33.16 With this release, Anaconda has been rebased to version 33.16. This version provides the following notable enhancements over the version. The Installation Program now displays static IPv6 addresses on multiple lines and no longer resizes the windows. The Installation Program now displays supported NVDIMM device sector sizes. Host name is now configured correctly on an installed system having IPv6 static configuration. You can now use non-ASCII characters in disk encryption passphrase. The Installation Program displays a proper recommendation to create a new file system on /boot, /tmp, and all /var and /usr mount points except /usr/local and /var/www. The Installation Program now correctly checks the keyboard layout and does not change the status of the Keyboard Layout screen when the keyboard keys (ALT+SHIFT) are used to switch between different layouts and languages. Rescue mode no longer fails on systems with existing RAID1 partitions. Changing of the LUKS version of the container is now available in the Manual Partitioning screen. The Installation Program successfully finishes the installation without the btrfs-progs package. The Installation Program now uses the default LUKS2 version for an encrypted container. The Installation Program no longer crashes when a Kickstart file places physical volumes (PVs) of a Logical volume group (VG) on an ignoredisk list. Introduces a new mount path /mnt/sysroot for system root. This path is used to mount / of the target system. Usually, the physical root and the system root are the same, so /mnt/sysroot is attached to the same file system as /mnt/sysimage . The only exceptions are rpm-ostree systems, where the system root changes based on the deployment. Then, /mnt/sysroot is attached to a subdirectory of /mnt/sysimage . It is recommended to use /mnt/sysroot for chroot. ( BZ#1691319 , BZ#1679893 , BZ#1684045, BZ#1688478 , BZ#1700450, BZ#1720145, BZ#1723888, BZ#1754977 , BZ#1755996 , BZ#1784360 , BZ#1796310, BZ#1871680) GUI changes in RHEL Installation Program The RHEL Installation Program now includes the following user settings on the Installation Summary window: Root password User creation With this change, you can now configure a root password and create a user account before you begin the installation. Previously, you configured a root password and created a user account after you began the installation process. A root password is used to log in to the administrator (also known as superuser or root) account which is used for system administration tasks. The user name is used to log in from a command line; if you install a graphical environment, then your graphical login manager uses the full name. For more details, see Interactively installing RHEL from installation media document. (JIRA:RHELPLAN-40469) Image Builder backend osbuild-composer replaces lorax-composer The osbuild-composer backend replaces lorax-composer . The new service provides REST APIs for image building. As a result, users can benefit from a more reliable backend and more predictable output images. 
(BZ#1836211) Image Builder osbuild-composer supports a set of image types With the osbuild-composer backend replacement, the following set of image types supported in osbuild-composer this time: TAR Archive (.tar) QEMU QCOW2 (.qcow2) VMware Virtual Machine Disk (.vmdk) Amazon Machine Image (.ami) Azure Disk Image (.vhd) OpenStack Image (.qcow2) The following outputs are not supported this time: ext4-filesystem partitioned-disk Alibaba Cloud Google GCE (JIRA:RHELPLAN-42617) Image Builder now supports push to clouds through GUI With this enhancement, when creating images, users can choose the option of pushing to Azure and AWS service clouds through GUI Image Builder . As a result, users can benefit from easier uploads and instantiation. (JIRA:RHELPLAN-30878) 5.1.2. RHEL for Edge Introducing RHEL for Edge images With this release, you can now create customized RHEL images for Edge servers. You can use Image Builder to create RHEL for Edge images, and then use RHEL installer to deploy them on AMD and Intel 64-bit systems. Image Builder generates a RHEL for Edge image as rhel-edge-commit in a .tar file. A RHEL for Edge image is an rpm-ostree image that includes system packages for remotely installing RHEL on Edge servers. The system packages include: Base OS package Podman as the container engine You can customize the image to configure the OS content as per your requirements, and can deploy them on physical and virtual machines. With a RHEL for Edge image, you can achieve the following: Atomic upgrades, where the state of each update is known and no changes are seen until you reboot the device. Custom health checks using Greenboot and intelligent rollbacks for resiliency in case of failed upgrades. Container-focused workflows, where you can separate core OS updates from the application updates, and test and deploy different versions of applications. Optimized OTA payloads for low-bandwidth environments. Custom health checks using Greenboot to ensure resiliency. For more information about composing, installing, and managing RHEL for Edge images, see Composing, Installing, and Managing RHEL for Edge images . (JIRA:RHELPLAN-56676) 5.1.3. Software management The default value for the best dnf configuration option has been changed from True to False With this update, the value for the best dnf configuration option has been set to True in the default configuration file to retain the original dnf behavior. As a result, for users that use the default configuration file the behavior remains unchanged. If you provide your own configuration files, make sure that the best=True option is present to retain the original behavior. ( BZ#1832869 ) New --norepopath option for the dnf reposync command is now available Previously, the reposync command created a subdirectory under the --download-path directory for each downloaded repository by default. With this update, the --norepopath option has been introduced, and reposync does not create the subdirectory. As a result, the repository is downloaded directly into the directory specified by --download-path . This option is also present in the YUM v3 . ( BZ#1842285 ) Ability to enable and disable the libdnf plugins Previously, subscription checking was hardcoded into the RHEL version of the libdnf plug-ins. With this update, the microdnf utility can enable and disable the libdnf plug-ins, and subscription checking can now be disabled the same way as in DNF. To disable subscription checking, use the --disableplugin=subscription-manager command. 
To disable all plug-ins, use the --noplugins command. ( BZ#1781126 ) 5.1.4. Shells and command-line tools ReaR updates RHEL 8.3 introduces a number of updates to the Relax-and-Recover ( ReaR ) utility. Notable changes include: Support for the third-party Rubrik Cloud Data Management (CDM) as external backup software has been added. To use it, set the BACKUP option in the configuration file to CDM . Creation of a rescue image with a file larger than 4 GB on the IBM POWER, little endian architecture has been enabled. Disk layout created by ReaR no longer includes entries for Rancher 2 Longhorn iSCSI devices and file systems. (BZ#1743303) smartmontools rebased to version 7.1 The smartmontools package has been upgraded to version 7.1, which provides multiple bug fixes and enhancements. Notable changes include: HDD, SSD and USB additions to the drive database. New options -j and --json to enable JSON output mode. Workaround for the incomplete Log subpages response from some SAS SSDs. Improved handling of READ CAPACITY command. Various improvements for the decoding of the log pages. ( BZ#1671154 ) opencryptoki rebased to version 3.14.0 The opencryptoki packages have been upgraded to version 3.14.0, which provides multiple bug fixes and enhancements. Notable changes include: EP11 cryptographic service enhancements: Dilithium support Edwards-curve digital signature algorithm (EdDSA) support Support of Rivest-Shamir-Adleman optimal asymmetric encryption padding (RSA-OAEP) with non-SHA1 hash and mask generation function (MGF) Enhanced process and thread locking Enhanced btree and object locking Support for new IBM Z hardware z15 Support of multiple token instances for trusted platform module (TPM), IBM cryptographic architecture (ICA) and integrated cryptographic service facility (ICSF) Added a new tool p11sak , which lists the token keys in an openCryptoki token repository Added a utility to migrate a token repository to FIPS compliant encryption Fixed pkcsep11_migrate tool Minor fixes of the ICSF software (BZ#1780293) gpgme rebased to version 1.13.1. The gpgme packages have been upgraded to upstream version 1.13.1. Notable changes include: New context flags no-symkey-cache (has an effect when used with GnuPG 2.2.7 or later), request-origin (has an effect when used with GnuPG 2.2.6 or later), auto-key-locate , and trust-model have been introduced. New tool gpgme-json as native messaging server for web browsers has been added. As of now, the public key encryption and decryption is supported. New encryption API to support direct key specification including hidden recipients option and taking keys from a file has been introduced. This also allows the use of a subkey. ( BZ#1829822 ) 5.1.5. Infrastructure services powertop rebased to version 2.12 The powertop packages have been upgraded to version 2.12. Notable changes over the previously available version 2.11 include: Use of Device Interface Power Management (DIPM) for SATA link PM. Support for Intel Comet Lake mobile and desktop systems, the Skylake server, and the Atom-based Tremont architecture (Jasper Lake). (BZ#1783110) tuned rebased to version 2.14.0 The tuned packages have been upgraded to upstream version 2.14.0. Notable enhancements include: The optimize-serial-console profile has been introduced. Support for a post loaded profile has been added. The irqbalance plugin for handling irqbalance settings has been added. Architecture specific tuning for Marvell ThunderX and AMD based platforms has been added. 
Scheduler plugin has been extended to support cgroups-v1 for CPU affinity setting. ( BZ#1792264 ) tcpdump rebased to version 4.9.3 The tcpdump utility has been updated to version 4.9.3 to fix Common Vulnerabilities and Exposures (CVE). ( BZ#1804063 ) libpcap rebased to version 1.9.1 The libpcap packages have been updated to version 1.9.1 to fix Common Vulnerabilities and Exposures (CVE). ( BZ#1806422 ) iperf3 now supports sctp option on the client side With this enhancement, the user can use Stream Control Transmission Protocol (SCTP) instead of Transmission Control Protocol (TCP) on the client side of testing network throughput. The following options for iperf3 are now available on the client side of testing: --sctp --xbind --nstreams To obtain more information, see Client Specific Options in the iperf3 man page. (BZ#1665142) iperf3 now supports SSL With this enhancement, the user can use RSA authentication between the client and the server to restrict the connections to the server only to legitimate clients. The following options for iperf3 are now available on the server side: --rsa-private-key-path --authorized-users-path The following options for iperf3 are now available on the client side of communication: --username --rsa-public-key-path ( BZ#1700497 ) bind rebased to 9.11.20 The bind package has been upgraded to version 9.11.20, which provides multiple bug fixes and enhancements. Notable changes include: Increased reliability on systems with many CPU cores by fixing several race conditions. Detailed error reporting: dig and other tools can now print the Extended DNS Error (EDE) option, if it is present. Message IDs in inbound DNS Zone Transfer Protocol (AXFR) transfers are checked and logged, when they are inconsistent. (BZ#1818785) A new optimize-serial-console TuneD profile to reduce I/O to serial consoles by lowering the printk value With this update, a new optimize-serial-console TuneD profile is available. In some scenarios, kernel drivers can send large amounts of I/O operations to the serial console. Such behavior can cause temporary unresponsiveness while the I/O is written to the serial console. The optimize-serial-console profile reduces this I/O by lowering the printk value from the default of 7 4 1 7 to 4 4 1 7 . Users with a serial console who wish to make this change on their system can instrument their system as follows: As a result, users will have a lower printk value that persists across a reboot, which reduces the likelihood of system hangs. This TuneD profile reduces the amount of I/O written to the serial console by removing debugging information. If you need to collect this debugging information, you should ensure this profile is not enabled and that your printk value is set to 7 4 1 7 . To check the value of printk run: ( BZ#1840689 ) New TuneD profiles added for the AMD-based platforms In RHEL 8.3, the throughput-performance TuneD profile was updated to include tuning for the AMD-based platforms. There is no need to change any parameter manually and the tuning is automatically applied on the AMD system. The AMD Epyc Naples and Rome systems alters the following parameters in the default throughput-performance profile: sched_migration_cost_ns=5000000 and kernel.numa_balancing=0 With this enhancement, the system performance is improved by ~5%. (BZ#1746957) memcached rebased to version 1.5.22 The memcached packages have been upgraded to version 1.5.22. Notable changes over the version include: TLS has been enabled. 
The -o inline_ascii_response option has been removed. The -Y [authfile] option has been added along with authentication mode for the ASCII protocol. memcached can now recover its cache between restarts. New experimental meta commands have been added. Various performance improvements. ( BZ#1809536 ) 5.1.6. Security Cyrus SASL now supports channel bindings with the SASL/GSSAPI and SASL/GSS-SPNEGO plug-ins This update adds support for channel bindings with the SASL/GSSAPI and SASL/GSS-SPNEGO plug-ins. As a result, when used in the openldap libraries, this feature enables Cyrus SASL to maintain compatibility with and access to Microsoft Active Directory and Microsoft Windows systems which are introducing mandatory channel binding for LDAP connections. ( BZ#1817054 ) Libreswan rebased to 3.32 With this update, Libreswan has been rebased to upstream version 3.32, which includes several new features and bug fixes. Notable features include: Libreswan no longer requires separate FIPS 140-2 certification. Libreswan now implements the cryptographic recommendations of RFC 8247, and changes the preference from SHA-1 and RSA-PKCS v1.5 to SHA-2 and RSA-PSS. Libreswan supports XFRMi virtual ipsecXX interfaces that simplify writing firewall rules. Recovery of crashed and rebooted nodes in a full-mesh encryption network is improved. ( BZ#1820206 ) The libssh library has been rebased to version 0.9.4 The libssh library, which implements the SSH protocol, has been upgraded to version 0.9.4. This update includes bug fixes and enhancements, including: Added support for Ed25519 keys in PEM files. Added support for diffie-hellman-group14-sha256 key exchange algorithm. Added support for localuser in Match keyword in the libssh client configuration file. Match criteria keyword arguments are now case-sensitive (note that keywords are case-insensitive, but keyword arguments are case-sensitive) Fixed CVE-2019-14889 and CVE-2020-1730. Added support for recursively creating missing directories found in the path string provided for the known hosts file. Added support for OpenSSH keys in PEM files with comments and leading white spaces. Removed the OpenSSH server configuration inclusion from the libssh server configuration. ( BZ#1804797 ) gnutls rebased to 3.6.14 The gnutls packages have been rebased to upstream version 3.6.14. This version provides many bug fixes and enhancements, most notably: gnutls now rejects certificates with Time fields that contain invalid characters or formatting. gnutls now checks trusted CA certificates for minimum key sizes. When displaying an encrypted private key, the certtool utility no longer includes its plain text description. Servers using gnutls now advertise OCSP-stapling support. Clients using gnutls now send OCSP staples only on request. ( BZ#1789392 ) gnutls FIPS DH checks now conform with NIST SP 800-56A rev. 3 This update of the gnutls packages provides checks required by NIST Special Publication 800-56A Revision 3, sections 5.7.1.1 and 5.7.1.2, step 2. The change is necessary for future FIPS 140-2 certifications. As a result, gnutls now accept only 2048-bit or larger parameters from RFC 7919 and RFC 3526 during the Diffie-Hellman key exchange when operating in FIPS mode. ( BZ#1849079 ) gnutls now performs validations according to NIST SP 800-56A rev 3 This update of the gnutls packages adds checks required by NIST Special Publication 800-56A Revision 3, sections 5.6.2.2.2 and 5.6.2.1.3, step 2. The addition prepares gnutls for future FIPS 140-2 certifications. 
As a result, gnutls perform additional validation steps for generated and received public keys during the Diffie-Hellman key exchange when operating in FIPS mode. (BZ#1855803) update-crypto-policies and fips-mode-setup moved into crypto-policies-scripts The update-crypto-policies and fips-mode-setup scripts, which were previously included in the crypto-policies package, are now moved into a separate RPM subpackage crypto-policies-scripts . The package is automatically installed through the Recommends dependency on regular installations. This enables the ubi8/ubi-minimal image to avoid the inclusion of the Python language interpreter and thus reduces the image size. ( BZ#1832743 ) OpenSC rebased to version 0.20.0 The opensc package has been rebased to version 0.20.0 which addresses multiple bugs and security issues. Notable changes include: With this update, CVE-2019-6502 , CVE-2019-15946 , CVE-2019-15945 , CVE-2019-19480 , CVE-2019-19481 and CVE-2019-19479 security issues are fixed. The OpenSC module now supports the C_WrapKey and C_UnwrapKey functions. You can now use the facility to detect insertion and removal of card readers as expected. The pkcs11-tool utility now supports the CKA_ALLOWED_MECHANISMS attribute. This update allows default detection of the OsEID cards. The OpenPGP Card v3 now supports Elliptic Curve Cryptography (ECC). The PKCS#11 URI now truncates the reader name with ellipsis. ( BZ#1810660 ) stunnel rebased to version 5.56 With this update, the stunnel encryption wrapper has been rebased to upstream version 5.56, which includes several new features and bug fixes. Notable features include: New ticketKeySecret and ticketMacSecret options that control confidentiality and integrity protection of the issued session tickets. These options enable you to resume sessions on other nodes in a cluster. New curves option to control the list of elliptic curves in OpenSSL 1.1.0 and later. New ciphersuites option to control the list of permitted TLS 1.3 ciphersuites. Added sslVersion , sslVersionMin and sslVersionMax for OpenSSL 1.1.0 and later. ( BZ#1808365 ) libkcapi rebased to version 1.2.0 The libkcapi package has been rebased to upstream version 1.2.0, which includes minor changes. (BZ#1683123) setools rebased to 4.3.0 The setools package, which is a collection of tools designed to facilitate SELinux policy analysis, has been upgraded to version 4.3.0. This update includes bug fixes and enhancements, including: Revised sediff method for Type Enforcement (TE) rules, which significantly reduces memory and runtime issues. Added infiniband context support to seinfo , sediff , and apol . Added apol configuration for the location of the Qt assistant tool used to display online documentation. Fixed sediff issues with: Properties header displaying when not requested. Name comparison of type_transition files. Fixed permission of map socket sendto information flow direction. Added methods to the TypeAttribute class to make it a complete Python collection. Genfscon now looks up classes, rather than using fixed values which were dropped from libsepol . The setools package requires the following packages: setools-console setools-console-analyses setools-gui ( BZ#1820079 ) Individual CephFS files and directories can now have SELinux labels The Ceph File System (CephFS) has recently enabled storing SELinux labels in the extended attributes of files. Previously, all files in a CephFS volume were labeled with a single common label system_u:object_r:cephfs_t:s0 . 
With this enhancement, you can change the labels for individual files, and SELinux defines the labels of newly created files based on transition rules. Note that previously unlabeled files still have the system_u:object_r:cephfs_t:s0 label until explicitly changed. ( BZ#1823764 ) OpenSCAP rebased to version 1.3.3 The openscap packages have been upgraded to upstream version 1.3.3, which provides many bug fixes and enhancements over the version, most notably: Added the autotailor script that enables you to generate tailoring files using a command-line interface (CLI). Added the timezone part to the Extensible Configuration Checklist Description Format (XCCDF) TestResult start and end time stamps Added the yamlfilecontent independent probe as a draft implementation. Introduced the urn:xccdf:fix:script:kubernetes fix type in XCCDF. Added ability to generate the machineconfig fix. The oscap-podman tool can now detect ambiguous scan targets. The rpmverifyfile probe can now verify files from the /bin directory. Fixed crashes when complicated regexes are executed in the textfilecontent58 probe. Evaluation characteristics of the XCCDF report are now consistent with OVAL entities from the system_info probe. Fixed file-path pattern matching in offline mode in the textfilecontent58 probe. Fixed infinite recursion in the systemdunitdependency probe. ( BZ#1829761 ) SCAP Security Guide now provides a profile aligned with the CIS RHEL 8 Benchmark v1.0.0 With this update, the scap-security-guide packages provide a profile aligned with the CIS Red Hat Enterprise Linux 8 Benchmark v1.0.0. The profile enables you to harden the configuration of the system using the guidelines by the Center for Internet Security (CIS). As a result, you can configure and automate compliance of your RHEL 8 systems with CIS by using the CIS Ansible Playbook and the CIS SCAP profile. Note that the rpm_verify_permissions rule in the CIS profile does not work correctly. ( BZ#1760734 ) scap-security-guide now provides a profile that implements HIPAA This update of the scap-security-guide packages adds the Health Insurance Portability and Accountability Act (HIPAA) profile to the RHEL 8 security compliance content. This profile implements recommendations outlined on the The HIPAA Privacy Rule website. The HIPAA Security Rule establishes U.S. national standards to protect individuals' electronic personal health information that is created, received, used, or maintained by a covered entity. The Security Rule requires appropriate administrative, physical, and technical safeguards to ensure the confidentiality, integrity, and security of electronically protected health information. ( BZ#1832760 ) scap-security-guide rebased to 0.1.50 The scap-security-guide packages, which contain the latest set of security policies for Linux systems, have been upgraded to version 0.1.50. This update includes bug fixes and enhancements, most notably: Ansible content has been improved: numerous rules contain Ansible remediations for the first time and other rules have been updated to address bug fixes. Fixes and improvements to the scap-security-guide content for scanning RHEL7 systems, including: The scap-security-guide packages now provide a profile aligned with the CIS RHEL 7 Benchmark v2.2.0. Note that the rpm_verify_permissions rule in the CIS profile does not work correctly; see the rpm_verify_permissions fails in the CIS profile known issue. The SCAP Security Guide profiles now correctly disable and mask services that should not be started. 
The audit_rules_privileged_commands rule in the scap-security-guide packages now works correctly for privileged commands. Remediation of the dconf_gnome_login_banner_text rule in the scap-security-guide packages no longer incorrectly fails. ( BZ#1815007 ) SCAP Workbench can now generate results-based remediations from tailored profiles With this update, you can now generate result-based remediation roles from tailored profiles using the SCAP Workbench tool. (BZ#1640715) New Ansible role provides automated deployments of Clevis clients This update of the rhel-system-roles package introduces the nbde_client RHEL system role. This Ansible role enables you to deploy multiple Clevis clients in an automated way. ( BZ#1716040 ) New Ansible role can now set up a Tang server With this enhancement, you can deploy and manage a Tang server as part of an automated disk encryption solution with the new nbde_server system role. The nbde_server Ansible role, which is included in the rhel-system-roles package, supports the following features: Rotating Tang keys Deploying and backing up Tang keys For more information, see Rotating Tang server keys . ( BZ#1716039 ) clevis rebased to version 13 The clevis packages have been rebased to version 13, which provides multiple bug fixes and enhancements. Notable changes include: clevis luks unlock can be used in the device with a key file in the non-interactive mode. clevis encrypt tpm2 parses the pcr_ids field if the input is given as a JSON array. The clevis-luks-unbind(1) man page no longer refers only to LUKS v1. clevis luks bind does not write to an inactive slot anymore, if the password given is incorrect. clevis luks bind now works while the system uses the non-English locale. Added support for tpm2-tools 4.x. ( BZ#1818780 ) clevis luks edit enables you to edit a specific pin configuration This update of the clevis packages introduces the new clevis luks edit subcommand that enables you to edit a specific pin configuration. For example, you can now change the URL address of a Tang server and the pcr_ids parameter in a TPM2 configuration. You can also add and remove new sss pins and change the threshold of an sss pin. (BZ#1436735) clevis luks bind -y now allows automated binding With this enhancement, Clevis supports automated binding with the -y parameter. You can now use the -y option with the clevis luks bind command, which automatically answers subsequent prompts with yes . For example, when using a Tang pin, you are no longer required to manually trust Tang keys. (BZ#1819767) fapolicyd rebased to version 1.0 The fapolicyd packages have been rebased to version 1.0, which provides multiple bug fixes and enhancements. Notable changes include: The multiple thread synchronization problem has been resolved. Enhanced performance with reduced database size and loading time. A new trust option for the fapolicyd package in the fapolicyd.conf file has been added to customize the trust back end. You can add all trusted files, binaries, and scripts to the new /etc/fapolicyd/fapolicyd.trust file. You can manage the fapolicyd.trust file using the CLI. You can clean or dump the database using the CLI. The fapolicyd package overrides the magic database for better decoding of scripts. The CLI prints the MIME type of the file similar to the file command according to the override. The /etc/fapolicyd/fapolicyd.rules file supports a group of values as attribute values. The fapolicyd daemon has a syslog_format option for setting the format of the audit/syslog events.
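For illustration, the CLI-based trust management mentioned in the fapolicyd entry above typically takes the following form; the binary path is a placeholder and the option names should be checked against the fapolicyd-cli(1) man page for your release:
# fapolicyd-cli --file add /opt/example/bin/example-app
# fapolicyd-cli --update
The first command adds the binary to the file trust database, and the second one makes the running daemon pick up the updated trust data.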
( BZ#1817413 ) fapolicyd now provides its own SELinux policy in fapolicyd-selinux With this enhancement, the fapolicyd framework now provides its own SELinux security policy. The daemon is confined under the fapolicyd_t domain and the policy is installed through the fapolicyd-selinux subpackage. ( BZ#1714529 ) USBGuard rebased to version 0.7.8 The usbguard packages have been rebased to version 0.7.8 which provides multiple bug fixes and enhancements. Notable changes include: The HidePII=true|false parameter in the /etc/usbguard/usbguard-daemon.conf file can now hide personally identifiable information from audit entries. The AuthorizedDefault=keep|none|all|internal parameter in the /etc/usbguard/usbguard-daemon.conf file can predefine authorization state of controller devices. With the new with-connect-type rule attribute, users can now distinguish the connection type of the device. Users can now append temporary rules with the -t option. Temporary rules remain in memory only until the daemon restarts. usbguard list-rules can now filter rules according to certain properties. usbguard generate-policy can now generate a policy for specific devices. The usbguard allow|block|reject command can now handle rule strings, and a target is applied on each device that matches the specified rule string. New subpackages usbguard-notifier and usbguard-selinux are included. ( BZ#1738590 ) USBGuard provides many improvements for corporate desktop users This addition to the USBGuard project contains enhancements and bug fixes to improve the usability for corporate desktop users. Important changes include: For keeping the /etc/usbguard/rules.conf rule file clean, users can define multiple configuration files inside the RuleFolder=/etc/usbguard/rules.d/ directory. By default, the RuleFolder is specified in the /etc/usbguard-daemon.conf file. The usbguard-notifier tool now provides GUI notifications. The tool notifies the user whenever a device is plugged in or plugged out and whether the device is allowed, blocked, or rejected by any user. You can now include comments in the configuration files, because the usbguard-daemon no longer parses lines starting with # . ( BZ#1667395 ) USBGuard now provides its own SELinux policy in usbguard-selinux With this enhancement, the USBGuard framework now provides its own SELinux security policy. The daemon is confined under the usbguard_t domain and the policy is installed through the usbguard-selinux subpackage. ( BZ#1683567 ) libcap now supports ambient capabilities With this update, users are able to grant ambient capabilities at login and prevent the need to have root access for the appropriately configured processes. (BZ#1487388) The libseccomp library has been rebased to version 2.4.3 The libseccomp library, which provides an interface to the seccomp system call filtering mechanism, has been upgraded to version 2.4.3. This update provides numerous bug fixes and enhancements. Notable changes include: Updated the syscall table for Linux v5.4-rc4. No longer defining __NR_x values for system calls that do not exist. __SNR_x is now used internally. Added define for __SNR_ppoll . Fixed a multiplexing issue with s390/s390x shm* system calls. Removed the static flag from the libseccomp tools compilation. Added support for io-uring related system calls. Fixed the Python module naming issue introduced in the v2.4.0 release; the module is named seccomp as it was previously. Fixed a potential memory leak identified by clang in the scmp_bpf_sim tool. 
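A brief sketch of the USBGuard workflow described in the entries above; the device ID and the rule string are placeholders, and the option spelling should be verified against the usbguard(1) man page:
# usbguard generate-policy > /etc/usbguard/rules.d/10-local.conf
# usbguard list-devices
# usbguard append-rule -t 'block id 1234:5678'
The first command writes a policy for the currently attached devices into the RuleFolder directory, and the -t option appends a temporary rule that is kept only until the daemon restarts.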
( BZ#1770693 ) omamqp1 module is now supported With this update, the AMQP 1.0 protocol supports sending messages to a destination on the bus. Previously, Openstack used the AMQP1 protocol as a communication standard, and this protocol can now log messages in AMQP messages. This update introduces the rsyslog-omamqp1 sub-package to deliver the omamqp1 output mode, which logs messages and sends them to the destination on the bus. ( BZ#1713427 ) OpenSCAP compresses remote content With this update, OpenSCAP uses gzip compression for transferring remote content. The most common type of remote content is text-based CVE feeds, which increase in size over time and typically have to be downloaded for every scan. The gzip compression reduces the bandwidth to 10% of bandwidth needed for uncompressed content. As a result, this reduces bandwidth requirements across the entire chain between the scanned system and the server that hosts the remote content. ( BZ#1855708 ) SCAP Security Guide now provides a profile aligned with NIST-800-171 With this update, the scap-security-guide packages provide a profile aligned with the NIST-800-171 standard. The profile enables you to harden the system configuration in accordance with security requirements for protection of Controlled Unclassified Information (CUI) in non-federal information systems. As a result, you can more easily configure systems to be aligned with the NIST-800-171 standard. ( BZ#1762962 ) 5.1.7. Networking The IPv4 and IPv6 connection tracking modules have been merged into the nf_conntrack module This enhancement merges the nf_conntrack_ipv4 and nf_conntrack_ipv6 Netfilter connection tracking modules into the nf_conntrack kernel module. Due to this change, blacklisting the address family-specific modules no longer work in RHEL 8.3, and you can blacklist only the nf_conntrack module to disable connection tracking support for both the IPv4 and IPv6 protocols. (BZ#1822085) firewalld rebased to version 0.8.2 The firewalld packages have been upgraded to upstream version 0.8.2, which provides a number of bug fixes over the version. For details, see the firewalld 0.8.2 Release Notes . ( BZ#1809636 ) NetworkManager rebased to version 1.26.0 The NetworkManager packages have been upgraded to upstream version 1.26.0, which provides a number of enhancements and bug fixes over the version: NetworkManager resets the auto-negotiation, speed, and duplex setting to their original value when deactivating a device. Wi-Fi profiles connect now automatically if all activation attempts failed. This means that an initial failure to auto-connect to the network no longer blocks the automatism. A side effect is that existing Wi-Fi profiles that were previously blocked now connect automatically. The nm-settings-nmcli(5) and nm-settings-dbus(5) man pages have been added. Support for a number of bridge parameters has been added. Support for virtual routing and forwarding (VRF) interfaces has been added. For further details, see Permanently reusing the same IP address on different interfaces . Support for Opportunistic Wireless Encryption mode (OWE) for Wi-Fi networks has been added. NetworkManager now supports 31-bit prefixes on IPv4 point-to-point links according to RFC 3021 . The nmcli utility now supports removing settings using the nmcli connection modify <connection_name> remove <setting> command. NetworkManager no longer creates and activates slave devices if a master device is missing. 
For further information about notable changes, read the upstream release notes: NetworkManager 1.26.0 NetworkManager 1.24.0 ( BZ#1814746 ) XDP is conditionally supported Red Hat supports the eXpress Data Path (XDP) feature only if all of the following conditions apply: You load the XDP program on an AMD or Intel 64-bit architecture You use the libxdp library to load the program into the kernel The XDP program uses one of the following return codes: XDP_ABORTED , XDP_DROP , or XDP_PASS The XDP program does not use the XDP hardware offloading For details about unsupported XDP features, see Overview of XDP features that are available as Technology Preview ( BZ#1889736 ) xdp-tools is partially supported The xdp-tools package, which contains user space support utilities for the kernel eXpress Data Path (XDP) feature, is now supported on the AMD and Intel 64-bit architectures. This includes the libxdp library, the xdp-loader utility for loading XDP programs, and the xdp-filter example program for packet filtering. Note that the xdpdump utility for capturing packets from a network interface with XDP enabled is still a Technology Preview. (BZ#1820670) The dracut utility by default now uses NetworkManager in initial RAM disk Previously, the dracut utility used a shell script to manage networking in the initial RAM disk, initrd . In certain cases, this could cause problems. For example, NetworkManager sends another DHCP request, even if the script in the RAM disk has already requested an IP address, which could result in a timeout. With this update, dracut now uses NetworkManager in the initial RAM disk by default, which prevents the system from running into such issues. In case you want to switch back to the previous implementation and recreate the RAM disk images, use the commands illustrated in the sketch below. (BZ#1626348) Network configuration in the kernel command line has been consolidated under the ip parameter The ipv6 , netmask , gateway , and hostname parameters to set the network configuration in the kernel command line have been consolidated under the ip parameter. The ip parameter accepts different formats, such as the format illustrated in the sketch below. For further details about the individual fields and other formats this parameter accepts, see the description of the ip parameter in the dracut.cmdline(7) man page. The ipv6 , netmask , gateway , and hostname parameters are no longer available in RHEL 8. (BZ#1905138) 5.1.8. Kernel Kernel version in RHEL 8.3 Red Hat Enterprise Linux 8.3 is distributed with the kernel version 4.18.0-240. ( BZ#1839151 ) Extended Berkeley Packet Filter for RHEL 8.3 The Extended Berkeley Packet Filter (eBPF) is an in-kernel virtual machine that allows code execution in the kernel space, in the restricted sandbox environment with access to a limited set of functions. The virtual machine executes a special assembly-like code. The eBPF bytecode first loads to the kernel, followed by its verification, code translation to the native machine code with just-in-time compilation, and then the virtual machine executes the code. Red Hat ships numerous components that utilize the eBPF virtual machine. Each component is in a different development phase, and thus not all components are currently fully supported.
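A sketch of the dracut commands and the ip parameter format referenced in the two networking entries above, based on the dracut(8) and dracut.cmdline(7) documentation; the configuration file name is only an example:
# echo 'add_dracutmodules+=" network-legacy "' > /etc/dracut.conf.d/enable-network-legacy.conf
# dracut -vf --regenerate-all
A simplified form of the ip parameter is:
ip=<client-IP>:[<peer>]:<gateway-IP>:<netmask>:<hostname>:<interface>:{none|dhcp|on|any|dhcp6|auto6}
For example, ip=192.0.2.10::192.0.2.1:255.255.255.0:myhost:ens3:none configures a static address, and ip=ens3:dhcp requests one over DHCP.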
In RHEL 8.3, the following eBPF components are supported: The BPF Compiler Collection (BCC) tools package, which provides tools for I/O analysis, networking, and monitoring of Linux operating systems using eBPF The BCC library which allows the development of tools similar to those provided in the BCC tools package. The eBPF for Traffic Control (tc) feature, which enables programmable packet processing inside the kernel network data path. The eXpress Data Path (XDP) feature, which provides access to received packets before the kernel networking stack processes them, is supported under specific conditions. For more details, refer to the Networking section of Relase Notes. The libbpf package, which is crucial for bpf related applications like bpftrace and bpf/xdp development. For more details, refer to the dedicated release note libbpf fully supported . The xdp-tools package, which contains userspace support utilities for the XDP feature, is now supported on the AMD and Intel 64-bit architectures.This includes the libxdp library, the xdp-loader utility for loading XDP programs, and the xdp-filter example program for packet filtering. Note that the xdpdump utility for capturing packets from a network interface with XDP enabled is still an unsupported Technology Preview. For more details, refer to the Networking section of Release Notes. Note that all other eBPF components are available as Technology Preview, unless a specific component is indicated as supported. The following notable eBPF components are currently available as Technology Preview: The bpftrace tracing language The AF_XDP socket for connecting the eXpress Data Path (XDP) path to user space For more information regarding the Technology Preview components, see Technology Previews . ( BZ#1780124 ) Cornelis Networks Omni-Path Architecture (OPA) Host Software Omni-Path Architecture (OPA) host software is fully supported in Red Hat Enterprise Linux 8.3. OPA provides Host Fabric Interface (HFI) hardware with initialization and setup for high performance data transfers (high bandwidth, high message rate, low latency) between compute and I/O nodes in a clustered environment. ( BZ#1893174 ) TSX is now disabled by default Starting with RHEL 8.3, the kernel now has the Intel(R) Transactional Synchronization Extensions (TSX) technology disabled by default to improve the OS security. The change applies to those CPUs that support disabling TSX , including the 2nd Generation Intel(R) Xeon(R) Scalable Processors (formerly known as Cascade Lake with Intel(R) C620 Series Chipsets). For users whose applications do not use TSX , the change removes the default performance penalty of the TSX Asynchronous Abort (TAA) mitigations on the 2nd Generation Intel(R) Xeon(R) Scalable Processors. The change also aligns the RHEL kernel behavior with upstream, where TSX has been disabled by default since Linux 5.4. To enable TSX , add the tsx=on parameter to the kernel command line. (BZ#1828642) RHEL 8.3 now supports the page owner tracking feature With this update, you can use the page owner tracking feature to observe the kernel memory utilization at the page allocation level. To enable the page tracker, execute the following steps : As a result, the page owner tracker will track the kernel memory consumption, which helps to debug kernel memory leaks and detect the drivers that use a lot of memory. 
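The enablement steps referenced in the page-owner entry above are, in outline, as follows; the exact procedure may differ, so check the RHEL kernel documentation before use:
# grubby --update-kernel=ALL --args="page_owner=on"
# reboot
# cat /sys/kernel/debug/page_owner > page_owner_full.txt
After the reboot, the last command dumps the collected allocation data from debugfs so that it can be post-processed, for example with the upstream page_owner_sort helper from the kernel's tools/vm directory.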
(BZ#1825414) EDAC for AMD EPYCTM 7003 Series Processors is now supported This enhancement provides Error Detection And Correction (EDAC) device support for AMD EPYCTM 7003 Series Processors. Previously, corrected (CEs) and uncorrected (UEs) memory errors were not reported on systems based on AMD EPYCTM 7003 Series Processors. With this update, such errors will now be reported using EDAC. (BZ#1735611) Flamegraph is now supported with perf tool With this update, the perf command line tool supports flamegraphs to create a graphical representation of the system's performance. The perf data is grouped together into samples with similar stack backtraces. As a result, this data is converted into a visual representation to allow easier identification of computationally intensive areas of code. To generate a flamegraph using the perf tool, execute the following commands: Note : To generate flamegraphs, install the js-d3-flame-graph rpm. (BZ#1281843) /dev/random and /dev/urandom are now conditionally powered by the Kernel Crypto API DRBG In FIPS mode, the /dev/random and /dev/urandom pseudorandom number generators are powered by the Kernel Crypto API Deterministic Random Bit Generator (DRBG). Applications in FIPS mode use the mentioned devices as a FIPS-compliant noise source, therefore the devices have to employ FIPS-approved algorithms. To achieve this goal, necessary hooks have been added to the /dev/random driver. As a result, the hooks are enabled in the FIPS mode and cause /dev/random and /dev/urandom to connect to the Kernel Crypto API DRBG. (BZ#1785660) libbpf fully supported The libbpf package, crucial for bpf related applications like bpftrace and bpf/xdp development, is now fully supported. It is a mirror of bpf- linux tree bpf-/tools/lib/bpf directory plus its supporting header files. The version of the package reflects the version of the Application Binary Interface (ABI). (BZ#1759154) lshw utility now provides additional CPU information With this enhancement, the List Hardware utility ( lshw ) displays more CPU information. The CPU version field now provides the family, model and stepping details of the system processors in numeric format as version: <family>.<model>.<stepping> . ( BZ#1794049 ) kernel-rt source tree has been updated to the RHEL 8.3 tree The kernel-rt sources have been updated to use the latest Red Hat Enterprise Linux kernel source tree. The real-time patch set has also been updated to the latest upstream version, v5.6.14-rt7. Both of these updates provide a number of bug fixes and enhancements. (BZ#1818138, BZ#1818142) tpm2-tools rebased to version 4.1.1 The tpm2-tools package has been upgraded to version 4.1.1, which provides a number of command additions, updates, and removals. For more details, see the Updates to tpm2-tools package in RHEL8.3 solution. (BZ#1789682) The Mellanox ConnectX-6 Dx network adapter is now fully supported This enhancement adds the PCI IDs of the Mellanox ConnectX-6 Dx network adapter to the mlx5_core driver. On hosts that use this adapter, RHEL loads the mlx5_core driver automatically. This feature, previously available as a technology preview, is now fully supported in RHEL 8.3. (BZ#1782831) mlxsw driver rebased to version 5.7 The mlxsw driver is upgraded to upstream version 5.7 and include following new features: The shared buffer occupancy feature, which provides buffer occupancy data. The packet drop feature, which enables monitoring the layer 2 , layer 3 , tunnels and access control list drops. Packet trap policers support. 
Default port priority configuration support using Link Layer Discovery Protocol (LLDP) agent. Enhanced Transmission Selection (ETS) and Token Bucket Filter (TBF) queuing discipline offloading support. RED queuing discipline nodrop mode is enabled to prevent early packet drops. Traffic class SKB editing action skbedit priority feature enables changing packet metadata and it complements with pedit Traffic Class Offloading (TOS). (BZ#1821646) The crash kernel now expands memory reserve for kdump With this enhancement, the crashkernel=auto argument now reserves more memory on machines with 4GB to 64GB memory capacity. Previously, due to limited memory reserve, the crash kernel failed to capture the crash dump as the kernel space and user space memory expanded. As a consequence, the crash kernel experienced an out-of-memory (OOM) error. This update helps to reduce the OOM error occurrences in the described scenario and expands the memory capacity for kdump accordingly. (BZ#1746644) 5.1.9. File systems and storage LVM can now manage VDO volumes LVM now supports the Virtual Data Optimizer (VDO) segment type. As a result, you can now use LVM utilities to create and manage VDO volumes as native LVM logical volumes. VDO provides inline block-level deduplication, compression, and thin provisioning features. For more information, see Deduplicating and compressing logical volumes on RHEL . (BZ#1598199) The SCSI stack now works better with high-performance adapters The performance of the SCSI stack has been improved. As a result, next-generation, high-performance host bus adapters (HBAs) are now capable of higher IOPS (I/Os per second) on RHEL. (BZ#1761928) The megaraid_sas driver has been updated to the latest version The megaraid_sas driver has been updated to version 07.713.01.00-rc1. This update provides several bug fixes and enhancements relating to improving performance, better stability of supported MegaRAID adapters, and a richer feature set. (BZ#1791041) Stratis now lists the pool name on error When you attempt to create a Stratis pool on a block device that is already in use by an existing Stratis pool, the stratis utility now reports the name of the existing pool. Previously, the utility listed only the UUID label of the pool. ( BZ#1734496 ) FPIN ELS frame notification support The lpfc Fibre Channel (FC) driver now supports Fabric Performance Impact Notifications (FPINs) regarding link integrity, which helps identify link-level issues and allows the switch to choose a more reliable path. (BZ#1796565) New commands to debug LVM on-disk metadata The pvck utility, which is available from the lvm2 package, now provides low-level commands to debug or rescue LVM on-disk metadata on physical volumes: To extract metadata, use the pvck --dump command. To repair metadata, use the pvck --repair command. For more information, see the pvck(8) man page. (BZ#1541165) LVM RAID supports DM integrity to prevent data loss due to corrupted data on a device It is now possible to add Device Mapper (DM) integrity to an LVM RAID configuration to prevent data loss. The integrity layer detects data corruption on a device and alerts the RAID layer to fix the corrupted data across the LVM RAID. While RAID prevents data loss due to device failure, adding integrity to an LVM RAID array prevents data loss due to corrupted data on a device. You can add the integrity layer when you create a new LVM RAID, or you can add it to an LVM RAID that already exists.
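Illustrative forms of the commands named in the pvck and LVM RAID integrity entries above; device, volume group, and logical volume names are placeholders, and the exact flags should be verified against the pvck(8), lvcreate(8), and lvconvert(8) man pages:
# pvck --dump metadata /dev/sdb1
# pvck --repair --file metadata_backup /dev/sdb1
# lvcreate --type raid1 --raidintegrity y -L 10G -n my_lv my_vg
# lvconvert --raidintegrity y my_vg/existing_lv
The first two commands extract and restore on-disk LVM metadata, and the last two add the DM integrity layer to a new or an existing RAID logical volume.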
(JIRA:RHELPLAN-39320) Resilient Storage (GFS2) supported on AWS, Azure, and Aliyun public clouds Resilient Storage (GFS2) is now supported on three major public clouds, Amazon (AWS), Microsoft (Azure) and Alibaba (Aliyun) with the introduction of shared block device support on those platforms. As a result, GFS2 is now a true hybrid cloud cluster filesystem with options to use both on premises and in the public cloud. For information on configuring shared block storage on Microsoft Azure and on AWS, see Deploying RHEL 8 on Microsoft Azure and Deploying RHEL 8 on Amazon Web Services . For information on configuring shared block storage on Alibaba Cloud, see Configuring Shared Block Storage for a Red Hat High Availability Cluster on Alibaba Cloud . ( BZ#1900019 ) Userspace now supports the latest nfsdcld daemon Userspace now supports the latest nfsdcld daemon, which is the only namespace-aware client tracking method. This enhancement ensures client open or lock recovery from the containerized knfsd daemon without any data corruption. ( BZ#1817756 ) nconnect now supports multiple concurrent connections With this enhancement, you can use the nconnect functionality to create multiple concurrent connections to an NFS server, allowing for a different load balancing ability. Enable the nconnect functionality with the nconnect=X NFS mount option, where X is the number of concurrent connections to use. The current limit is 16. (BZ#1683394, BZ#1761352) nfsdcld daemon for client information tracking is now supported With this enhancement, the nfsdcld daemon is now the default method for tracking per-client information on stable storage. As a result, NFS v4 running in containers allows clients to reclaim opens or locks after a server restart. (BZ#1817752) 5.1.10. High availability and clusters pacemaker rebased to version 2.0.4 The Pacemaker cluster resource manager has been upgraded to upstream version 2.0.4, which provides a number of bug fixes. ( BZ#1828488 ) New priority-fencing-delay cluster property Pacemaker now supports the new priority-fencing-delay cluster property, which allows you to configure a two-node cluster so that in a split-brain situation the node with the fewest resources running is the node that gets fenced. The priority-fencing-delay property can be set to a time duration. The default value for this property is 0 (disabled). If this property is set to a non-zero value, and the priority meta-attribute is configured for at least one resource, then in a split-brain situation the node with the highest combined priority of all resources running on it will be more likely to survive. For example, if you set pcs resource defaults priority=1 and pcs property set priority-fencing-delay=15s and no other priorities are set, then the node running the most resources will be more likely to survive because the other node will wait 15 seconds before initiating fencing. If a particular resource is more important than the rest, you can give it a higher priority. The node running the master role of a promotable clone will get an extra 1 point if a priority has been configured for that clone. Any delay set with priority-fencing-delay will be added to any delay from the pcmk_delay_base and pcmk_delay_max fence device properties. This behavior allows some delay when both nodes have equal priority, or both nodes need to be fenced for some reason other than node loss (for example, on-fail=fencing is set for a resource monitor operation).
If used in combination, it is recommended that you set the priority-fencing-delay property to a value that is significantly greater than the maximum delay from pcmk_delay_base and pcmk_delay_max , to be sure the prioritized node is preferred (twice the value would be completely safe). ( BZ#1784601 ) New commands for managing multiple sets of resource and operation defaults It is now possible to create, list, change and delete multiple sets of resource and operation defaults. When you create a set of default values, you can specify a rule that contains resource and op expressions. This allows you, for example, to configure a default resource value for all resources of a particular type. Commands that list existing default values now include multiple sets of defaults in their output. The pcs resource [op] defaults set create command creates a new set of default values. When specifying rules with this command, only resource and op expressions, including and , or and parentheses, are allowed. The pcs resource [op] defaults set delete | remove command removes sets of default values. The pcs resource [op] defaults set update command changes the default values in a set. (BZ#1817547) Support for tagging cluster resources It is now possible to tag cluster resources in a Pacemaker cluster with the pcs tag command. This feature allows you to administer a specified set of resources with a single command. You can also use the pcs tag command to remove or modify a resource tag, and to display the tag configuration. The pcs resource enable , pcs resource disable , pcs resource manage , and pcs resource unmanage commands accept tag IDs as arguments. ( BZ#1684676 ) Pacemaker now supports recovery by demoting a promoted resource rather than fully stopping it It is now possible to configure a promotable resource in a Pacemaker cluster so that when a promote or monitor action fails for that resource, or the partition in which the resource is running loses quorum, the resource will be demoted but will not be fully stopped. This feature can be useful when you would prefer that the resource continue to be available in the unpromoted mode. For example, if a database master's partition loses quorum, you might prefer that the database resource lose the Master role, but stay alive in read-only mode so applications that only need to read can continue to work despite the lost quorum. This feature can also be useful when a successful demote is both sufficient for recovery and much faster than a full restart. To support this feature: The on-fail operation meta-attribute now accepts a demote value when used with promote actions, as in the following example: The on-fail operation meta-attribute now accepts a demote value when used with monitor actions with both interval set to a nonzero value and role set to Master , as in the following example: The no-quorum-policy cluster property now accepts a demote value. When set, if a cluster partition loses quorum, any promoted resources will be demoted but left running and all other resources will be stopped. Specifying a demote meta-attribute for an operation does not affect how promotion of a resource is determined. If the affected node still has the highest promotion score, it will be selected to be promoted again. 
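A sketch of how the demote-on-failure behavior described above can be configured with pcs; the resource name is a placeholder and the option syntax follows the usual pcs operation form:
# pcs resource update my-db op promote on-fail=demote
# pcs resource update my-db op monitor interval=10s role=Master on-fail=demote
# pcs property set no-quorum-policy=demote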
(BZ#1837747, BZ#1843079 ) New SBD_SYNC_RESOURCE_STARTUP SBD configuration parameter to improve synchronization with Pacemaker To better control synchronization between SBD and Pacemaker, the /etc/sysconfig/sbd file now supports the SBD_SYNC_RESOURCE_STARTUP parameter. When Pacemaker and SBD packages from RHEL 8.3 or later are installed and SBD is configured with SBD_SYNC_RESOURCE_STARTUP=true , SBD contacts the Pacemaker daemon for information about the daemon's state. In this configuration, the Pacemaker daemon will wait until it has been contacted by SBD, both before starting its subdaemons and before final exit. As a result, Pacemaker will not run resources if SBD cannot actively communicate with it, and Pacemaker will not exit until it has reported a graceful shutdown to SBD. This prevents the unlikely situation that might occur during a graceful shutdown when SBD fails to detect the brief moment when no resources are running before Pacemaker finally disconnects, which would trigger an unneeded reboot. Detecting a graceful shutdown using a defined handshake works in maintenance mode as well. The method of detecting a graceful shutdown on the basis of no running resources left had to be disabled in maintenance mode since running resources would not be touched on shutdown. In addition, enabling this feature avoids the risk of a split-brain situation in a cluster when SBD and Pacemaker both start successfully but SBD is unable to contact pacemaker. This could happen, for example, due to SELinux policies. In this situation, Pacemaker would assume that SBD is functioning when it is not. With this new feature enabled, Pacemaker will not complete startup until SBD has contacted it. Another advantage of this new feature is that when it is enabled SBD will contact Pacemaker repeatedly, using a heartbeat, and it is able to panic the node if Pacemaker stops responding at any time. Note If you have edited your /etc/sysconfig/sbd file or configured SBD through PCS, then an RPM upgrade will not pull in the new SBD_SYNC_RESOURCE_STARTUP parameter. In these cases, to implement this feature you must manually add it from the /etc/sysconfig/sbd.rpmnew file or follow the procedure described in the Configuration via environment section of the sbd (8) man page. ( BZ#1718324 , BZ#1743726 ) 5.1.11. Dynamic programming languages, web and database servers A new module stream: ruby:2.7 RHEL 8.3 introduces Ruby 2.7.1 in a new ruby:2.7 module stream. This version provides a number of performance improvements, bug and security fixes, and new features over Ruby 2.6 distributed with RHEL 8.1. Notable enhancements include: A new Compaction Garbage Collector (GC) has been introduced. This GC can defragment a fragmented memory space. Ruby yet Another Compiler-Compiler (Racc) now provides a command-line interface for the one-token Look-Ahead Left-to-Right - LALR(1) - parser generator. Interactive Ruby Shell ( irb ), the bundled Read-Eval-Print Loop (REPL) environment, now supports multi-line editing. Pattern matching, frequently used in functional programming languages, has been introduced as an experimental feature. Numbered parameter as the default block parameter has been introduced as an experimental feature. The following performance improvements have been implemented: Fiber cache strategy has been changed to accelerate fiber creation. Performance of the CGI.escapeHTML method has been improved. Performance of the Monitor class and MonitorMixin module has been improved. 
In addition, automatic conversion of keyword arguments and positional arguments has been deprecated. In Ruby 3.0, positional arguments and keyword arguments will be separated. For more information, see the upstream documentation . To suppress warnings against experimental features, use the -W:no-experimental command-line option. To disable a deprecation warning, use the -W:no-deprecated command-line option or add Warning[:deprecated] = false to your code. To install the ruby:2.7 module stream, use: If you want to upgrade from the ruby:2.6 stream, see Switching to a later stream . (BZ#1817135) A new module stream: nodejs:14 A new module stream, nodejs:14 , is now available. Node.js 14 , included in RHEL 8.3, provides numerous new features and bug and security fixes over Node.js 12 distributed in RHEL 8.1. Notable changes include: The V8 engine has been upgraded to version 8.3. A new experimental WebAssembly System Interface (WASI) has been implemented. A new experimental Async Local Storage API has been introduced. The diagnostic report feature is now stable. The streams APIs have been hardened. Experimental modules warnings have been removed. With the release of the RHEA-2020:5101 advisory, RHEL 8 provides Node.js 14.15.0 , which is the most recent Long Term Support (LTS) version with improved stability. To install the nodejs:14 module stream, use: If you want to upgrade from the nodejs:12 stream, see Switching to a later stream . (BZ#1815402, BZ#1891809 ) git rebased to version 2.27 The git packages have been upgraded to upstream version 2.27. Notable changes over the previously available version 2.18 include: The git checkout command has been split into two separate commands: git switch for managing branches git restore for managing changes within the directory tree The behavior of the git rebase command is now based on the merge workflow by default rather than the patch+apply workflow. To preserve the behavior, set the rebase.backend configuration variable to apply . The git difftool command can now be used also outside a repository. Four new configuration variables, {author,committer}.{name,email} , have been introduced to override user.{name,email} in more specific cases. Several new options have been added that enable users to configure SSL for communication with proxies. Handling of commits with log messages in non-UTF-8 character encoding has been improved in the git fast-export and git fast-import utilities. The lfs extension has been added as a new git-lfs package. Git Large File Storage (LFS) replaces large files with text pointers inside Git and stores the file contents on a remote server. ( BZ#1825114 , BZ#1783391) Changes in Python RHEL 8.3 introduces the following changes to the python38:3.8 module stream: The Python interpreter has been updated to version 3.8.3, which provides several bug fixes. The python38-pip package has been updated to version 19.3.1, and pip now supports installing manylinux2014 wheels. Performance of the Python 3.6 interpreter, provided by the python3 packages, has been significantly improved. The ubi8/python-27 , ubi8/python-36 , and ubi8/python-38 container images now support installing the pipenv utility from a custom package index or a PyPI mirror if provided by the customer. Previously, pipenv could only be downloaded from the upstream PyPI repository, and if the upstream repository was unavailable, the installation failed. 
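The module installation commands referenced in the Ruby and Node.js entries above take the standard RHEL 8 form:
# yum module install ruby:2.7
# yum module install nodejs:14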
( BZ#1847416 , BZ#1724996 , BZ#1827623, BZ#1841001 ) A new module stream: php:7.4 RHEL 8.3 introduces PHP 7.4 , which provides a number of bug fixes and enhancements over version 7.3. This release introduces a new experimental extension, Foreign Function Interface (FFI), which enables you to call native functions, access native variables, and create and access data structures defined in C libraries. The FFI extension is available in the php-ffi package. The following extensions have been removed: The wddx extension, removed from php-xml package The recode extension, removed from the php-recode package. To install the php:7.4 module stream, use: If you want to upgrade from the php:7.3 stream, see Switching to a later stream . For details regarding PHP usage on RHEL 8, see Using the PHP scripting language . ( BZ#1797661 ) A new module stream: nginx:1.18 The nginx 1.18 web and proxy server, which provides a number of bug fixes, security fixes, new features and enhancements over version 1.16, is now available. Notable changes include: Enhancements to HTTP request rate and connection limiting have been implemented. For example, the limit_rate and limit_rate_after directives now support variables, including new USDlimit_req_status and USDlimit_conn_status variables. In addition, dry-run mode has been added for the limit_conn_dry_run and limit_req_dry_run directives. A new auth_delay directive has been added, which enables delayed processing of unauthorized requests. The following directives now support variables: grpc_pass , proxy_upload_rate , and proxy_download_rate . Additional PROXY protocol variables have been added, namely USDproxy_protocol_server_addr and USDproxy_protocol_server_port . To install the nginx:1.18 stream, use: If you want to upgrade from the nginx:1.16 stream, see Switching to a later stream . ( BZ#1826632 ) A new module stream: perl:5.30 RHEL 8.3 introduces Perl 5.30 , which provides a number of bug fixes and enhancements over the previously released Perl 5.26 . The new version also deprecates or removes certain language features. Notable changes with significant impact include: The Math::BigInt::CalcEmu , arybase , and B::Debug modules have been removed File descriptors are now opened with a close-on-exec flag Opening the same symbol as a file and as a directory handle is no longer allowed Subroutine attributes now must precede subroutine signatures The :locked and :uniq attributes have been removed Comma-less variable lists in formats are no longer allowed A bare << here-document operator is no longer allowed Certain formerly deprecated uses of an unescaped left brace ( { ) character in regular expression patterns are no longer permitted The AUTOLOAD() subroutine can no longer be inherited to non-method functions The sort pragma no longer allows specifying a sort algorithm The B::OP::terse() subroutine has been replaced by the B::Concise::b_terse() subroutine The File::Glob::glob() function has been replaced by the File::Glob::bsd_glob() function The dump() function now must be invoked fully qualified as CORE::dump() The yada-yada operator ( ... 
) is now a statement; it cannot be used as an expression Assigning a non-zero value to the USD[ variable now returns a fatal error The USD* and USD# variables are no longer allowed Declaring variables using the my() function in a false condition branch is no longer allowed Using the sysread() and syswrite() functions on the :utf8 handles now returns a fatal error The pack() function no longer returns malformed UTF-8 format Unicode code points with a value greater than IV_MAX are no longer allowed Unicode 12.1 is now supported To upgrade from an earlier perl module stream, see Switching to a later stream . Perl 5.30 is also available as an s2i-enabled ubi8/perl-530 container image. ( BZ#1713592 , BZ#1732828 ) A new module stream: perl-libwww-perl:6.34 RHEL 8.3 introduces a new perl-libwww-perl:6.34 module stream, which provides the perl-libwww-perl package for all versions of Perl available in RHEL 8. The non-modular perl-libwww-perl package, available since RHEL 8.0, which cannot be used with other Perl streams than 5.26, has been obsoleted by the new default perl-libwww-perl:6.34 stream. ( BZ#1781177 ) A new module stream: perl-IO-Socket-SSL:2.066 A new perl-IO-Socket-SSL:2.066 module stream is now available. This module provides the perl-IO-Socket-SSL and perl-Net-SSLeay packages and it is compatible with all Perl streams available in RHEL 8. ( BZ#1824222 ) The squid:4 module stream rebased to version 4.11 The Squid proxy server, provided by the squid:4 module stream, has been upgraded from version 4.4 to version 4.11. This release provides multiple bug and security fixes, and various enhancements, such as new configuration options. (BZ#1829467) Changes in the httpd:2.4 module stream RHEL 8.3 introduces the following notable changes to the Apache HTTP Server, available through the httpd:2.4 module stream: The mod_http2 module rebased to version 1.15.7 Configuration changes in the H2Upgrade and H2Push directives A new H2Padding configuration directive to control padding of the HTTP/2 payload frames Numerous bug fixes. ( BZ#1814236 ) Support for logging to journald from the CustomLog directive in httpd It is now possible to output access (transfer) logs to journald from the Apache HTTP Server by using a new option for the CustomLog directive. The supported syntax is illustrated in the sketch below; priority is any priority string up to debug as used in the LogLevel directive . An example of logging to journald using the combined log format is also shown there. Note that when using this option, the server performance might be lower than when logging directly to flat files. ( BZ#1209162 ) 5.1.12. Compilers and development tools .NET 5 is now available on RHEL .NET 5 is available on Red Hat Enterprise Linux 7, Red Hat Enterprise Linux 8, and OpenShift Container Platform. .NET 5 includes new language versions: C# 9 and F# 5.0. Significant performance improvements were made in the base libraries, GC and JIT. .NET 5 has single file applications, which allows you to distribute .NET applications as a single executable, with all dependencies included. UBI8 images for .NET 5 are available from Red Hat container registry and can be used with OpenShift. To use .NET 5, install the dotnet-sdk-5.0 package (an installation command is shown in the sketch below). For more information, see the .NET 5 documentation . ( BZ#1944677 ) New GCC Toolset 10 GCC Toolset 10 is a compiler toolset that provides recent versions of development tools. It is available as an Application Stream in the form of a Software Collection in the AppStream repository.
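The installation commands and the CustomLog syntax referenced in the PHP, nginx, .NET, and httpd entries above are likely of the following forms, shown here for illustration; verify the journald log target syntax against the Red Hat httpd documentation:
# yum module install php:7.4
# yum module install nginx:1.18
# yum install dotnet-sdk-5.0
CustomLog "journald:priority" format
For example, to log access entries to journald using the combined log format:
CustomLog "journald:info" combined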
The GCC compiler has been updated to version 10.2.1, which provides many bug fixes and enhancements that are available in upstream GCC. The following tools and versions are provided by GCC Toolset 10:
Tool         Version
GCC          10.2.1
GDB          9.2
Valgrind     3.16.0
SystemTap    4.3
Dyninst      10.1.0
binutils     2.35
elfutils     0.180
dwz          0.12
make         4.2.1
strace       5.7
ltrace       0.7.91
annobin      9.29
The commands to install GCC Toolset 10, to run a tool from it, and to run a shell session where tool versions from GCC Toolset 10 override system versions of these tools are illustrated in the sketch that follows the GDB on IBM z15 entry below. For more information, see Using GCC Toolset . The GCC Toolset 10 components are available in the two container images: rhel8/gcc-toolset-10-toolchain , which includes the GCC compiler, the GDB debugger, and the make automation tool. rhel8/gcc-toolset-10-perftools , which includes the performance monitoring tools, such as SystemTap and Valgrind. The command to pull a container image as root is shown in the same sketch. Note that only the GCC Toolset 10 container images are now supported. Container images of earlier GCC Toolset versions are deprecated. For details regarding the container images, see Using the GCC Toolset container images . (BZ#1842656) Rust Toolset rebased to version 1.45.2 Rust Toolset has been updated to version 1.45.2. Notable changes include: The subcommand cargo tree for viewing dependencies is now included in cargo . Casting from floating point values to integers now produces a clamped cast. Previously, when a truncated floating point value was out of range for the target integer type the result was undefined behaviour of the compiler. Non-finite floating point values led to undefined behaviour as well. With this enhancement, finite values are clamped either to the minimum or the maximum range of the integer. Positive and negative infinity values are by default clamped to the maximum and minimum integer respectively; Not-a-Number (NaN) values are clamped to zero. Function-like procedural macros in expressions, patterns, and statements are now extended and stabilized. For detailed instructions regarding usage, see Using Rust Toolset . (BZ#1820593) LLVM Toolset rebased to version 10.0.1 LLVM Toolset has been upgraded to version 10.0.1. With this update, the clang-libs packages no longer include individual component libraries. As a result, it is no longer possible to link applications against them. To link applications against the clang libraries, use the libclang-cpp.so package. For more information, see Using LLVM Toolset . (BZ#1820587) Go Toolset rebased to version 1.14.7 Go Toolset has been upgraded to version 1.14.7. Notable changes include: The Go module system is now fully supported. SSL version 3.0 (SSLv3) is no longer supported. Notable Delve debugger enhancements include: The new command examinemem (or x ) for examining raw memory The new command display for printing values of an expression during each stop of the program The new --tty flag for supplying a Teletypewriter (TTY) for the debugged program The new coredump support for Arm64 The new ability to print goroutine labels The release of the Debug Adapter Protocol (DAP) server The improved output from dlv trace and trace REPL (read-eval-print-loop) commands For more information on Go Toolset, see Using Go Toolset . For more information on Delve, see the upstream Delve documentation . (BZ#1820596) SystemTap rebased to version 4.3 The SystemTap instrumentation tool has been updated to version 4.3, which provides multiple bug fixes and enhancements.
Notable changes include: Userspace probes can be targeted by hexadecimal buildid from readelf -n . This alternative to a path name enables matching binaries to be probed under any name, and thus allows a single script to target a range of different versions. This feature works well in conjunction with the elfutils debuginfod server. Script functions can use probe USDcontext variables to access variables in the probed location, which allows the SystemTap scripts to use common logic to work with a variety of probes. The stapbpf program improvements, including try-catch statements, and error probes, have been made to enable proper error tolerance in scripts running on the BPF backend. For further information about notable changes, read the upstream release notes before updating. ( BZ#1804319 ) Valgrind rebased to version 3.16.0 The Valgrind executable code analysis tool has been updated to version 3.16.0, which provides a number of bug fixes and enhancements over the version: It is now possible to dynamically change the value of many command-line options while your program is running under Valgrind: through vgdb , through a gdb connected to the Valgrind gdbserver, or through program client requests. To get a list of dynamically changeable options, run the valgrind --help-dyn-options command. For the Cachegrind ( cg_annotate ) and Callgrind ( callgrind_annotate ) tools the --auto and --show-percs options now default to yes . The Memcheck tool produces fewer false positive errors on optimized code. In particular, Memcheck now better handles the case when the compiler transformed an A && B check into B && A , where B could be undefined and A was false. Memcheck also better handles integer equality checks and non-equality checks on partially defined values. The experimental Stack and Global Array Checking tool ( exp-sgcheck ) has been removed. An alternative for detecting stack and global array overruns is using the AddressSanitizer (ASAN) facility of GCC, which requires you to rebuild your code with the -fsanitize=address option. ( BZ#1804324 ) elfutils rebased to version 0.180 The elfutils package has been updated to version 0.180, which provides multiple bug fixes and enhancements. Notable changes include: Better support for debug info for code built with GCC LTO (link time optimization). The eu-readelf and libdw utilities now can read and handle .gnu.debuglto_ sections, and correctly resolve file names for functions that are defined across CUs (compile units). The eu-nm utility now explicitly identifies weak objects as V and common symbols as C . The debuginfod server can now index .deb archives and has a generic extension to add other package archive formats using the -Z EXT[=CMD] option. For example -Z '.tar.zst=zstdcat' indicates that archives ending with the .tar.zst extension should be unpacked using the zstdcat utility. The debuginfo-client tool has several new helper functions, such as debuginfod_set_user_data , debuginfod_get_user_data , debuginfod_get_url and debuginfod_add_http_header . It also supports file:// URLs now. ( BZ#1804321 ) GDB now supports process record and replay on IBM z15 With this enhancement, the GNU Debugger (GDB) now supports process record and replay with most of the new instructions of the IBM z15 processor (previously known as arch13). Note that the following instructions are currently not supported: SORTL (sort lists), DFLTCC (deflate conversion call), KDSA (compute digital signature authentication). 
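The GCC Toolset 10 commands referenced earlier in this section (installation, running a single tool, opening a shell session, and pulling the container images) follow the usual gcc-toolset pattern; the registry path shown is the commonly documented one and should be verified:
# yum install gcc-toolset-10
# scl enable gcc-toolset-10 'gcc --version'
# scl enable gcc-toolset-10 bash
# podman pull registry.redhat.io/rhel8/gcc-toolset-10-toolchain
The first command installs the toolset, the two scl commands run one tool or an entire shell session with the GCC Toolset versions taking precedence, and the podman command pulls the toolchain container image; replace the image name with gcc-toolset-10-perftools for the performance tools image.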
(BZ#1659535) Marvell ThunderX2 performance monitoring events have been updated in papi With this enhancement, a number of performance events specific to ThunderX2, including uncore events, have been updated. As a result, developers can better investigate system performance on Marvell ThunderX2 systems. (BZ#1726070) The glibc math library is now optimized for IBM Z With this enhancement, the libm math functions were optimized to improve performance on IBM Z machines. Notable changes include: improved rounding mode handling to avoid superfluous floating point control register sets and extracts exploitation of conversion between z196 integer and float (BZ#1780204) An additional libffi-specific temporary directory is available now Previously on hardened systems, the system-wide temporary directories may not have had permissions suitable for use with the libffi library. With this enhancement, system administrators can now set the LIBFFI_TMPDIR environment variable to point to a libffi-specific temporary directory with both write and exec mount or selinux permissions. ( BZ#1723951 ) Improved performance of strstr() and strcasestr() With this update, the performance of the strstr() and strcasestr() functions has been improved across several supported architectures. As a result, users now benefit from significantly better performance of all applications using string and memory manipulation routines. (BZ#1821531) glibc now handles loading of a truncated locale archive correctly If the archive of system locales has been previously truncated, either due to a power outage during upgrade or a disk failure, a process could terminate unexpectedly when loading the archive. This enhancement adds additional consistency checks to the loading of the locale archive. As a result, processes are now able to detect archive truncation and fall back to either non-archive installed locales or the default POSIX locale. (BZ#1784525) GDB now supports debuginfod With this enhancement, the GNU Debugger (GDB) can now download debug information packages from centralized servers on demand using the elfutils debuginfod client library. ( BZ#1838777 ) pcp rebased to version 5.1.1-3 The pcp package has been upgraded to version 5.1.1-3. Notable changes include: Updated service units and improved systemd integration and reliability for all the PCP services. Improved archive log rotation and more timely compression. Archived discovery bug fixes in the pmproxy protocol. Improved pcp-atop , pcp-dstat , pmrep , and related monitor tools along with metric labels reporting in the pmrep and export tools. Improved bpftrace , OpenMetrics , MMV, the Linux kernel agent, and other collection agents. New metric collectors for the Open vSwitch and RabbitMQ servers. New host discovery pmfind systemd service, which replaces the standalone pmmgr daemon. ( BZ#1792971 ) grafana rebased to version 6.7.3 The grafana package has been upgraded to version 6.7.3. Notable changes include: Generic OAuth role mapping support A new logs panel Multi-line text display in the table panel A new currency and energy units ( BZ#1807323 ) grafana-pcp rebased to version 2.0.2 The grafana-pcp package has been upgraded to version 2.0.2. Notable changes include: Supports the multidimensional eBPF maps to be graphed in the flamegraph. Removes an auto-completion cache in the query editor, so that the PCP metrics can appear dynamically. ( BZ#1807099 ) A new rhel8/pcp container image The rhel8/pcp container image is now available in the Red Hat Container Registry. 
The image contains the Performance Co-Pilot (PCP) toolkit, which includes preinstalled pcp-zeroconf package and the OpenMetrics PMDA. (BZ#1497296) A new rhel8/grafana container image The rhel8/grafana container image is now available in the Red Hat Container Registry. Grafana is an open source utility with metrics dashboard, and graph editor for the Graphite , Elasticsearch , OpenTSDB , Prometheus , InfluxDB , and PCP monitoring tool. ( BZ#1823834 ) 5.1.13. Identity Management IdM backup utility now checks for required replica roles The ipa-backup utility now checks if all of the services used in the IdM cluster, such as a Certificate Authority (CA), Domain Name System (DNS), and Key Recovery Agent (KRA) are installed on the replica where you are running the backup. If the replica does not have all these services installed, the ipa-backup utility exits with a warning, because backups taken on that host would not be sufficient for a full cluster restoration. For example, if your IdM deployment uses an integrated Certificate Authority (CA), a backup run on a non-CA replica will not capture CA data. Red Hat recommends verifying that the replica where you perform an ipa-backup has all of the IdM services used in the cluster installed. For more information, see Preparing for data loss with IdM backups . ( BZ#1810154 ) New password expiration notification tool Expiring Password Notification (EPN), provided by the ipa-client-epn package, is a standalone tool you can use to build a list of Identity Management (IdM) users whose passwords are expiring soon. IdM administrators can use EPN to: Display a list of affected users in JSON format, which is calculated at runtime Calculate how many emails will be sent for a given day or date range Send password expiration email notifications to users Red Hat recommends launching EPN once a day from an IdM client or replica with the included ipa-epn.timer systemd timer. (BZ#913799) JSS now provides a FIPS-compliant SSLContext Previously, Tomcat used the SSLEngine directive from the Java Cryptography Architecture (JCA) SSLContext class. The default SunJSSE implementation is not compliant with the Federal Information Processing Standard (FIPS), therefore PKI now provides a FIPS-compliant implementation via JSS. ( BZ#1821851 ) Checking the overall health of your public key infrastructure is now available With this update, the public key infrastructure (PKI) Healthcheck tool reports the health of the PKI subsystem to the Identity Management (IdM) Healthcheck tool, which was introduced in RHEL 8.1. Executing the IdM Healthcheck invokes the PKI Healthcheck, which collects and returns the health report of the PKI subsystem. The pki-healthcheck tool is available on any deployed RHEL IdM server or replica. All the checks provided by pki-healthcheck are also integrated into the ipa-healthcheck tool. ipa-healthcheck can be installed separately from the idm:DL1 module stream. Note that pki-healthcheck can also work in a standalone Red Hat Certificate System (RHCS) infrastructure. (BZ#1770322) Support for RSA PSS With this enhancement, PKI now supports the RSA PSS (Probabilistic Signature Scheme) signing algorithm. To enable this feature, set the following line in the pkispawn script file for a given subsystem: pki_use_pss_rsa_signing_algorithm=True As a result, all existing default signing algorithms for this subsystem (specified in its CS.cfg configuration file) will use the corresponding PSS version. 
For example, SHA256withRSA becomes SHA256withRSA/PSS ( BZ#1824948 ) Directory Server exports the private key and certificate to a private name space when the service starts Directory Server uses OpenLDAP libraries for outgoing connections, such as replication agreements. Because these libraries cannot access the network security services (NSS) database directly, Directory Server extracts the private key and certificates from the NSS database on instances with TLS encryption support to enable the OpenLDAP libraries to establish encrypted connections. Previously, Directory Server extracted the private key and certificates to the directory set in the nsslapd-certdir parameter in the cn=config entry (default: /etc/dirsrv/slapd-<instance_name>/ ). As a consequence, Directory Server stored the Server-Cert-Key.pem and Server-Cert.pem in this directory. With this enhancement, Directory Server extracts the private key and certificate to a private name space that systemd mounts to the /tmp/ directory. As a result, the security has been increased. ( BZ#1638875 ) Directory Server can now turn an instance to read-only mode if the disk monitoring threshold is reached This update adds the nsslapd-disk-monitoring-readonly-on-threshold parameter to the cn=config entry. If you enable this setting, Directory Server switches all databases to read-only if disk monitoring is enabled and the free disk space is lower than the value you configured in nsslapd-disk-monitoring-threshold . With nsslapd-disk-monitoring-readonly-on-threshold set to on , the databases cannot be modified until Directory Server successfully shuts down the instance. This can prevent data corruption. (BZ#1728943) samba rebased to version 4.12.3 The samba packages have been upgraded to upstream version 4.12.3, which provides a number of bug fixes and enhancements over the version: Built-in cryptography functions have been replaced with GnuTLS functions. This improves the server message block version 3 (SMB3) performance and copy speed significantly. The minimum runtime support is now Python 3.5. The write cache size parameter has been removed because the write cache concept could reduce the performance on memory-constrained systems. Support for authenticating connections using Kerberos tickets with DES encryption types has been removed. The vfs_netatalk virtual file system (VFS) module has been removed. The ldap ssl ads parameter is marked as deprecated and will be removed in a future Samba version. For information about how to alternatively encrypt LDAP traffic and further details, see the samba: removal of "ldap ssl ads" smb.conf option solution. By default, Samba on RHEL 8.3 no longer supports the deprecated RC4 cipher suite. If you run Samba as a domain member in an AD that still requires RC4 for Kerberos authentication, use the update-crypto-policies --set DEFAULT:AD-SUPPORT command to enable support for the RC4 encryption type. Samba automatically updates its tdb database files when the smbd , nmbd , or winbind service starts. Back up the database files before starting Samba. Note that Red Hat does not support downgrading tdb database files. For further information about notable changes, read the upstream release notes before updating. ( BZ#1817557 ) cockpit-session-recording rebased to version 4 The cockpit-session-recording module has been rebased to version 4. This version provides following notable changes over the version: Updated parent id in the metainfo file. Updated package manifest. 
Fixed rpmmacro to resolve correct path on CentOS7. Handled byte-array encoded journal data. Moved code out of deprecated React lifecycle functions. ( BZ#1826516 ) krb5 rebased to version 1.18.2 The krb5 packages have been upgraded to upstream version 1.18.2. Notable fixes and enhancements include: Single- and triple-DES encryption types have been removed. Draft 9 PKINIT has been removed as it is not needed for any of the supported versions of Active Directory. NegoEx mechanism plug-ins are now supported. Hostname canonicalization fallback is now supported ( dns_canonicalize_hostname = fallback ). (BZ#1802334) IdM now supports new Ansible management modules This update introduces several ansible-freeipa modules for automating common Identity Management (IdM) tasks using Ansible playbooks: The config module allows setting global configuration parameters within IdM. The dnsconfig module allows modifying global DNS configuration. The dnsforwardzone module allows adding and removing DNS forwarders from IdM. The dnsrecord allows the management of DNS records. In contrast to the upstream ipa_dnsrecord , it allows multiple record management in one execution, and it supports more record types. The dnszone module allows configuring zones in the DNS server. The service module allows ensuring the presence and absence of services. The vault module allows ensuring the presence and absence of vaults and of the members of vaults. Note that the ipagroup and ipahostgroup modules have been extended to include user and host group membership managers, respectively. A group membership manager is a user or a group that can add members to a group or remove members from a group. For more information, see the Variables sections of the respective /usr/share/doc/ansible-freeipa/README-* files. (JIRA:RHELPLAN-49954) IdM now supports a new Ansible system role for certificate management Identity Management (IdM) supports a new Ansible system role for automating certificate management tasks. The new role includes the following benefits: The role helps automate the issuance and renewal of certificates. The role can be configured to have the ipa certificate authority issue your certificates. In this way, you can use your existing IdM infrastructure to manage the certificate trust chain. The role allows you to specify the commands to be executed before and after a certificate is issued, for example the stopping and starting of services. (JIRA:RHELPLAN-50002) Identity Management now supports FIPS With this enhancement, you can now use encryption types that are approved by the Federal Information Processing Standard (FIPS) with the authentication mechanisms in Identity Management (IdM). Note that a cross-forest trust between IdM and Active Directory is not FIPS compliant. Customers who require FIPS but do not require an AD trust can now install IdM in FIPS mode. (JIRA:RHELPLAN-43531) OpenDNSSEC in idm:DL1 rebased to version 2.1 The OpenDNSSEC component of the idm:DL1 module stream has been upgraded to the 2.1 version series, which is the current long term upstream support version. OpenDNSSEC is an open source project driving the adoption of Domain Name System Security Extensions (DNSSEC) to further enhance Internet security. OpenDNSSEC 2.1 provides a number of bug fixes and enhancements over the version. 
For more information, read the upstream release notes: https://www.opendnssec.org/archive/releases/ (JIRA:RHELPLAN-48838) IdM now supports the deprecated RC4 cipher suite with a new system-wide cryptographic subpolicy This update introduces the new AD-SUPPORT cryptographic subpolicy that enables the Rivest Cipher 4 (RC4) cipher suite in Identity Management (IdM). As an administrator in the context of IdM-Active Directory (AD) cross-forest trusts, you can activate the new AD-SUPPORT subpolicy when AD is not configured to use Advanced Encryption Standard (AES). More specifically, Red Hat recommends enabling the new subpolicy if one of the following conditions applies: The user or service accounts in AD have RC4 encryption keys and lack AES encryption keys. The trust links between individual Active Directory domains have RC4 encryption keys and lack AES encryption keys. To enable the AD-SUPPORT subpolicy in addition to the DEFAULT cryptographic policy, enter the update-crypto-policies --set DEFAULT:AD-SUPPORT command. Alternatively, to upgrade trusts between AD domains in an AD forest so that they support strong AES encryption types, see the following Microsoft article: AD DS: Security: Kerberos "Unsupported etype" error when accessing a resource in a trusted domain . (BZ#1851139) Adjusting to new Microsoft LDAP channel binding and LDAP signing requirements With recent Microsoft updates, Active Directory (AD) flags the clients that do not use the default Windows settings for LDAP channel binding and LDAP signing. As a consequence, RHEL systems that use the System Security Services Daemon (SSSD) for direct or indirect integration with AD might trigger error Event IDs in AD upon successful Simple Authentication and Security Layer (SASL) operations that use the Generic Security Services Application Program Interface (GSSAPI). To prevent these notifications, configure client applications to use the Simple and Protected GSSAPI Negotiation Mechanism (GSS-SPNEGO) SASL mechanism instead of GSSAPI. To configure SSSD, set the ldap_sasl_mech option to GSS-SPNEGO . Additionally, if channel binding is enforced on the AD side, configure any systems that use SASL with SSL/TLS in the following way: Install the latest versions of the cyrus-sasl , openldap and krb5-libs packages that are shipped with RHEL 8.3 and later. In the /etc/openldap/ldap.conf file, specify the correct channel binding type by setting the SASL_CBINDING option to tls-endpoint . For more information, see Impact of Microsoft Security Advisory ADV190023 | LDAP Channel Binding and LDAP Signing on RHEL and AD integration . ( BZ#1873567 ) SSSD, adcli, and realmd now support the deprecated RC4 cipher suite with a new system-wide cryptographic subpolicy This update introduces the new AD-SUPPORT cryptographic subpolicy that enables the Rivest Cipher 4 (RC4) cipher suite for the following utilities: the System Security Services Daemon (SSSD) adcli realmd As an administrator, you can activate the new AD-SUPPORT subpolicy when Active Directory (AD) is not configured to use Advanced Encryption Standard (AES) in the following scenarios: SSSD is used on a RHEL system connected directly to AD. adcli is used to join an AD domain or to update host attributes, for example the host key. realmd is used to join an AD domain. Red Hat recommends enabling the new subpolicy if one of the following conditions applies: The user or service accounts in AD have RC4 encryption keys and lack AES encryption keys. The trust links between individual Active Directory domains have RC4 encryption keys and lack AES encryption keys.
To enable the AD-SUPPORT subpolicy in addition to the DEFAULT cryptographic policy, enter the update-crypto-policies --set DEFAULT:AD-SUPPORT command. ( BZ#1866695 ) authselect has a new minimal profile The authselect utility has a new minimal profile. You can use this profile to serve only local users and groups directly from system files instead of using other authentication providers. Therefore, you can safely remove the SSSD , winbind , and fprintd packages and can use this profile on systems that require minimal installation to save disk and memory space. (BZ#1654018) SSSD now updates Samba's secrets.tdb file when rotating a password A new ad_update_samba_machine_account_password option in the sssd.conf file is now available in RHEL. You can use it to set SSSD to automatically update the Samba secrets.tdb file when rotating a machine's domain password while using Samba. However, if SELinux is in enforcing mode, SSSD fails to update the secrets.tdb file. Consequently, Samba does not have access to the new password. To work around this problem, set SELinux to permissive mode. ( BZ#1793727 ) SSSD now enforces AD GPOs by default The default setting for the SSSD option ad_gpo_access_control is now enforcing . In RHEL 8, SSSD enforces access control rules based on Active Directory Group Policy Objects (GPOs) by default. Red Hat recommends ensuring GPOs are configured correctly in Active Directory before upgrading from RHEL 7 to RHEL 8. If you do not want to enforce GPOs, change the value of the ad_gpo_access_control option in the /etc/sssd/sssd.conf file to permissive . (JIRA:RHELPLAN-51289) Directory Server now supports the pwdReset operation attribute This enhancement adds support for the pwdReset operation attribute to Directory Server. When an administrator changes the password of a user, Directory Server sets pwdReset in the user's entry to true . As a result, applications can use this attribute to identify if a password of a user has been reset by an administrator. Note that pwdReset is an operational attribute and, therefore, users cannot edit it. ( BZ#1775285 ) Directory Server now logs the work and operation time in RESULT entries With this update, Directory Server now logs two additional time values in RESULT entries in the /var/log/dirsrv/slapd-<instance_name>/access file: The wtime value indicates how long it took for an operation to move from the work queue to a worker thread. The optime value shows the time the actual operation took to be completed once a worker thread started the operation. The new values provide additional information about how the Directory Server handles load and processes operations. For further details, see the Access Log Reference section in the Red Hat Directory Server Configuration, Command, and File Reference. ( BZ#1850275 ) 5.1.14. Desktop Single-application session is now available You can now start GNOME in a single-application session, also known as kiosk mode. In this session, GNOME displays only a full-screen window of an application that you have configured. To enable the single-application session: 1. Install the gnome-session-kiosk-session package. 2. Create and edit the $HOME/.local/bin/redhat-kiosk file of the user that will open the single-application session. In the file, enter the executable name of the application that you want to launch, for example the Text Editor application. 3. Make the file executable. 4. At the GNOME login screen, select the Kiosk session from the cogwheel button menu and log in as the single-application user. Example commands for these steps are shown below.
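The following is a minimal sketch of the procedure above. It assumes that gedit is the Text Editor executable and that the launcher script starts only that one application; the exact script contents may differ in your environment. Run the yum command as root and the remaining commands as the kiosk user.

yum install gnome-session-kiosk-session
mkdir -p $HOME/.local/bin
cat > $HOME/.local/bin/redhat-kiosk << 'EOF'
#!/bin/sh
# Start the single configured application (gedit is an assumed example)
gedit
EOF
chmod +x $HOME/.local/bin/redhat-kiosk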
(BZ#1739556) tigervnc has been rebased to version 1.10.1 The tigervnc suite has been rebased to version 1.10.1. The update contains number of fixes and improvements. Most notably: tigervnc now only supports starting of the virtual network computing (VNC) server using the systemd service manager. The clipboard now supports full Unicode in the native viewer, WinVNC and Xvnc/libvnc.so. The native client will now respect the system trust store when verifying server certificates. The Java web server has been removed. x0vncserver can now be configured to only allow local connections. x0vncserver has received fixes for when only part of the display is shared. Polling is now default in WinVNC . Compatibility with VMware's VNC server has been improved. Compatibility with some input methods on macOS has been improved. Automatic "repair" of JPEG artefacts has been improved. ( BZ#1806992 ) 5.1.15. Graphics infrastructures Support for new graphics cards The following graphics cards are now fully supported: The AMD Navi 14 family, which includes the following models: Radeon RX 5300 Radeon RX 5300 XT Radeon RX 5500 Radeon RX 5500 XT The AMD Renoir APU family, which includes the following models: Ryzen 3 4300U Ryzen 5 4500U, 4600U, and 4600H Ryzen 7 4700U, 4800U, and 4800H The AMD Dali APU family, which includes the following models: Athlon Silver 3050U Athlon Gold 3150U Ryzen 3 3250U Additionally, the following graphics drivers have been updated: The Matrox mgag200 driver (JIRA:RHELPLAN-55009) Hardware acceleration with Nvidia Volta and Turing The nouveau graphics driver now supports hardware acceleration with the Nvidia Volta and Turing GPU families. As a result, the desktop and applications that use 3D graphics now render efficiently on the GPU. Additionally, this frees the CPU for other tasks and improves the overall system responsiveness. (JIRA:RHELPLAN-57564) Reduced display tearing on XWayland The XWayland display back end now enables the XPresent extension. Using XPresent, applications can efficiently update their window content, which reduces display tearing. This feature significantly improves the user interface rendering of full-screen OpenGL applications, such as 3D editors. (JIRA:RHELPLAN-57567) Intel Tiger Lake GPUs are now supported This update adds support for the Intel Tiger Lake family of GPUs. This includes Intel UHD Graphics and Intel Xe GPUs found with the following CPU models: https://ark.intel.com/content/www/us/en/ark/products/codename/88759/tiger-lake.html . You no longer have to set the i915.alpha_support=1 or i915.force_probe=* kernel option to enable Tiger Lake GPU support. This enhancement was released as part of the RHSA-2021:0558 asynchronous advisory. (BZ#1882620) 5.1.16. The web console Setting privileges from within the web console session With this update the web console provides an option to switch between administrative access and limited access from inside of a user session. You can switch between the modes by clicking the Administrative access or Limited access indicator in your web console session. (JIRA:RHELPLAN-42395) Improvements to logs searching With this update, the web console introduces a search box that supports several new ways of how the users can search among logs. The search box supports regular expression searching in log messages, specifying service or searching for entries with specific log fields. 
( BZ#1710731 ) Overview page shows more detailed Insights reports With this update, when a machine is connected to Red Hat Insights, the Health card in the Overview page in the web console shows more detailed information about number of hits and their priority. (JIRA:RHELPLAN-42396) 5.1.17. Red Hat Enterprise Linux system roles Terminal log role added to RHEL system roles With this enhancement, a new Terminal log (TLOG) role has been added to RHEL system roles shipped with the rhel-system-roles package. Users can now use the tlog role to setup and configure session recording using Ansible. Currently, the tlog role supports the following tasks: Configure tlog to log recording data to the systemd journal Enable session recording for explicit users and groups, via SSSD ( BZ#1822158 ) RHEL Logging system role is now available for Ansible With the Logging system role, you can deploy various logging configurations consistently on local and remote hosts. You can configure a RHEL host as a server to collect logs from many client systems. ( BZ#1677739 ) rhel-system-roles-sap fully supported The rhel-system-roles-sap package, previously available as a Technology Preview, is now fully supported. It provides Red Hat Enterprise Linux (RHEL) system roles for SAP, which can be used to automate the configuration of a RHEL system to run SAP workloads. These roles greatly reduce the time to configure a system to run SAP workloads by automatically applying the optimal settings that are based on best practices outlined in relevant SAP Notes. Access is limited to RHEL for SAP Solutions offerings. Please contact Red Hat Customer Support if you need assistance with your subscription. The following new roles in the rhel-system-roles-sap package are fully supported: sap-preconfigure sap-netweaver-preconfigure sap-hana-preconfigure For more information, see Red Hat Enterprise Linux system roles for SAP . (BZ#1660832) The metrics RHEL system role is now available for Ansible. With the metrics RHEL system role, you can configure, for local and remote hosts: performance analysis services via the pcp application visualisation of this data using a grafana server querying of this data using the redis data source without having to manually configure these services separately. ( BZ#1890499 ) rhel-system-roles-sap upgraded The rhel-system-roles-sap packages have been upgraded to upstream version 2.0.0, which provides multiple bug fixes and enhancements. Notable changes include: Improve hostname configuration and checking Improve uuidd status detection and handling Add support for the --check (-c) option Increase nofile limits from 32800 to 65536 Add the nfs-utils file to sap_preconfigure_packages * Disable firewalld . With this change we disable firewalld only when it is installed. Add minimum required versions of the setup package for RHEL 8.0 and RHEL 8.1. Improve the tmpfiles.d/sap.conf file handling Support single step execution or checking of SAP notes Add the required compat-sap-c++ packages Improve minimum package installation handling Detect if a reboot is required after applying the RHEL system roles Support setting any SElinux state. 
Default state is "disabled". No longer fail if there is more than one line with identical IP addresses No longer modify /etc/hosts if there is more than one line containing sap_ip Support for HANA on RHEL 7.7 Support for adding a repository for the IBM service and productivity tools for Power, required for SAP HANA on the ppc64le platform ( BZ#1844190 ) The storage RHEL system role now supports file system management With this enhancement, administrators can use the storage RHEL system role to: resize an ext4 file system; resize an LVM file system; create a swap partition, if it does not exist, or modify the swap partition, if it already exists, on a block device using the default parameters. (BZ#1959289) 5.1.18. Virtualization Migrating a virtual machine to a host with incompatible TSC setting now fails faster Previously, migrating a virtual machine to a host with incompatible Time Stamp Counter (TSC) setting failed late in the process. With this update, attempting such a migration generates an error before the migration process starts. (JIRA:RHELPLAN-45950) Virtualization support for 2nd generation AMD EPYC processors With this update, virtualization on RHEL 8 adds support for the 2nd generation AMD EPYC processors, also known as EPYC Rome. As a result, virtual machines hosted on RHEL 8 can now use the EPYC-Rome CPU model and utilize new features that the processors provide. (JIRA:RHELPLAN-45959) New command: virsh iothreadset This update introduces the virsh iothreadset command, which can be used to configure dynamic IOThread polling. This makes it possible to set up virtual machines with lower latencies for I/O-intensive workloads at the expense of greater CPU consumption for the IOThread. For specific options, see the virsh man page; a brief usage sketch follows this group of items. (JIRA:RHELPLAN-45958) UMIP is now supported by KVM on 10th generation Intel Core processors With this update, the User-mode Instruction Prevention (UMIP) feature is now supported by KVM for hosts running on 10th generation Intel Core processors, also known as Ice Lake Servers. The UMIP feature issues a general protection exception if certain instructions, such as sgdt , sidt , sldt , smsw , and str , are executed when the Current Privilege Level (CPL) is greater than 0. As a result, UMIP ensures system security by preventing unauthorized applications from accessing certain system-wide settings which can be used to initiate privilege escalation attacks. (JIRA:RHELPLAN-45957) The libvirt library now supports Memory Bandwidth Allocation libvirt now supports Memory Bandwidth Allocation (MBA). With MBA, you can allocate parts of host memory bandwidth in vCPU threads by using the <memorytune> element in the <cputune> section. MBA is an extension of the existing Cache QoS Enforcement (CQE) feature found in the Intel Xeon v4 processors, also known as Broadwell server. For tasks that are associated with the CPU affinity, the mechanism used by MBA is the same as in CQE. (JIRA:RHELPLAN-45956) RHEL 6 virtual machines now support the Q35 machine type Virtual machines (VMs) hosted on RHEL 8 that use RHEL 6 as their guest OS can now use Q35, a more modern PCI Express-based machine type. This provides a variety of improvements in features and performance of virtual devices, and ensures that a wider range of modern devices are compatible with RHEL 6 VMs. (JIRA:RHELPLAN-45952)
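As a rough illustration of the virsh iothreadset item above, the following commands add an IOThread to a running virtual machine and tune its polling interval. The domain name rhel8-vm and the values are examples only; see the virsh man page for the authoritative option list.

virsh iothreadadd rhel8-vm 1 --live
virsh iothreadset rhel8-vm 1 --poll-max-ns 100000 --live

The --poll-max-ns option caps the polling interval in nanoseconds, while --poll-grow and --poll-shrink control how quickly the interval adapts to the workload.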
QEMU logs now include time stamps for spice-server events This update adds time stamps to`spice-server` event logs. Therefore, all logged QEMU events now have a time stamp. As a result, users can more easily troubleshoot their virtual machines using logs saved in the /var/log/libvirt/qemu/ directory. (JIRA:RHELPLAN-45945) The bochs-display device is now supported RHEL 8.3 and later introduce the Bochs display device, which is more secure than the currently used stdvga device. Note that all virtual machines (VMs) compatible with bochs-display will use it by default. This mainly includes VMs that use the UEFI interface. (JIRA:RHELPLAN-45939) Optimized MDS protection for virtual machines With this update, a RHEL 8 host can inform its virtual machines (VMs) whether they are vulnerable to Microarchitectural Data Sampling (MDS). VMs that are not vulnerable do not use measures against MDS, which improves their performance. (JIRA:RHELPLAN-45937) Creating QCOW2 disk images on RBD now supported With this update, it is possible to create QCOW2 disk images on RADOS Block Device (RBD) storage. As a result, virtual machines can use RBD servers for their storage back ends with QCOW2 images. Note, however, that the write performance of QCOW2 disk images on RBD storage is currently lower than intended. (JIRA:RHELPLAN-45936) Maximum supported VFIO devices increased to 64 With this update, you can attach up to 64 PCI devices that use VFIO to a single virtual machine on a RHEL 8 host. This is up from 32 in RHEL 8.2 and prior. (JIRA:RHELPLAN-45930) discard and write-zeroes commands are now supported in QEMU/KVM With this update, the discard and write-zeroes commands for virtio-blk are now supported in QEMU/KVM. As a result, virtual machines can use the virtio-blk device to discard unused sectors of an SSD, fill sectors with zeroes when they are emptied, or both. This can be used to increase SSD performance or to ensure that a drive is securely erased. (JIRA:RHELPLAN-45926) RHEL 8 now supports IBM POWER 9 XIVE This update introduces support for the External Interrupt Virtualization Engine (XIVE) feature of IBM POWER9 to RHEL 8. As a result, virtual machines (VMs) running on a RHEL 8 hypervisor on an IBM POWER 9 system can use XIVE, which improves the performance of I/O-intensive VMs. (JIRA:RHELPLAN-45922) Control Group v2 support for virtual machines With this update, the libvirt suite supports control groups v2. As a result, virtual machines hosted on RHEL 8 can take advantage of resource control capabilities of control group v2. (JIRA:RHELPLAN-45920) Paravirtualized IPIs are now supported for Windows virtual machines With this update, the hv_ipi flag has been added to the supported hypervisor enlightenments for Windows virtual machines (VMs). This allows inter-processor interrupts (IPIs) to be sent via a hypercall. As a result, IPIs can be performed faster on VMs running a Windows OS. (JIRA:RHELPLAN-45918) Migrating virtual machines with enabled disk cache is now possible This update makes the RHEL 8 KVM hypervisor compatible with disk cache live migration. As a result, it is now possible to live-migrate virtual machines with disk cache enabled. (JIRA:RHELPLAN-45916) macvtap interfaces can now be used by virtual machines in non-privileged sessions It is now possible for virtual machines (VMs) to use a macvtap interface previously created by a privileged process. Notably, this enables VMs started by the non-privileged user session of libvirtd to use a macvtap interface. 
To do so, first create a macvtap interface in a privileged environment and set it to be owned by the user who will be running libvirtd in a non-privileged session. You can do this using a management application such as the web console, or using command-line utilities as root, for example: Afterwards, modify the <target> sub-element of the VM's <interface> configuration to reference the newly created macvtap interface: With this configuration, if libvirtd is run as the user myuser , the VM will use the existing macvtap interface when started. (JIRA:RHELPLAN-45915) Virtual machines can now use features of 10th generation Intel Core processors The Icelake-Server and Icelake-Client CPU model names are now available for virtual machines (VMs). On hosts with 10th generation Intel Core processors, using Icelake-Server or Icelake-Client as the CPU type in the XML configuration of a VM makes new features of these CPUs exposed to the VM. (JIRA:RHELPLAN-45911) QEMU now supports LUKS encryption With this update, it is possible to create virtual disks using Linux Unified Key Setup (LUKS) encryption. You can encrypt the disks when creating the storage volume by including the <encryption> field in the virtual machine's (VM) XML configuration. You can also make the LUKS encrypted virtual disk completely transparent to the VM by including the <encryption> field in the disk's domain definition in the XML configuration file. (JIRA:RHELPLAN-45910) Improved logs for nbdkit The nbdkit service logging has been modified to be less verbose. As a result, nbdkit logs only potentially important messages, and the logs created during virt-v2v conversions are shorter and easier to parse. (JIRA:RHELPLAN-45909) Improved consistency for virtual machines SELinux security labels and permissions With this update, the libvirt service can record SELinux security labels and permissions associated with files, and restore the labels after modifying the files. As a result, for example, using libguestfs utilities to modify a virtual machine (VM) disk image owned by a specific user no longer changes the image owner to root. Note that this feature does not work on file systems that do not support extended file attributes, such as NFS. (JIRA:RHELPLAN-45908) QEMU now uses the gcrypt library for XTS ciphers With this update, the QEMU emulator has been changed to use the XTS cipher mode implementation provided by the gcrypt library. This improves the I/O performance of virtual machines whose host storage uses QEMU's native luks encryption driver. (JIRA:RHELPLAN-45904) Windows Virtio drivers can now be updated using Windows Updates With this update, a new standard SMBIOS string is initiated by default when QEMU starts. The parameters provided in the SMBIOS fields make it possible to generate IDs for the virtual hardware running on the virtual machine(VM). As a result, Windows Update can identify the virtual hardware and the RHEL hypervisor machine type, and update the Virtio drivers on VMs running Windows 10+, Windows Server 2016, and Windows Server 2019+. (JIRA:RHELPLAN-45901) New command: virsh guestinfo The virsh guestinfo command has been introduced to RHEL 8.3. This makes it possible to report the following types of information about a virtual machine (VM): Guest OS and file system information Active users The time zone used Before running virsh guestinfo , ensure that the qemu-guest-agent package is installed. 
In addition, the guest_agent channel must be enabled in the VM's XML configuration, for example as follows: (JIRA:RHELPLAN-45900) VNNI for BFLOAT16 inputs are now supported by KVM With this update, Vector Neural Network Instructions (VNNI) supporting BFLOAT16 inputs, also known as AVX512_BF16 instructions, are now supported by KVM for hosts running on the 3rd Gen Intel Xeon scalable processors, also known as Cooper Lake. As a result, guest software can now use the AVX512_BF16 instructions inside virtual machines, by enabling it in the virtual CPU configuration. (JIRA:RHELPLAN-45899) New command: virsh pool-capabilities RHEL 8.3 introduces the virsh pool-capabilities command option. This command displays information that can be used for creating storage pools, as well as storage volumes within each pool, on your host. This includes: Storage pool types Storage pool source formats Target storage volume format types (JIRA:RHELPLAN-45884) Support for CPUID.1F in virtual machines with Intel Xeon Platinum 9200 series processors With this update, virtual machines hosted on RHEL 8 can be configured with a virtual CPU topology of multiple dies, using the Extended Topology Enumeration leaf feature (CPUID.1F). This feature is supported by Intel Xeon Platinum 9200 series processors, previously known as Cascade Lake. As a result, it is now possible on hosts that use Intel Xeon Platinum 9200 series processors to create a vCPU topology that mirrors the physical CPU topology of the host. (JIRA:RHELPLAN-37573, JIRA:RHELPLAN-45934) Virtual machines can now use features of 3rd Generation Intel Xeon Scalable Processors The Cooperlake CPU model name is now available for virtual machines (VMs). Using Cooperlake as the CPU type in the XML configuration of a VM makes new features from the 3rd Generation Intel Xeon Scalable Processors exposed to the VM, if the host uses this CPU. (JIRA:RHELPLAN-37570) Intel Optane persistent memory now supported by KVM With this update, virtual machines hosted on RHEL 8 can benefit from the Intel Optane persistent memory technology, previously known as Intel Crystal Ridge. Intel Optane persistent memory storage devices provide data center-class persistent memory technology, which can significantly increase transaction throughput. (JIRA:RHELPLAN-14068) Virtual machines can now use Intel Processor Trace With this update, virtual machines (VMs) hosted on RHEL 8 are able to use the Intel Processor Trace (PT) feature. When your host uses a CPU that supports Intel PT, you can use specialized Intel software to collect a variety of metrics about the performance of your VM's CPU. Note that this also requires enabling the intel-pt feature in the XML configuration of the VM. (JIRA:RHELPLAN-7788) DASD devices can now be assigned to virtual machines on IBM Z Direct-access storage devices (DASDs) provide a number of specific storage features. Using the vfio-ccw feature, you can assign DASDs as mediated devices to your virtual machines (VMs) on IBM Z hosts. This for example makes it possible for the VM to access a z/OS dataset, or to share the assigned DASDs with a z/OS machine. (JIRA:RHELPLAN-40234) IBM Secure Execution supported for IBM Z When using IBM Z hardware to run your RHEL 8 host, you can improve the security of your virtual machines (VMs) by configuring IBM Secure Execution for the VMs. IBM Secure Execution, also known as Protected Virtualization, prevents the host system from accessing a VM's state and memory contents. 
As a result, even if the host is compromised, it cannot be used as a vector for attacking the guest operating system. In addition, Secure Execution can be used to prevent untrusted hosts from obtaining sensitive information from the VM. (JIRA:RHELPLAN-14754) 5.1.19. RHEL in cloud environments cloud-utils-growpart rebased to 0.31 The cloud-utils-growpart package has been upgraded to version 0.31, which provides multiple bug fixes and enhancements. Notable changes include: A bug that prevented GPT disks from being grown past 2TB has been fixed. The growpart operation no longer fails when the start sector and size are the same. Resizing a partition using the sgdisk utility previously in some cases failed. This problem has now been fixed. ( BZ#1846246 ) 5.1.20. Containers skopeo container image is now available The registry.redhat.io/rhel8/skopeo container image is a containerized implementation of the skopeo package. The skopeo tool is a command-line utility that performs various operations on container images and image repositories. This container image allows you to inspect container images in a registry, to remove a container image from a registry, and to copy container images from one unauthenticated container registry to another. To pull the registry.redhat.io/rhel8/skopeo container image, you need an active Red Hat Enterprise Linux subscription. ( BZ#1627900 ) buildah container image is now available The registry.redhat.io/rhel8/buildah container image is a containerized implementation of the buildah package. The buildah tool facilitates building OCI container images. This container image allows you to build container images without the need to install the buildah package on your system. The use-case does not cover running this image in rootless mode as a non-root user. To pull the registry.redhat.io/rhel8/buildah container image, you need an active Red Hat Enterprise Linux subscription. ( BZ#1627898 ) Podman v2.0 RESTful API is now available The new REST based Podman 2.0 API replaces the old remote API based on the varlink library. The new API works in both a rootful and a rootless environment and provides a docker compatibility layer. (JIRA:RHELPLAN-37517) Installing Podman does not require container-selinux With this enhancement, the installation of the container-selinux package is now optional during the container build. As a result, Podman has fewer dependencies on other packages. ( BZ#1806044 ) 5.2. Important changes to external kernel parameters This chapter provides system administrators with a summary of significant changes in the kernel shipped with Red Hat Enterprise Linux 8.3. These changes could include for example added or updated proc entries, sysctl , and sysfs default values, boot parameters, kernel configuration options, or any noticeable behavior changes. New kernel parameters acpi_no_watchdog = [HW,ACPI,WDT] This parameter enables to ignore the Advanced Configuration and Power Interface (ACPI) based watchdog interface (WDAT) and let the native driver control the watchdog device instead. dfltcc = [HW,S390] This parameter configures the zlib hardware support for IBM Z architectures. 
Format: { on | off | def_only | inf_only | always } The options are: on (default) - IBM Z zlib hardware support for compression on level 1 and decompression off - No IBM Z zlib hardware support def_only - IBM Z zlib hardware support for the deflate algorithm only (compression on level 1) inf_only - IBM Z zlib hardware support for the inflate algorithm only (decompression) always - Similar to on , but ignores the selected compression level and always uses hardware support (used for debugging) irqchip.gicv3_pseudo_nmi = [ARM64] This parameter enables support for pseudo non-maskable interrupts (NMIs) in the kernel. To use this parameter you need to build the kernel with the CONFIG_ARM64_PSEUDO_NMI configuration item. panic_on_taint = Bitmask for conditionally calling panic() in add_taint() Format: <hex>[, nousertaint ] A hexadecimal bitmask which represents a set of TAINT flags that will cause the kernel to panic when the add_taint() kernel function is invoked with any of the flags in this set. The optional nousertaint switch prevents userspace from forcing crashes by writing a flag set matching the bitmask in panic_on_taint to the /proc/sys/kernel/tainted file. For more information, see the upstream documentation. prot_virt = [S390] Format: <bool> This parameter enables hosting of protected virtual machines which are isolated from the hypervisor if the hardware support is present. rcutree.use_softirq = [KNL] This parameter enables elimination of Tree-RCU softirq processing. If you set this parameter to zero, it moves all RCU_SOFTIRQ processing to per-CPU rcuc kthreads. If you set rcutree.use_softirq to a non-zero value (default), RCU_SOFTIRQ is used by default. Specify rcutree.use_softirq=0 to use rcuc kthreads. split_lock_detect = [X86] This parameter enables split lock detection. When enabled, and if hardware support is present, atomic instructions that access data across cache line boundaries will result in an alignment check exception. The options are: off - not enabled warn - the kernel will emit rate limited warnings about applications that trigger the Alignment Check Exception (#AC). This mode is the default on CPUs that support split lock detection. fatal - the kernel will send a Bus error (SIGBUS) signal to applications that trigger the #AC exception. If the #AC exception is hit while not executing in the user mode, the kernel will issue an oops error in either the warn or fatal mode. srbds = [X86,INTEL] This parameter controls the Special Register Buffer Data Sampling (SRBDS) mitigation. Certain CPUs are vulnerable to a Microarchitectural Data Sampling (MDS)-like exploit which can leak bits from the random number generator. By default, microcode mitigates this issue. However, the microcode fix can cause the RDRAND and RDSEED instructions to become much slower. Among other effects, this will result in reduced throughput from the urandom kernel random number source device. To disable the microcode mitigation, set the following option: off - Disable mitigation and remove performance impact to RDRAND and RDSEED svm = [PPC] Format: { on | off | y | n | 1 | 0 } This parameter controls the use of the Protected Execution Facility on pSeries systems. nopv = [X86,XEN,KVM,HYPER_V,VMWARE] This parameter disables the PV optimizations which forces the guest to run as a generic guest with no PV drivers. Currently supported are XEN HVM, KVM, HYPER_V and VMWARE guests. Updated kernel parameters hugepagesz = [HW] This parameter specifies a huge page size.
Use this parameter in conjunction with the hugepages parameter to pre-allocate a number of huge pages of the specified size. Specify the hugepagesz and hugepages parameters in pairs such as: The hugepagesz parameter can only be specified once on the command line for a specific huge page size. Valid huge page sizes are architecture dependent. hugepages = [HW] This parameter specifies the number of huge pages to pre-allocate. This parameter typically follows the valid hugepagesz or default_hugepagesz parameter. However, if hugepages is the first or the only HugeTLB command-line parameter, it implicitly specifies the number of huge pages of the default size to allocate. If the number of huge pages of the default size is implicitly specified, it can not be overwritten by the hugepagesz + hugepages parameter pair for the default size. For example, on an architecture with 2M default huge page size: Settings from the example above results in allocation of 256 2M huge pages and a warning message that the hugepages=512 parameter was ignored. If hugepages is preceded by invalid hugepagesz , hugepages will be ignored. default_hugepagesz = [HW] This parameter specifies the default huge page size. You can specify default_hugepagesz only once on the command-line. Optionally, you can follow default_hugepagesz with the hugepages parameter to pre-allocate a specific number of huge pages of the default size. Also, you can implicitly specify the number of default-sized huge pages to pre-allocate. For example, on an architecture with 2M default huge page size: Settings from the example above all results in allocation of 256 2M huge pages. Valid default huge page size is architecture dependent. efi = [EFI] Format: { "old_map", "nochunk", "noruntime", "debug", "nosoftreserve" } The options are: old_map [X86-64] - Switch to the old ioremap-based EFI runtime services mapping. 32-bit still uses this one by default nochunk - Disable reading files in "chunks" in the EFI boot stub, as chunking can cause problems with some firmware implementations noruntime - Disable EFI runtime services support debug - Enable miscellaneous debug output nosoftreserve - The EFI_MEMORY_SP (Specific Purpose) attribute sometimes causes the kernel to reserve the memory range for a memory mapping driver to claim. Specify efi=nosoftreserve to disable this reservation and treat the memory by its base type (for example EFI_CONVENTIONAL_MEMORY / "System RAM"). intel_iommu = [DMAR] Intel IOMMU driver Direct Memory Access Remapping (DMAR). The added options are: nobounce (Default off) - Disable bounce buffer for untrusted devices such as the Thunderbolt devices. This will treat the untrusted devices as the trusted ones. Hence this setting might expose security risks of direct memory access (DMA) attacks. mem = nn[KMG] [KNL,BOOT] This parameter forces the usage of a specific amount of memory. The amount of memory to be used in cases as follows: For test. When the kernel is not able to see the whole system memory. Memory that lies after the mem boundary is excluded from the hypervisor, then assigned to KVM guests. [X86] Work as limiting max address. Use together with the memmap parameter to avoid physical address space collisions. Without memmap , Peripheral Component Interconnect (PCI) devices could be placed at addresses belonging to unused RAM. Note that this setting only takes effect during the boot time since in the case 3 above, the memory may need to be hot added after the boot if the system memory of hypervisor is not sufficient. 
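The command-line examples referenced in the hugepagesz, hugepages, and default_hugepagesz entries above appear to have been dropped during conversion. Based on the outcomes described in the text and on the upstream kernel documentation, the intended examples are along the following lines (shown for an architecture with a 2M default huge page size):

hugepagesz=2M hugepages=512
    pre-allocates 512 huge pages of 2 MB
hugepages=256 hugepagesz=2M hugepages=512
    allocates 256 2M huge pages and prints a warning that hugepages=512 is ignored
default_hugepagesz=2M hugepages=256
hugepages=256 default_hugepagesz=2M
    both allocate 256 huge pages of the 2M default size

On RHEL 8, such parameters can be added to the kernel command line with, for example, the grubby --update-kernel=ALL --args="default_hugepagesz=2M hugepages=256" command.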
pci = [PCI] Various Peripheral Component Interconnect (PCI) subsystem options. Some options herein operate on a specific device or a set of devices ( <pci_dev> ). These are specified in one of the following formats: Note that the first format specifies a PCI bus/device/function address which may change if new hardware is inserted, if motherboard firmware changes, or due to changes caused by other kernel parameters. If the domain is left unspecified, it is taken to be zero. Optionally, a path to a device through multiple device/function addresses can be specified after the base address (this is more robust against renumbering issues). The second format selects devices using IDs from the configuration space which may match multiple devices in the system. The options are: hpmmiosize - The fixed amount of bus space which is reserved for hotplug bridge's Memory-mapped I/O (MMIO) window. The default size is 2 megabytes. hpmmioprefsize - The fixed amount of bus space which is reserved for hotplug bridge's MMIO_PREF window. The default size is 2 megabytes. pcie_ports = [PCIE] Peripheral Component Interconnect Express (PCIe) port services handling. The options are: native - Use native PCIe services (PME, AER, DPC, PCIe hotplug) even if the platform does not give the OS permission to use them. This setting may cause conflicts if the platform also tries to use these services. dpc-native - Use native PCIe service for DPC only. This setting may cause conflicts if firmware uses AER or DPC. compat - Disable native PCIe services (PME, AER, DPC, PCIe hotplug). rcu_nocbs = [KNL] The argument is a CPU list. The string "all" can be used to specify every CPU on the system. usbcore.authorized_default = [USB] The default USB device authorization. The options are: -1 (Default) - Authorized except for wireless USB 0 - Not authorized 1 - Authorized 2 - Authorized if the device is connected to the internal port usbcore.old_scheme_first = [USB] This parameter enables to start with the old device initialization scheme. This setting applies only to low and full-speed devices (default 0 = off). usbcore.quirks = [USB] A list of quirk entries to augment the built-in USB core quirk list. The list entries are separated by commas. Each entry has the form VendorID:ProductID:Flags , for example quirks=0781:5580:bk,0a5c:5834:gij . The IDs are 4-digit hex numbers and Flags is a set of letters. Each letter will change the built-in quirk; setting it if it is clear and clearing it if it is set. The added flags: o - USB_QUIRK_HUB_SLOW_RESET , hub needs extra delay after resetting its port New /proc/sys/fs parameters protected_fifos This parameter is based on the restrictions in the Openwall software and provides protection by allowing to avoid unintentional writes to an attacker-controlled FIFO where a program intended to create a regular file. The options are: 0 - Writing to FIFOs is unrestricted. 1 - Does not allow the O_CREAT flag open on FIFOs that we do not own in world writable sticky directories unless they are owned by the owner of the directory. 2 - Applies to group writable sticky directories. protected_regular This parameter is similar to the protected_fifos parameter, however it avoids writes to an attacker-controlled regular file where a program intended to create one. The options are: 0 - Writing to regular files is unrestricted. 1 - Does not allow the O_CREAT flag open on regular files that we do not own in world writable sticky directories unless they are owned by the owner of the directory. 
2 - Applies to group writable sticky directories. 5.3. Device Drivers 5.3.1. New drivers Network drivers CAN driver for Kvaser CAN/USB devices (kvaser_usb.ko.xz) Driver for Theobroma Systems UCAN devices (ucan.ko.xz) Pensando Ethernet NIC Driver (ionic.ko.xz) Graphics drivers and miscellaneous drivers Generic Remote Processor Framework (remoteproc.ko.xz) Package Level C-state Idle Injection for Intel(R) CPUs (intel_powerclamp.ko.xz) X86 PKG TEMP Thermal Driver (x86_pkg_temp_thermal.ko.xz) INT3402 Thermal driver (int3402_thermal.ko.xz) ACPI INT3403 thermal driver (int3403_thermal.ko.xz) Intel(R) acpi thermal rel misc dev driver (acpi_thermal_rel.ko.xz) INT3400 Thermal driver (int3400_thermal.ko.xz) Intel(R) INT340x common thermal zone handler (int340x_thermal_zone.ko.xz) Processor Thermal Reporting Device Driver (processor_thermal_device.ko.xz) Intel(R) PCH Thermal driver (intel_pch_thermal.ko.xz) DRM gem ttm helpers (drm_ttm_helper.ko.xz) Device node registration for cec drivers (cec.ko.xz) Fairchild FUSB302 Type-C Chip Driver (fusb302.ko.xz) VHOST IOTLB (vhost_iotlb.ko.xz) vDPA-based vhost backend for virtio (vhost_vdpa.ko.xz) VMware virtual PTP clock driver (ptp_vmw.ko.xz) Intel(R) LPSS PCI driver (intel-lpss-pci.ko.xz) Intel(R) LPSS core driver (intel-lpss.ko.xz) Intel(R) LPSS ACPI driver (intel-lpss-acpi.ko.xz) Mellanox watchdog driver (mlx_wdt.ko.xz) Mellanox FAN driver (mlxreg-fan.ko.xz) Mellanox regmap I/O access driver (mlxreg-io.ko.xz) Intel(R) speed select interface pci mailbox driver (isst_if_mbox_pci.ko.xz) Intel(R) speed select interface mailbox driver (isst_if_mbox_msr.ko.xz) Intel(R) speed select interface mmio driver (isst_if_mmio.ko.xz) Mellanox LED regmap driver (leds-mlxreg.ko.xz) vDPA Device Simulator (vdpa_sim.ko.xz) Intel(R) Tiger Lake PCH pinctrl/GPIO driver (pinctrl-tigerlake.ko.xz) PXA2xx SSP SPI Controller (spi-pxa2xx-platform.ko.xz) CE4100/LPSS PCI-SPI glue code for PXA's driver (spi-pxa2xx-pci.ko.xz) Hyper-V PCI Interface (pci-hyperv-intf.ko.xz) vDPA bus driver for virtio devices (virtio_vdpa.ko.xz) 5.3.2. Updated drivers Network driver updates VMware vmxnet3 virtual NIC driver (vmxnet3.ko.xz) has been updated to version 1.5.0.0-k. Realtek RTL8152/RTL8153 Based USB Ethernet Adapters (r8152.ko.xz) has been updated to version 1.09.10. Broadcom BCM573xx network driver (bnxt_en.ko.xz) has been updated to version 1.10.1. The Netronome Flow Processor (NFP) driver (nfp.ko.xz) has been updated to version 4.18.0-240.el8.x86_64. Intel(R) Ethernet Switch Host Interface Driver (fm10k.ko.xz) has been updated to version 0.27.1-k. Intel(R) Ethernet Connection E800 Series Linux Driver (ice.ko.xz) has been updated to version 0.8.2-k. Storage driver updates Emulex LightPulse Fibre Channel SCSI driver (lpfc.ko.xz) has been updated to version 0:12.8.0.1. QLogic FCoE Driver (bnx2fc.ko.xz) has been updated to version 2.12.13. LSI MPT Fusion SAS 3.0 Device Driver (mpt3sas.ko.xz) has been updated to version 34.100.00.00. Driver for HP Smart Array Controller version (hpsa.ko.xz) has been updated to version 3.4.20-170-RH5. QLogic Fibre Channel HBA Driver (qla2xxx.ko.xz) has been updated to version 10.01.00.25.08.3-k. Broadcom MegaRAID SAS Driver (megaraid_sas.ko.xz) has been updated to version 07.714.04.00-rh1. Graphics and miscellaneous driver updates Standalone drm driver for the VMware SVGA device (vmwgfx.ko.xz) has been updated to version 2.17.0.0. Crypto Co-processor for Chelsio Terminator cards. (chcr.ko.xz) has been updated to version 1.0.0.0-ko. 5.4. 
Bug fixes This part describes bugs fixed in Red Hat Enterprise Linux 8.3 that have a significant impact on users. 5.4.1. Installer and image creation RHEL 8 initial setup now works properly via SSH Previously, the RHEL 8 initial setup interface did not display when logged in to the system using SSH. As a consequence, it was impossible to perform the initial setup on a RHEL 8 machine managed via SSH. This problem has been fixed, and RHEL 8 initial setup now works correctly when performed via SSH. ( BZ#1676439 ) Installation failed when using the reboot --kexec command Previously, the RHEL 8 installation failed when a Kickstart file that contained the reboot --kexec command was used. With this update, the installation with reboot --kexec now works as expected. ( BZ#1672405 ) America/New York time zone can now be set correctly Previously, the interactive Anaconda installation process did not allow users to set the America/New York time zone when using a kickstart file. With this update, users can now set America/New York as the preferred time zone in the interactive installer if a time zone is not specified in the kickstart file. (BZ#1665428) SELinux contexts are now set correctly Previously, when SELinux was in enforcing mode, incorrect SELinux contexts on some folders and files resulted in unexpected AVC denials when attempting to access these files after installation. With this update, Anaconda sets the correct SELinux contexts. As a result, you can now access the folders and files without manually relabeling the filesystem. ( BZ#1775975 ) Automatic partitioning now creates a valid /boot partition Previously, when installing RHEL on a system using automatic partitioning or using a kickstart file with preconfigured partitions, the installer created a partitioning scheme that could contain an invalid /boot partition. Consequently, the automatic installation process ended prematurely because the verification of the partitioning scheme failed. With this update, Anaconda creates a partitioning scheme that contains a valid /boot partition. As a result, the automatic installation completes as expected. (BZ#1630299) A GUI installation using the Binary DVD ISO image now completes successfully without CDN registration Previously, when performing a GUI installation using the Binary DVD ISO image file, a race condition in the installer prevented the installation from proceeding until you registered the system using the Connect to Red Hat feature. With this update, you can now proceed with the installation without registering the system using the Connect to Red Hat feature. (BZ#1823578) iSCSI or FCoE devices created in Kickstart and used in ignoredisk --only-use command no longer stop the installation process Previously, when the iSCSI or FCoE devices created in Kickstart were used in the ignoredisk --only-use command, the installation program failed with an error similar to Disk "disk/by-id/scsi-360a9800042566643352b476d674a774a" given in ignoredisk command does not exist . This stopped the installation process. With this update, the problem has been fixed. The installation program continues working. (BZ#1644662) System registration using CDN failed with the error message Name or service not known When you attempted to register a system using the Content Delivery Network (CDN), the registration process failed with the error message Name or service not known . This issue occurred because the empty Custom server URL and Custom Base URL values overwrote the default values for system registration. 
With this update, the empty values now do not overwrite the default values, and the system registration completes successfully. ( BZ#1862116 )

5.4.2. Software management

dnf-automatic now updates only packages with correct GPG signatures
Previously, the dnf-automatic configuration file did not check GPG signatures of downloaded packages before performing an update. As a consequence, unsigned updates, or updates signed by a key that was not imported, could be installed by dnf-automatic even though the repository configuration requires a GPG signature check ( gpgcheck=1 ). With this update, the problem has been fixed, and dnf-automatic checks GPG signatures of downloaded packages before performing the update. As a result, only updates with correct GPG signatures are installed from repositories that require a GPG signature check. ( BZ#1793298 )

Trailing comma no longer causes removal of entries in an append type option
Previously, adding a trailing comma (an empty entry at the end of the list) to an append type option (for example, exclude , excludepkgs , includepkgs ) caused all entries in the option to be removed. Also, adding two commas (an empty entry) caused only the entries after the commas to be used. With this update, empty entries other than leading commas (an empty entry at the beginning of the list) are ignored. As a result, only the leading comma now removes existing entries from the append type option, and the user can use it to overwrite these entries. ( BZ#1788154 )

5.4.3. Shells and command-line tools

The ReaR disk layout no longer includes entries for Rancher 2 Longhorn iSCSI devices and file systems
This update removes entries for Rancher 2 Longhorn iSCSI devices and file systems from the disk layout created by ReaR . ( BZ#1843809 )

Rescue image creation with a file larger than 4 GB is now enabled on IBM POWER, little endian
Previously, the ReaR utility could not create rescue images containing files larger than 4 GB on the IBM POWER, little endian architecture. With this update, the problem has been fixed, and it is now possible to create a rescue image with a file larger than 4 GB on IBM POWER, little endian. ( BZ#1729502 )

5.4.4. Security

SELinux no longer prevents systemd-journal-gatewayd from calling newfstatat() on /dev/shm/ files used by corosync
Previously, the SELinux policy did not contain a rule that allows the systemd-journal-gatewayd daemon to access files created by the corosync service. As a consequence, SELinux denied systemd-journal-gatewayd permission to call the newfstatat() function on shared memory files created by corosync . With this update, SELinux no longer prevents systemd-journal-gatewayd from calling newfstatat() on shared memory files created by corosync . (BZ#1746398)

Libreswan now works with seccomp=enabled on all configurations
Prior to this update, the set of allowed syscalls in the Libreswan SECCOMP support implementation did not match new usage of RHEL libraries. Consequently, when SECCOMP was enabled in the ipsec.conf file, the syscall filtering rejected even syscalls required for the proper functioning of the pluto daemon; the daemon was killed, and the ipsec service was restarted. With this update, all newly required syscalls have been allowed, and Libreswan now works with the seccomp=enabled option correctly. ( BZ#1544463 )
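For reference, a minimal sketch of where the seccomp option is typically set; the section name and values follow the ipsec.conf(5) conventions and should be verified against the installed Libreswan version:

    # /etc/ipsec.conf (excerpt)
    config setup
        # Enable syscall filtering for the pluto daemon;
        # other documented values are "disabled" and "tolerant".
        seccomp=enabled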
SELinux no longer prevents auditd from halting or powering off the system
Previously, the SELinux policy did not contain a rule that allows the Audit daemon to start a power_unit_file_t systemd unit. Consequently, auditd could not halt or power off the system even when configured to do so in cases such as no space left on a logging disk partition. This update of the selinux-policy packages adds the missing rule, and auditd can now properly halt and power off the system even with SELinux in enforcing mode. ( BZ#1826788 )

IPTABLES_SAVE_ON_STOP now works correctly
Previously, the IPTABLES_SAVE_ON_STOP feature of the iptables service did not work because files with saved IP tables content received an incorrect SELinux context. This prevented the iptables script from changing permissions, and the script subsequently failed to save the changes. This update defines a proper context for the iptables.save and ip6tables.save files, and creates a filename transition rule. As a consequence, the IPTABLES_SAVE_ON_STOP feature of the iptables service works correctly. ( BZ#1776873 )

NSCD databases can now use different modes
Domains in the nsswitch_domain attribute are allowed access to Name Service Cache Daemon (NSCD) services. Each NSCD database is configured in the nscd.conf file, and the shared property determines whether the database uses Shared memory or Socket mode. Previously, all NSCD databases had to use the same access mode, depending on the nscd_use_shm boolean value. Now, using a Unix stream socket is always allowed, and therefore different NSCD databases can use different modes. ( BZ#1772852 )

The oscap-ssh utility now works correctly when scanning a remote system with --sudo
When performing a Security Content Automation Protocol (SCAP) scan of a remote system using the oscap-ssh tool with the --sudo option, the oscap tool on the remote system saves scan result files and report files into a temporary directory as the root user. Previously, if the umask settings on the remote machine were changed, oscap-ssh might have been prevented from accessing these files. This update fixes the issue, and as a result, oscap saves the files as the target user, and oscap-ssh accesses the files normally. ( BZ#1803116 )

OpenSCAP now handles remote file systems correctly
Previously, OpenSCAP did not reliably detect remote file systems if their mount specification did not start with two slashes. As a consequence, OpenSCAP handled some network-based file systems as local. With this update, OpenSCAP identifies file systems using the file-system type instead of the mount specification. As a result, OpenSCAP now handles remote file systems correctly. ( BZ#1870087 )

OpenSCAP no longer removes blank lines from YAML multi-line strings
Previously, OpenSCAP removed blank lines from YAML multi-line strings within Ansible remediations generated from a datastream. This affected the Ansible remediations and caused the openscap utility to fail the corresponding Open Vulnerability and Assessment Language (OVAL) checks, producing false positive results. The issue is now fixed and, as a result, openscap no longer removes blank lines from YAML multi-line strings. ( BZ#1795563 )

OpenSCAP can now scan systems with large numbers of files without running out of memory
Previously, when scanning systems with low RAM and large numbers of files, the OpenSCAP scanner sometimes caused the system to run out of memory. With this update, OpenSCAP scanner memory management has been improved. As a result, the scanner no longer runs out of memory on systems with low RAM when scanning large numbers of files, for example, the Server with GUI and Workstation package groups.
( BZ#1824152 ) config.enabled now controls statements correctly Previously, the rsyslog incorrectly evaluated the config.enabled directive during the configuration processing of a statement. As a consequence, the parameter not known errors were displayed for each statement except for the include() one. With this update, the configuration is processed for all statements equally. As a result, config.enabled now correctly disables or enables statements without displaying any error. (BZ#1659383) fapolicyd no longer prevents RHEL updates When an update replaces the binary of a running application, the kernel modifies the application binary path in memory by appending the " (deleted)" suffix. Previously, the fapolicyd file access policy daemon treated such applications as untrusted, and prevented them from opening and executing any other files. As a consequence, the system was sometimes unable to boot after applying updates. With the release of the RHBA-2020:5242 advisory, fapolicyd ignores the suffix in the binary path so the binary can match the trust database. As a result, fapolicyd enforces the rules correctly and the update process can finish. ( BZ#1897090 ) The e8 profile can now be used to remediate RHEL 8 systems with Server with GUI Using the OpenSCAP Anaconda Add-on to harden the system on the Server With GUI package group with profiles that select rules from the Verify Integrity with RPM group no longer requires an extreme amount of RAM on the system. The cause of this problem was the OpenSCAP scanner. For more details, see Scanning large numbers of files with OpenSCAP causes systems to run out of memory . As a consequence, the hardening of the system using the RHEL 8 Essential Eight (e8) profile now works also with Server With GUI . (BZ#1816199) 5.4.5. Networking Automatic loading of iptables extension modules by the nft_compat module no longer hangs Previously, when the nft_compat module loaded an extension module while an operation on network name spaces ( netns ) happened in parallel, a lock collision could occur if that extension registered a pernet subsystem during initialization. As a consequence, the kernel-called modprobe command hang. This could also be caused by other services, such as libvirtd , that also execute iptables commands. This problem has been fixed. As a result, loading iptables extension modules by the nft_compat module no longer hangs. (BZ#1757933) The firewalld service now removes ipsets when the service stops Previously, stopping the firewalld service did not remove ipsets . This update fixes the problem. As a result, ipsets are no longer left in the system after firewalld stops. ( BZ#1790948 ) firewalld no longer retains ipset entries after shutdown Previously, shutting down firewalld did not remove ipset entries. Consequently, ipset entries remained active in the kernel even after stopping the firewalld service. With this fix, shutting down firewalld removes ipset entries as expected. ( BZ#1682913 ) firewalld now restores ipset entries after reloading Previously, firewalld did not retain runtime ipset entries after reloading. Consequently, users had to manually add the missing entries again. With this update, firewalld has been modified to restore ipset entries after reloading. ( BZ#1809225 ) nftables and firewalld services are now mutually exclusive Previously, it was possible to enable nftables and firewalld services at the same time. As a consequence, nftables was overriding firewalld rulesets. 
With this update, nftables and firewalld services are now mutually exclusive so that these cannot be enabled at the same time. ( BZ#1817205 ) 5.4.6. Kernel The huge_page_setup_helper.py script now works correctly A patch that updated the huge_page_setup_helper.py script for Python 3 was accidentally removed. Consequently, after executing huge_page_setup_helper.py , the following error message appeared: With this update, the problem has been fixed by updating the libhugetlbfs.spec file. As a result, huge_page_setup_helper.py does not show any error in the described scenario. (BZ#1823398) Systems with a large amount of persistent memory boot more quickly and without timeouts Systems with a large amount of persistent memory took a long time to boot because the original source code allowed for just one initialization thread per node. For example, for a 4-node system there were 4 memory initialization threads. Consequently, if there were persistent memory file systems listed in the /etc/fstab file, the system could time out while waiting for devices to become available. With this update, the problem has been fixed because the source code now allows for multiple memory initialization threads within a single node. As a result, the systems boot more quickly and no timeouts appear in the described scenario. (BZ#1666538) The bcc scripts now successfully compile a BPF module During the script code compilation to create a Berkeley Packet Filter (BPF) module, the bcc toolkit used kernel headers for data type definition. Some kernel headers needed the KBUILD_MODNAME macro to be defined. Consequently, those bcc scripts that did not add KBUILD_MODNAME , were likely to fail to compile a BPF module across various CPU architectures. The following bcc scripts were affected: bindsnoop sofdsnoop solisten tcpaccept tcpconnect tcpconnlat tcpdrop tcpretrans tcpsubnet tcptop tcptracer With this update, the problem has been fixed by adding KBUILD_MODNAME to the default cflags parameter for bcc . As a result, this problem no longer appears in the described scenario. Also, customer scripts do not need to define KBUILD_MODNAME themselves either. (BZ#1837906) bcc-tools and bpftrace work properly on IBM Z Previously, a feature backport introduced the ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE kernel option. However, the bcc-tools package and bpftrace tracing language package for IBM Z architectures did not have proper support for this option. Consequently, the bpf() system call failed with the Invalid argument exception and bpftrace failed with an error stating Error loading program when trying to load the BPF program. With this update, the ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE option is now removed. As a result, the problem no longer appears in the described scenario. (BZ#1847837, BZ#1853964) Boot process no longer fails due to lack of entropy Previously, the boot process failed due to lack of entropy. A better mechanism is now used to allow the kernel to gather entropy early in the boot process, which does not depend on any hardware specific interrupts. This update fixes the problem by ensuring availability of sufficient entropy to secure random generation in early boot. As a result, the fix prevents kickstart timeout or slow boots and the boot process works as expected. ( BZ#1778762 ) Repeated reboots using kexec now work as expected Previously, during the kernel reboot on the Amazon EC2 Nitro platform, the remove module ( rmmod ) was not called during the shutdown() call of the kernel execution path. 
Consequently, repeated kernel reboots using the kexec system call led to a failure. With this update, the issue has been fixed by adding the PCI shutdown() handler that allows safe kernel execution. As a result, repeated reboots using kexec on Amazon EC2 Nitro platforms no longer fail. (BZ#1758323)

Repeated reboots using vPMEM memory as dump target now work as expected
Previously, using Virtual Persistent Memory (vPMEM) namespaces as a dump target for kdump or fadump caused the papr_scm module to unmap and remap the memory backed by vPMEM and re-add the memory to its linear map. Consequently, this behavior triggered Hypervisor Calls (HCalls) to the POWER Hypervisor. As a result, the capture kernel boot slowed down considerably, and saving the dump file took a long time. This update fixes the problem, and the boot process now works as expected in the described scenario. (BZ#1792125)

Attempting to add an ICE driver NIC port to a mode 5 bonding master interface no longer fails
Previously, attempting to add the ICE driver NIC port to a mode 5 ( balance-tlb ) bonding master interface led to a failure with an error Master 'bond0', Slave 'ens1f0': Error: Enslave failed . Consequently, you experienced an intermittent failure to add the NIC port to the bonding master interface. This update fixes the issue, and adding the interface no longer fails. (BZ#1791664)

The cxgb4 driver no longer causes a crash in the kdump kernel
Previously, the kdump kernel would crash while trying to save information in the vmcore file. Consequently, the cxgb4 driver prevented the kdump kernel from saving a core for later analysis. To work around this problem, add the novmcoredd parameter to the kdump kernel command line to allow saving core files. With the release of the RHSA-2020:1769 advisory, the kdump kernel handles this situation properly and no longer crashes. (BZ#1708456)

5.4.7. High availability and clusters

When a GFS2 file system is used with the Filesystem agent, the fast_stop option now defaults to no
Previously, when a GFS2 file system was used with the Filesystem agent, the fast_stop option defaulted to yes . This value could result in unnecessary fence events due to the length of time it can take a GFS2 file system to unmount. With this update, this option defaults to no . For all other file systems it continues to default to yes . (BZ#1814896)

fence_compute and fence_evacuate agents now interpret the insecure option in a more standard way
Previously, the fence_compute and fence_evacuate agents worked as if --insecure was specified by default. With this update, customers who do not use valid certificates for their compute or evacuate services must set insecure=true and use the --insecure option when running manually from the CLI. This is consistent with the behavior of all other agents. ( BZ#1830776 )

5.4.8. Dynamic programming languages, web and database servers

Optimized CPU consumption by libdb
An update to the libdb database caused excessive CPU consumption in the trickle thread. With this update, the CPU usage has been optimized. ( BZ#1670768 )

The did_you_mean Ruby gem no longer contains a file with a non-commercial license
Previously, the did_you_mean gem available in the ruby:2.5 module stream contained a file with a non-commercial license. This update removes the affected file.
( BZ#1846113 ) nginx can now load server certificates from hardware security tokens through the PKCS#11 URI The ssl_certificate directive of the nginx web server supports loading TLS server certificates from hardware security tokens directly from PKCS#11 modules. Previously, it was impossible to load server certificates from hardware security tokens through the PKCS#11 URI. ( BZ#1668717 ) 5.4.9. Compilers and development tools The glibc dynamic loader no longer fails while loading a shared library that uses DT_FILTER and has a constructor Prior to this update, a defect in the dynamic loader implementation of shared objects as filters caused the dynamic loader to fail while loading a shared library that uses a filter and has a constructor. With this release, the dynamic loader implementation of filters ( DT_FILTER ) has been fixed to correctly handle such shared libraries. As a result, the dynamic loader now works as expected in the mentioned scenario. ( BZ#1812756 ) glibc can now remove pseudo-mounts from the getmntent() list The kernel includes automount pseudo-entries in the tables exposed to userspace. Consequently, programs that use the getmntent() API see both regular mounts and these pseudo-mounts in the list. The pseudo-mounts do not correspond to real mounts, nor include valid information. With this update, if the mount entry has the ignore mount option present in the automount(8) configuration the glibc library now removes these pseudo-mounts from the getmntent() list. Programs that expect the behavior have to use a different API. (BZ#1743445) The movv1qi pattern no longer causes miscompilation in the auto-vectorized code on IBM Z Prior to this update, wrong load instructions were emitted for the movv1qi pattern. As a consequence, when auto-vectorization was in effect, a miscompilation could occur on IBM Z systems. This update fixes the movv1qi pattern, and as a result, code compiles and runs correctly now. (BZ#1784758) PAPI_event_name_to_code() now works correctly in multiple threads Prior to this update, the PAPI internal code did not handle thread coordination properly. As a consequence, when multiple threads used the PAPI_event_name_to_code() operation, a race condition occurred and the operation failed. This update enhances the handling of multiple threads in the PAPI internal code. As a result, multithreaded code using the PAPI_event_name_to_code() operation now works correctly. (BZ#1807346) Improved performance for the glibc math functions on IBM Power Systems Previously, the glibc math functions performed unnecessary floating point status updates and system calls on IBM Power Systems, which negatively affected the performance. This update removes the unnecessary floating point status update, and improves the implementations of: ceil() , ceilf() , fegetmode() , fesetmode() , fesetenv() , fegetexcept() , feenableexcept() , fedisablexcept() , fegetround() and fesetround() . As a result, the performance of the math library is improved on IBM Power Systems. (BZ#1783303) Memory protection keys are now supported on IBM Power On IBM Power Systems, the memory protection key interfaces pkey_set and pkey_get were previously stub functions, and consequently always failed. This update implements the interfaces, and as a result, the GNU C Library ( glibc ) now supports memory protection keys on IBM Power Systems. 
Note that memory protection keys currently require the hash-based memory management unit (MMU), therefore you might have to boot certain systems with the disable_radix kernel parameter. (BZ#1642150) papi-testsuite and papi-devel now install the required papi-libs package Previously, the papi-testsuite and papi-devel RPM packages did not declare a dependency on the matching papi-libs package. Consequently, the tests failed to run, and developers did not have the required version of the papi shared library available for their applications. With this update, when the user installs either the papi-testsuite or papi-devel packages, the papi-libs package is also installed. As a result, the papi-testsuite now has the correct library allowing the tests to run, and developers using papi-devel have their executables linked with the appropriate version of the papi shared library. ( BZ#1664056 ) Installing the lldb packages for multiple architectures no longer leads to file conflicts Previously, the lldb packages installed architecture-dependent files in architecture-independent locations. As a consequence, installing both 32-bit and 64-bit versions of the packages led to file conflicts. This update packages the files in correct architecture-dependent locations. As a result, the installation of lldb in the described scenario completes successfully. (BZ#1841073) getaddrinfo now correctly handles a memory allocation failure Previously, after a memory allocation failure, the getaddrinfo function of the GNU C Library glibc did not release the internal resolver context. As a consequence, getaddrinfo was not able to reload the /etc/resolv.conf file for the rest of the lifetime of the calling thread, resulting in a possible memory leak. This update modifies the error handling path with an additional release operation for the resolver context. As a result, getaddrinfo reloads /etc/resolv.conf with new configuration values even after an intermittent memory allocation failure. ( BZ#1810146 ) glibc avoids certain failures caused by IFUNC resolver ordering Previously, the implementation of the librt and libpthread libraries of the GNU C Library glibc contained the indirect function (IFUNC) resolvers for the following functions: clock_gettime , clock_getcpuclockid , clock_nanosleep , clock_settime , vfork . In some cases, the IFUNC resolvers could execute before the librt and libpthread libraries were relocated. Consequently, applications would fail in the glibc dynamic loader during early program startup. With this release, the implementations of these functions have been moved into the libc component of glibc , which prevents the described problem from occurring. ( BZ#1748197 ) Assertion failures no longer occur during pthread_create Previously, the glibc dynamic loader did not roll back changes to the internal Thread Local Storage (TLS) module ID counter. As a consequence, an assertion failure in the pthread_create function could occur after the dlopen function had failed in certain ways. With this fix, the glibc dynamic loader updates the TLS module ID counter at a later point in time, after certain failures can no longer happen. As a result, the assertion failures no longer occur. ( BZ#1774115 ) glibc now installs correct dependencies for 32-bit applications using nss_db Previously, the nss_db.x86_64 package did not declare dependencies on the nss_db.i686 package. Therefore automated installation did not install nss_db.i686 on the system, despite having a 32-bit environment glibc.i686 installed. 
As a consequence, 32-bit applications using nss_db failed to perform accurate user database lookups, while 64-bit applications in the same setup worked correctly. With this update, the glibc packages now have weak dependencies that trigger the installation of the nss_db.i686 package when both glibc.i686 and nss_db are installed on the system. As a result, 32-bit applications using nss_db now work correctly, even if the system administrator has not explicitly installed the nss_db.i686 package. ( BZ#1807824 ) glibc locale information updated with Odia language The name of Indian state previously known as Orissa has changed to Odisha, and the name of its official language has changed from Oriya to Odia. With this update, the glibc locale information reflects the new name of the language. ( BZ#1757354 ) LLVM sub packages now install arch-dependent files in arch-dependent locations Previously, LLVM sub packages installed arch-dependent files in arch-independent locations. This resulted in conflicts when installing 32 and 64 bit versions of LLVM. With this update, package files are now correctly installed in arch-dependent locations, avoiding version conflicts. (BZ#1820319) Password and group lookups no longer fail in glibc Previously, the nss_compat module of the glibc library overwrote the errno status with incorrect error codes during processing of password and group entries. Consequently, applications did not resize buffers as expected, causing password and group lookups to fail. This update fixes the problem, and the lookups now complete as expected. ( BZ#1836867 ) 5.4.10. Identity Management SSSD no longer downloads every rule with a wildcard character by default Previously, the ldap_sudo_include_regexp option was incorrectly set to true by default. As a consequence, when SSSD started running or after updating SSSD rules, SSSD downloaded every rule that contained a wildcard character ( * ) in the sudoHost attribute. This update fixes the bug, and the ldap_sudo_include_regexp option is now properly set to false by default. As a result, the described problem no longer occurs. ( BZ#1827615 ) krb5 now only requests permitted encryption types Previously, permitted encryption types specified in the permitted_enctypes variable in the /etc/krb5.conf file did not apply to the default encryption types if the default_tgs_enctypes or default_tkt_enctypes attributes were not set. Consequently, Kerberos clients were able to request deprecated cipher suites like RC4, which may cause other processes to fail. With this update, encryption types specified in the permitted_enctypes variable apply to the default encryption types as well, and only permitted encryption types are requested. The RC4 cipher suite, which has been deprecated in RHEL 8, is the default encryption type for users, services, and trusts between Active Directory (AD) domains in an AD forest. To ensure support for strong AES encryption types between AD domains in an AD forest, see the AD DS: Security: Kerberos "Unsupported etype" error when accessing a resource in a trusted domain Microsoft article. To enable support for the deprecated RC4 encryption type in an IdM server for backwards compatibility with AD, use the update-crypto-policies --set DEFAULT:AD-SUPPORT command. (BZ#1791062) KDCs now correctly enforce password lifetime policy from LDAP backends Previously, non-IPA Kerberos Distribution Centers (KDCs) did not ensure maximum password lifetimes because the Kerberos LDAP backend incorrectly enforced password policies. 
With this update, the Kerberos LDAP backend has been fixed, and password lifetimes behave as expected. ( BZ#1784655 ) Password expiration notifications sent to AD clients using SSSD Previously, Active Directory clients (non-IdM) using SSSD were not sent password expiration notices because of a recent change in the SSSD interface for acquiring Kerberos credentials. The Kerberos interface has been updated and expiration notices are now sent correctly. ( BZ#1820311 ) Directory Server no longer leaks memory when using indirect COS definitions Previously, after processing an indirect Class Of Service (COS) definition, Directory Server leaked memory for each search operation that used an indirect COS definition. With this update, Directory Server frees all internal COS structures associated with the database entry after it has been processed. As a result, the server no longer leaks memory when using indirect COS definitions. ( BZ#1816862 ) Adding ID overrides of AD users now works in IdM Web UI Previously, adding ID overrides of Active Directory (AD) users to Identity Management (IdM) groups in the Default Trust View for the purpose of granting access to management roles failed when using the IdM Web UI. This update fixes the bug. As a result, you can now use both the Web UI as well as the IdM command-line interface (CLI) in this scenario. ( BZ#1651577 ) FreeRADIUS no longer generates certificates during package installation Previously, FreeRADIUS generated certificates during package installation, resulting in the following issues: If FreeRADIUS was installed using Kickstart, certificates might be generated at a time when entropy on the system was insufficient, resulting in either a failed installation or a less secure certificate. The package was difficult to build as part of an image, such as a container, because the package installation occurs on the builder machine instead of the target machine. All instances that are spawned from the image had the same certificate information. It was difficult for an end-user to generate a simple VM in their environment as the certificates would have to be removed and regenerated manually. With this update, the FreeRADIUS installation no longer generates default self-signed CA certificates nor subordinate CA certificates. When FreeRADIUS is launched via systemd : If all of the required certificates are missing, a set of default certificates are generated. If one or more of the expected certificates are present, it does not generate new certificates. ( BZ#1672285 ) FreeRADIUS now generates FIPS-compliant Diffie-Hellman parameters Due to new FIPS requirements that do not allow openssl to generate Diffie-Hellman (dh) parameters via dhparam , the dh parameter generation has been removed from the FreeRADIUS bootstrap scripts and the file, rfc3526-group-18-8192.dhparam , is included with the FreeRADIUS packages for all systems, and thus enables FreeRADIUS to start in FIPS mode. Note that you can customize /etc/raddb/certs/bootstrap and /etc/raddb/certs/Makefile to restore the DH parameter generation if required. ( BZ#1859527 ) Updating Healthcheck now properly updates both ipa-healthcheck-core and ipa-healthcheck Previously, entering yum update healthcheck did not update the ipa-healthcheck package but replaced it with the ipa-healthcheck-core package. As a consequence, the ipa-healthcheck command did not work after the update. 
This update fixes the bug, and updating ipa-healthcheck now correctly updates both the ipa-healthcheck package and the ipa-healthcheck-core package. As a result, the Healthcheck tool works correctly after the update. ( BZ#1852244 ) 5.4.11. Graphics infrastructures Laptops with hybrid Nvidia GPUs can now successfully resume from suspend Previously, the nouveau graphics driver sometimes could not power on hybrid Nvidia GPUs on certain laptops from power-save mode. As a result, the laptops failed to resume from suspend. With this update, several problems in the Runtime Power Management ( runpm ) system have been fixed. As a result, the laptops with hybrid graphics can now successfully resume from suspend. (JIRA:RHELPLAN-57572) 5.4.12. Virtualization Migrating virtual machines with the default CPU model now works more reliably Previously, if a virtual machine (VM) was created without a specific CPU model, QEMU used a default model that was not visible to the libvirt service. As a consequence, it was possible to migrate the VM to a host that did not support the default CPU model of the VM, which sometimes caused crashes and incorrect behavior in the guest OS after the migration. With this update, libvirt explicitly uses the qemu64 model as default in the XML configuration of the VM. As a result, if the user attempts migrating a VM with the default CPU model to a host that does not support that model, libvirt correctly generates an error message. Note, however, that Red Hat strongly recommends using a specific CPU model for your VMs. (JIRA:RHELPLAN-45906) 5.4.13. Containers Notes on FIPS support with Podman The Federal Information Processing Standard (FIPS) requires certified modules to be used. Previously, Podman correctly installed certified modules in containers by enabling the proper flags at startup. However, in this release, Podman does not properly set up the additional application helpers normally provided by the system in the form of the FIPS system-wide crypto-policy. Although setting the system-wide crypto-policy is not required by the certified modules it does improve the ability of applications to use crypto modules in compliant ways. To work around this problem, change your container to run the update-crypto-policies --set FIPS command before any other application code was executed. The update-crypto-policies --set FIPS command is no longer required with this fix. ( BZ#1804193 ) 5.5. Technology Previews This part provides a list of all Technology Previews available in Red Hat Enterprise Linux 8.3. For information on Red Hat scope of support for Technology Preview features, see Technology Preview Features Support Scope . 5.5.1. Networking Enabled the xt_u32 Netfilter module The xt_u32 Netfilter module is now available in the kernel-modules-extra rpm. This module helps in packet forwarding based on the data that is inaccessible to other protocol-based packet filters and thus eases manual migration to nftables . However, xt_u32 Netfilter module is not supported by Red Hat. (BZ#1834769) nmstate available as a Technology Preview Nmstate is a network API for hosts. The nmstate packages, available as a Technology Preview, provide a library and the nmstatectl command-line utility to manage host network settings in a declarative manner. The networking state is described by a pre-defined schema. Reporting of the current state and changes to the desired state both conform to the schema. 
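As a quick illustration of this declarative workflow, a minimal sketch of typical nmstatectl usage follows; the interface name and YAML file are hypothetical, and the exact fields should be taken from the documented schema and shipped examples:

    # Report the current network state as YAML
    nmstatectl show

    # Apply a desired state described in a YAML file
    # (for example, a file that sets a hypothetical interface eth1 to "up" with DHCP)
    nmstatectl apply eth1-up.yml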
For further details, see the /usr/share/doc/nmstate/README.md file and the examples in the /usr/share/doc/nmstate/examples directory. (BZ#1674456) AF_XDP available as a Technology Preview Address Family eXpress Data Path ( AF_XDP ) socket is designed for high-performance packet processing. It accompanies XDP and grants efficient redirection of programmatically selected packets to user space applications for further processing. (BZ#1633143) XDP available as a Technology Preview The eXpress Data Path (XDP) feature, which is available as a Technology Preview, provides a means to attach extended Berkeley Packet Filter (eBPF) programs for high-performance packet processing at an early point in the kernel ingress data path, allowing efficient programmable packet analysis, filtering, and manipulation. (BZ#1503672) KTLS available as a Technology Preview In Red Hat Enterprise Linux 8, Kernel Transport Layer Security (KTLS) is provided as a Technology Preview. KTLS handles TLS records using the symmetric encryption or decryption algorithms in the kernel for the AES-GCM cipher. KTLS also provides the interface for offloading TLS record encryption to Network Interface Controllers (NICs) that support this functionality. (BZ#1570255) XDP features that are available as Technology Preview Red Hat provides the usage of the following eXpress Data Path (XDP) features as unsupported Technology Preview: Loading XDP programs on architectures other than AMD and Intel 64-bit. Note that the libxdp library is not available for architectures other than AMD and Intel 64-bit. The XDP_TX and XDP_REDIRECT return codes. The XDP hardware offloading. Before using this feature, see Unloading XDP programs on Netronome network cards that use the nfp driver fails . ( BZ#1889737 ) act_mpls module available as a Technology Preview The act_mpls module is now available in the kernel-modules-extra rpm as a Technology Preview. The module allows the application of Multiprotocol Label Switching (MPLS) actions with Traffic Control (TC) filters, for example, push and pop MPLS label stack entries with TC filters. The module also allows the Label, Traffic Class, Bottom of Stack, and Time to Live fields to be set independently. (BZ#1839311) Multipath TCP is now available as a Technology Preview Multipath TCP (MPTCP), an extension to TCP, is now available as a Technology Preview. MPTCP improves resource usage within the network and resilience to network failure. For example, with Multipath TCP on the RHEL server, smartphones with MPTCP v1 enabled can connect to an application running on the server and switch between Wi-Fi and cellular networks without interrupting the connection to the server. Note that either the applications running on the server must natively support MPTCP or administrators must load an eBPF program into the kernel to dynamically change IPPROTO_TCP to IPPROTO_MPTCP . For further details see, Getting started with Multipath TCP . (JIRA:RHELPLAN-41549) The systemd-resolved service is now available as a Technology Preview The systemd-resolved service provides name resolution to local applications. The service implements a caching and validating DNS stub resolver, an Link-Local Multicast Name Resolution (LLMNR), and Multicast DNS resolver and responder. Note that, even if the systemd package provides systemd-resolved , this service is an unsupported Technology Preview. (BZ#1906489) 5.5.2. 
Kernel The kexec fast reboot feature is available as Technology Preview The kexec fast reboot feature continues to be available as a Technology Preview. kexec fast reboot significantly speeds the boot process by allowing the kernel to boot directly into the second kernel without passing through the Basic Input/Output System (BIOS) first. To use this feature: Load the kexec kernel manually. Reboot the operating system. ( BZ#1769727 ) eBPF available as a Technology Preview Extended Berkeley Packet Filter (eBPF) is an in-kernel virtual machine that allows code execution in the kernel space, in the restricted sandbox environment with access to a limited set of functions. The virtual machine includes a new system call bpf() , which supports creating various types of maps, and also allows to load programs in a special assembly-like code. The code is then loaded to the kernel and translated to the native machine code with just-in-time compilation. Note that the bpf() syscall can be successfully used only by a user with the CAP_SYS_ADMIN capability, such as the root user. See the bpf (2) man page for more information. The loaded programs can be attached onto a variety of points (sockets, tracepoints, packet reception) to receive and process data. There are numerous components shipped by Red Hat that utilize the eBPF virtual machine. Each component is in a different development phase, and thus not all components are currently fully supported. All components are available as a Technology Preview, unless a specific component is indicated as supported. The following notable eBPF components are currently available as a Technology Preview: bpftrace , a high-level tracing language that utilizes the eBPF virtual machine. AF_XDP , a socket for connecting the eXpress Data Path (XDP) path to user space for applications that prioritize packet processing performance. (BZ#1559616) The igc driver available as a Technology Preview for RHEL 8 The igc Intel 2.5G Ethernet Linux wired LAN driver is now available on all architectures for RHEL 8 as a Technology Preview. The ethtool utility also supports igc wired LANs. (BZ#1495358) Soft-RoCE available as a Technology Preview Remote Direct Memory Access (RDMA) over Converged Ethernet (RoCE) is a network protocol which implements RDMA over Ethernet. Soft-RoCE is the software implementation of RoCE which supports two protocol versions, RoCE v1 and RoCE v2. The Soft-RoCE driver, rdma_rxe , is available as an unsupported Technology Preview in RHEL 8. (BZ#1605216) 5.5.3. File systems and storage NVMe/TCP is available as a Technology Preview Accessing and sharing Nonvolatile Memory Express (NVMe) storage over TCP/IP networks (NVMe/TCP) and its corresponding nvme-tcp.ko and nvmet-tcp.ko kernel modules have been added as a Technology Preview. The use of NVMe/TCP as either a storage client or a target is manageable with tools provided by the nvme-cli and nvmetcli packages. The NVMe/TCP target Technology Preview is included only for testing purposes and is not currently planned for full support. (BZ#1696451) File system DAX is now available for ext4 and XFS as a Technology Preview In Red Hat Enterprise Linux 8, file system DAX is available as a Technology Preview. DAX provides a means for an application to directly map persistent memory into its address space. 
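Before the detailed requirements that follow, a minimal sketch of the typical DAX workflow, assuming a hypothetical NVDIMM namespace exposed as /dev/pmem0:

    # Create a DAX-capable file system on the persistent memory device (device name is hypothetical)
    mkfs.ext4 /dev/pmem0

    # Mount it with the dax option so that mmap() of its files maps storage directly
    mkdir -p /mnt/pmem
    mount -o dax /dev/pmem0 /mnt/pmem

    # Verify that the dax option is active
    mount | grep /mnt/pmem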
To use DAX, a system must have some form of persistent memory available, usually in the form of one or more Non-Volatile Dual In-line Memory Modules (NVDIMMs), and a file system that supports DAX must be created on the NVDIMM(s). Also, the file system must be mounted with the dax mount option. Then, an mmap of a file on the dax-mounted file system results in a direct mapping of storage into the application's address space. (BZ#1627455)

OverlayFS
OverlayFS is a type of union file system. It enables you to overlay one file system on top of another. Changes are recorded in the upper file system, while the lower file system remains unmodified. This allows multiple users to share a file-system image, such as a container or a DVD-ROM, where the base image is on read-only media.

OverlayFS remains a Technology Preview under most circumstances. As such, the kernel logs warnings when this technology is activated.

Full support is available for OverlayFS when used with supported container engines ( podman , cri-o , or buildah ) under the following restrictions:

OverlayFS is supported for use only as a container engine graph driver or for other specialized use cases, such as squashed kdump initramfs. Its use is supported primarily for container COW content, not for persistent storage. You must place any persistent storage on non-OverlayFS volumes.
You can use only the default container engine configuration: one level of overlay, one lowerdir, and both lower and upper levels on the same file system.
Only XFS is currently supported for use as a lower layer file system.

Additionally, the following rules and limitations apply to using OverlayFS:

The OverlayFS kernel ABI and user-space behavior are not considered stable, and might change in future updates.
OverlayFS provides a restricted set of the POSIX standards. Test your application thoroughly before deploying it with OverlayFS. The following cases are not POSIX-compliant:
Lower files opened with O_RDONLY do not receive st_atime updates when the files are read.
Lower files opened with O_RDONLY , then mapped with MAP_SHARED , are inconsistent with subsequent modification.
Fully compliant st_ino or d_ino values are not enabled by default on RHEL 8, but you can enable full POSIX compliance for them with a module option or mount option. To get consistent inode numbering, use the xino=on mount option. You can also use the redirect_dir=on and index=on options to improve POSIX compliance. These two options make the format of the upper layer incompatible with an overlay without these options. That is, you might get unexpected results or errors if you create an overlay with redirect_dir=on or index=on , unmount the overlay, then mount the overlay without these options.
To determine whether an existing XFS file system is eligible for use as an overlay, use a command such as xfs_info on its mount point and check whether the ftype=1 option is enabled.
SELinux security labels are enabled by default in all supported container engines with OverlayFS.
Several known issues are associated with OverlayFS in this release. For details, see Non-standard behavior in the Linux kernel documentation .

For more information about OverlayFS, see the Linux kernel documentation . (BZ#1690207)

Stratis is now available as a Technology Preview
Stratis is a new local storage manager. It provides managed file systems on top of pools of storage with additional features to the user; a brief example of the basic workflow is sketched below.
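A minimal sketch of that basic workflow with the stratis command-line utility, assuming a spare block device and hypothetical names, with stratisd running:

    # Create a pool from a spare block device (device name is hypothetical)
    stratis pool create mypool /dev/sdb

    # Create a file system in the pool and mount it
    stratis filesystem create mypool myfs
    mount /dev/stratis/mypool/myfs /mnt/stratis

    # Take a snapshot of the file system
    stratis filesystem snapshot mypool myfs myfs-snapshot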
Stratis enables you to more easily perform storage tasks such as: Manage snapshots and thin provisioning Automatically grow file system sizes as needed Maintain file systems To administer Stratis storage, use the stratis utility, which communicates with the stratisd background service. Stratis is provided as a Technology Preview. For more information, see the Stratis documentation: Setting up Stratis file systems . RHEL 8.3 updates Stratis to version 2.1.0. For more information, see Stratis 2.1.0 Release Notes . (JIRA:RHELPLAN-1212) IdM now supports setting up a Samba server on an IdM domain member as a Technology Preview With this update, you can now set up a Samba server on an Identity Management (IdM) domain member. The new ipa-client-samba utility provided by the same-named package adds a Samba-specific Kerberos service principal to IdM and prepares the IdM client. For example, the utility creates the /etc/samba/smb.conf with the ID mapping configuration for the sss ID mapping back end. As a result, administrators can now set up Samba on an IdM domain member. Due to IdM Trust Controllers not supporting the Global Catalog Service, AD-enrolled Windows hosts cannot find IdM users and groups in Windows. Additionally, IdM Trust Controllers do not support resolving IdM groups using the Distributed Computing Environment / Remote Procedure Calls (DCE/RPC) protocols. As a consequence, AD users can only access the Samba shares and printers from IdM clients. For details, see Setting up Samba on an IdM domain member . (JIRA:RHELPLAN-13195) 5.5.4. High availability and clusters Local mode version of pcs cluster setup command available as a technology preview By default, the pcs cluster setup command automatically synchronizes all configuration files to the cluster nodes. In Red Hat Enterprise Linux 8.3, the pcs cluster setup command provides the --corosync-conf option as a technology preview. Specifying this option switches the command to local mode. In this mode, pcs creates a corosync.conf file and saves it to a specified file on the local node only, without communicating with any other node. This allows you to create a corosync.conf file in a script and handle that file by means of the script. ( BZ#1839637 ) Pacemaker podman bundles available as a Technology Preview Pacemaker container bundles now run on the podman container platform, with the container bundle feature being available as a Technology Preview. There is one exception to this feature being Technology Preview: Red Hat fully supports the use of Pacemaker bundles for Red Hat Openstack. (BZ#1619620) Heuristics in corosync-qdevice available as a Technology Preview Heuristics are a set of commands executed locally on startup, cluster membership change, successful connect to corosync-qnetd , and, optionally, on a periodic basis. When all commands finish successfully on time (their return error code is zero), heuristics have passed; otherwise, they have failed. The heuristics result is sent to corosync-qnetd where it is used in calculations to determine which partition should be quorate. ( BZ#1784200 ) New fence-agents-heuristics-ping fence agent As a Technology Preview, Pacemaker now supports the fence_heuristics_ping agent. This agent aims to open a class of experimental fence agents that do no actual fencing by themselves but instead exploit the behavior of fencing levels in a new way. 
If the heuristics agent is configured on the same fencing level as the fence agent that does the actual fencing but is configured before that agent in sequence, fencing issues an off action on the heuristics agent before it attempts to do so on the agent that does the fencing. If the heuristics agent gives a negative result for the off action it is already clear that the fencing level is not going to succeed, causing Pacemaker fencing to skip the step of issuing the off action on the agent that does the fencing. A heuristics agent can exploit this behavior to prevent the agent that does the actual fencing from fencing a node under certain conditions. A user might want to use this agent, especially in a two-node cluster, when it would not make sense for a node to fence the peer if it can know beforehand that it would not be able to take over the services properly. For example, it might not make sense for a node to take over services if it has problems reaching the networking uplink, making the services unreachable to clients, a situation which a ping to a router might detect in that case. (BZ#1775847) 5.5.5. Identity Management Identity Management JSON-RPC API available as Technology Preview An API is available for Identity Management (IdM). To view the API, IdM also provides an API browser as Technology Preview. In Red Hat Enterprise Linux 7.3, the IdM API was enhanced to enable multiple versions of API commands. Previously, enhancements could change the behavior of a command in an incompatible way. Users are now able to continue using existing tools and scripts even if the IdM API changes. This enables: Administrators to use or later versions of IdM on the server than on the managing client. Developers to use a specific version of an IdM call, even if the IdM version changes on the server. In all cases, the communication with the server is possible, regardless if one side uses, for example, a newer version that introduces new options for a feature. For details on using the API, see Using the Identity Management API to Communicate with the IdM Server (TECHNOLOGY PREVIEW) . ( BZ#1664719 ) DNSSEC available as Technology Preview in IdM Identity Management (IdM) servers with integrated DNS now support DNS Security Extensions (DNSSEC), a set of extensions to DNS that enhance security of the DNS protocol. DNS zones hosted on IdM servers can be automatically signed using DNSSEC. The cryptographic keys are automatically generated and rotated. Users who decide to secure their DNS zones with DNSSEC are advised to read and follow these documents: DNSSEC Operational Practices, Version 2: http://tools.ietf.org/html/rfc6781#section-2 Secure Domain Name System (DNS) Deployment Guide: http://dx.doi.org/10.6028/NIST.SP.800-81-2 DNSSEC Key Rollover Timing Considerations: http://tools.ietf.org/html/rfc7583 Note that IdM servers with integrated DNS use DNSSEC to validate DNS answers obtained from other DNS servers. This might affect the availability of DNS zones that are not configured in accordance with recommended naming practices. ( BZ#1664718 ) 5.5.6. Desktop GNOME for the 64-bit ARM architecture available as a Technology Preview The GNOME desktop environment is now available for the 64-bit ARM architecture as a Technology Preview. This enables administrators to configure and manage servers from a graphical user interface (GUI) remotely, using the VNC session. As a consequence, new administration applications are available on the 64-bit ARM architecture. 
For example: Disk Usage Analyzer ( baobab ), Firewall Configuration ( firewall-config ), Red Hat Subscription Manager ( subscription-manager ), or the Firefox web browser. Using Firefox , administrators can connect to the local Cockpit daemon remotely. (JIRA:RHELPLAN-27394, BZ#1667225, BZ#1667516, BZ#1724302) GNOME desktop on IBM Z is available as a Technology Preview The GNOME desktop, including the Firefox web browser, is now available as a Technology Preview on the IBM Z architecture. You can now connect to a remote graphical session running GNOME using VNC to configure and manage your IBM Z servers. (JIRA:RHELPLAN-27737) 5.5.7. Graphics infrastructures VNC remote console available as a Technology Preview for the 64-bit ARM architecture On the 64-bit ARM architecture, the Virtual Network Computing (VNC) remote console is available as a Technology Preview. Note that the rest of the graphics stack is currently unverified for the 64-bit ARM architecture. (BZ#1698565) Intel Tiger Lake graphics available as a Technology Preview Intel Tiger Lake UP3 and UP4 Xe graphics are now available as a Technology Preview. To enable hardware acceleration with Intel Tiger Lake graphics, add the following option on the kernel command line: In this option, replace pci-id with one of the following: The PCI ID of your Intel GPU The * character to enable the i915 driver with all alpha-quality hardware (BZ#1783396) 5.5.8. Red Hat Enterprise Linux system roles The postfix role of RHEL system roles available as a Technology Preview Red Hat Enterprise Linux system roles provides a configuration interface for Red Hat Enterprise Linux subsystems, which makes system configuration easier through the inclusion of Ansible Roles. This interface enables managing system configurations across multiple versions of Red Hat Enterprise Linux, as well as adopting new major releases. The rhel-system-roles packages are distributed through the AppStream repository. The postfix role is available as a Technology Preview. The following roles are fully supported: kdump network selinux storage timesync For more information, see the Knowledgebase article about RHEL system roles . ( BZ#1812552 ) 5.5.9. Virtualization KVM virtualization is usable in RHEL 8 Hyper-V virtual machines As a Technology Preview, nested KVM virtualization can now be used on the Microsoft Hyper-V hypervisor. As a result, you can create virtual machines on a RHEL 8 guest system running on a Hyper-V host. Note that currently, this feature only works on Intel systems. In addition, nested virtualization is in some cases not enabled by default on Hyper-V. To enable it, see the following Microsoft documentation: https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/user-guide/nested-virtualization (BZ#1519039) AMD SEV for KVM virtual machines As a Technology Preview, RHEL 8 introduces the Secure Encrypted Virtualization (SEV) feature for AMD EPYC host machines that use the KVM hypervisor. If enabled on a virtual machine (VM), SEV encrypts VM memory so that the host cannot access data on the VM. This increases the security of the VM if the host is successfully infected by malware. Note that the number of VMs that can use this feature at a time on a single host is determined by the host hardware. Current AMD EPYC processors support up to 509 running VMs using SEV. Also note that for VMs with SEV configured to be able to boot, you must also configure the VM with a hard memory limit. 
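To do so, add a hard memory limit to the VM's XML configuration. A minimal sketch, assuming the standard libvirt memtune element (N is a placeholder value in KiB):

    <memtune>
      <hard_limit unit='KiB'>N</hard_limit>
    </memtune>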
The recommended value for N is equal to or greater than the guest RAM + 256 MiB. For example, if the guest is assigned 2 GiB RAM, N should be 2359296 or greater. (BZ#1501618, BZ#1501607, JIRA:RHELPLAN-7677)

Intel vGPU
As a Technology Preview, it is now possible to divide a physical Intel GPU device into multiple virtual devices referred to as mediated devices . These mediated devices can then be assigned to multiple virtual machines (VMs) as virtual GPUs. As a result, these VMs share the performance of a single physical Intel GPU. Note that only selected Intel GPUs are compatible with the vGPU feature. In addition, assigning a physical GPU to VMs makes it impossible for the host to use the GPU, and may prevent graphical display output on the host from working. (BZ#1528684)

Creating nested virtual machines
Nested KVM virtualization is provided as a Technology Preview for KVM virtual machines (VMs) running on AMD64 and IBM Z hosts with RHEL 8. With this feature, a RHEL 7 or RHEL 8 VM that runs on a physical RHEL 8 host can act as a hypervisor, and host its own VMs. Note that in RHEL 8.2 and later, nested virtualization is fully supported for VMs running on an Intel 64 host. (JIRA:RHELPLAN-14047, JIRA:RHELPLAN-24437)

Select Intel network adapters now support SR-IOV in RHEL guests on Hyper-V
As a Technology Preview, Red Hat Enterprise Linux guest operating systems running on a Hyper-V hypervisor can now use the single-root I/O virtualization (SR-IOV) feature for Intel network adapters supported by the ixgbevf and iavf drivers. This feature is enabled when the following conditions are met:

SR-IOV support is enabled for the network interface controller (NIC)
SR-IOV support is enabled for the virtual NIC
SR-IOV support is enabled for the virtual switch
The virtual function (VF) from the NIC is attached to the virtual machine

The feature is currently supported with Microsoft Windows Server 2019 and 2016. (BZ#1348508)

5.5.10. Containers

podman container image is available as a Technology Preview
The registry.redhat.io/rhel8/podman container image is a containerized implementation of the podman package. The podman tool is used for managing containers and images, volumes mounted into those containers, and pods made from groups of containers. Podman is based on the libpod library for container lifecycle management. The libpod library provides APIs for managing containers, pods, container images, and volumes. This container image allows you to create, modify, and run container images without the need to install the podman package on your system. The use case does not cover running this image in rootless mode as a non-root user. To pull the registry.redhat.io/rhel8/podman container image, you need an active Red Hat Enterprise Linux subscription. ( BZ#1627899 )

crun is available as a Technology Preview
The crun OCI runtime has been added to the container-tools:rhel8 module. crun provides the ability to run containers with cgroups v2. crun also supports an annotation that allows the container to access the rootless user's additional groups. This is useful for mounting volumes into a directory to which the user only has group access, or a directory with the setgid bit set. A brief usage sketch follows at the end of this section. (BZ#1841438)

The podman-machine command is unsupported
The podman-machine command for managing virtual machines is available only as a Technology Preview. Instead, run Podman directly from the command line. (JIRA:RHELDOCS-16861)
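As a quick illustration of selecting the crun runtime described above, a minimal sketch, assuming the crun binary is installed at its usual path and using a UBI 8 image:

    # Run a container with the crun OCI runtime explicitly selected
    # (the binary is typically installed at /usr/bin/crun)
    podman --runtime /usr/bin/crun run --rm registry.access.redhat.com/ubi8/ubi cat /proc/self/cgroup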
Deprecated functionality This part provides an overview of functionality that has been deprecated in Red Hat Enterprise Linux 8. Deprecated devices are fully supported, which means that they are tested and maintained, and their support status remains unchanged within Red Hat Enterprise Linux 8. However, these devices will likely not be supported in future major version releases, and are not recommended for new deployments on the current or future major versions of RHEL. For the most recent list of deprecated functionality within a particular major release, see the latest version of release documentation. For information about the length of support, see Red Hat Enterprise Linux Life Cycle and Red Hat Enterprise Linux Application Streams Life Cycle . A package can be deprecated and not recommended for further use. Under certain circumstances, a package can be removed from the product. Product documentation then identifies more recent packages that offer functionality similar, identical, or more advanced to the one deprecated, and provides further recommendations. For information regarding functionality that is present in RHEL 7 but has been removed in RHEL 8, see Considerations in adopting RHEL 8 . For information regarding functionality that is present in RHEL 8 but has been removed in RHEL 9, see Considerations in adopting RHEL 9 . 5.6.1. Installer and image creation Several Kickstart commands and options have been deprecated Using the following commands and options in RHEL 8 Kickstart files will print a warning in the logs. auth or authconfig device deviceprobe dmraid install lilo lilocheck mouse multipath bootloader --upgrade ignoredisk --interactive partition --active reboot --kexec Where only specific options are listed, the base command and its other options are still available and not deprecated. For more details and related changes in Kickstart, see the Kickstart changes section of the Considerations in adopting RHEL 8 document. (BZ#1642765) The --interactive option of the ignoredisk Kickstart command has been deprecated Using the --interactive option in future releases of Red Hat Enterprise Linux will result in a fatal installation error. It is recommended that you modify your Kickstart file to remove the option. (BZ#1637872) lorax-composer back end for Image Builder is deprecated in RHEL 8 The back end lorax-composer for Image Builder is considered deprecated. It will only receive select fixes for the rest of the Red Hat Enterprise Linux 8 life cycle and will be omitted from future major releases. Red Hat recommends that you uninstall the lorax-composer back end and install the osbuild-composer back end instead. See Composing a customized RHEL system image for more details. ( BZ#1893767 ) 5.6.2. Software management rpmbuild --sign is deprecated With this update, the rpmbuild --sign command has become deprecated. Using this command in future releases of Red Hat Enterprise Linux can result in an error. It is recommended that you use the rpmsign command instead. ( BZ#1688849 ) 5.6.3. Shells and command-line tools Metalink support for curl has been disabled. A flaw was found in curl functionality in the way it handles credentials and file hash mismatches for content downloaded using the Metalink. This flaw allows malicious actors controlling a hosting server to trick users into downloading malicious content, or to gain unauthorized access to provided credentials without the user's knowledge. The highest threat from this vulnerability is confidentiality and integrity.
To avoid this, the Metalink support for curl has been disabled from Red Hat Enterprise Linux 8.2.0.z. As a workaround, execute the following command after the Metalink file is downloaded: For example: (BZ#1999620) 5.6.4. Infrastructure services mailman is deprecated With this update, the mailman packages have been marked as deprecated and will not be available in the future major releases of Red Hat Enterprise Linux. (BZ#1890976) 5.6.5. Security NSS SEED ciphers are deprecated The Mozilla Network Security Services ( NSS ) library will not support TLS cipher suites that use a SEED cipher in a future release. To ensure smooth transition of deployments that rely on SEED ciphers when NSS removes support, Red Hat recommends enabling support for other cipher suites. Note that SEED ciphers are already disabled by default in RHEL. ( BZ#1817533 ) TLS 1.0 and TLS 1.1 are deprecated The TLS 1.0 and TLS 1.1 protocols are disabled in the DEFAULT system-wide cryptographic policy level. If your scenario, for example, a video conferencing application in the Firefox web browser, requires using the deprecated protocols, switch the system-wide cryptographic policy to the LEGACY level: For more information, see the Strong crypto defaults in RHEL 8 and deprecation of weak crypto algorithms Knowledgebase article on the Red Hat Customer Portal and the update-crypto-policies(8) man page. ( BZ#1660839 ) DSA is deprecated in RHEL 8 The Digital Signature Algorithm (DSA) is considered deprecated in Red Hat Enterprise Linux 8. Authentication mechanisms that depend on DSA keys do not work in the default configuration. Note that OpenSSH clients do not accept DSA host keys even in the LEGACY system-wide cryptographic policy level. (BZ#1646541) SSL2 Client Hello has been deprecated in NSS The Transport Layer Security ( TLS ) protocol version 1.2 and earlier allow starting a negotiation with a Client Hello message formatted in a way that is backward compatible with the Secure Sockets Layer ( SSL ) protocol version 2. Support for this feature in the Network Security Services ( NSS ) library has been deprecated and it is disabled by default. Applications that require support for this feature need to use the new SSL_ENABLE_V2_COMPATIBLE_HELLO API to enable it. Support for this feature may be removed completely in future releases of Red Hat Enterprise Linux 8. (BZ#1645153) TPM 1.2 is deprecated The Trusted Platform Module (TPM) secure cryptoprocessor standard was updated to version 2.0 in 2016. TPM 2.0 provides many improvements over TPM 1.2, and it is not backward compatible with the previous version. TPM 1.2 is deprecated in RHEL 8, and it might be removed in the next major release. (BZ#1657927) 5.6.6. Networking Network scripts are deprecated in RHEL 8 Network scripts are deprecated in Red Hat Enterprise Linux 8 and they are no longer provided by default. The basic installation provides a new version of the ifup and ifdown scripts which call the NetworkManager service through the nmcli tool. In Red Hat Enterprise Linux 8, to run the ifup and the ifdown scripts, NetworkManager must be running. Note that custom commands in /sbin/ifup-local , ifdown-pre-local and ifdown-local scripts are not executed. If any of these scripts are required, the installation of the deprecated network scripts in the system is still possible with the following command: The ifup and ifdown scripts link to the installed legacy network scripts. Calling the legacy network scripts shows a warning about their deprecation. (BZ#1647725) 5.6.7.
Kernel Installing RHEL for Real Time 8 using diskless boot is now deprecated Diskless booting allows multiple systems to share a root file system via the network. While convenient, diskless boot is prone to introducing network latency in realtime workloads. With a future minor update of RHEL for Real Time 8, the diskless booting feature will no longer be supported. ( BZ#1748980 ) The qla3xxx driver is deprecated The qla3xxx driver has been deprecated in RHEL 8. The driver will likely not be supported in future major releases of this product, and thus it is not recommended for new deployments. (BZ#1658840) The dl2k , dnet , ethoc , and dlci drivers are deprecated The dl2k , dnet , ethoc , and dlci drivers have been deprecated in RHEL 8. The drivers will likely not be supported in future major releases of this product, and thus they are not recommended for new deployments. (BZ#1660627) The rdma_rxe Soft-RoCE driver is deprecated Software Remote Direct Memory Access over Converged Ethernet (Soft-RoCE), also known as RXE, is a feature that emulates Remote Direct Memory Access (RDMA). In RHEL 8, the Soft-RoCE feature is available as an unsupported Technology Preview. However, due to stability issues, this feature has been deprecated and will be removed in RHEL 9. (BZ#1878207) 5.6.8. File systems and storage The elevator kernel command line parameter is deprecated The elevator kernel command line parameter was used in earlier RHEL releases to set the disk scheduler for all devices. In RHEL 8, the parameter is deprecated. The upstream Linux kernel has removed support for the elevator parameter, but it is still available in RHEL 8 for compatibility reasons. Note that the kernel selects a default disk scheduler based on the type of device. This is typically the optimal setting. If you require a different scheduler, Red Hat recommends that you use udev rules or the Tuned service to configure it. Match the selected devices and switch the scheduler only for those devices. For more information, see Setting the disk scheduler . (BZ#1665295) LVM mirror is deprecated The LVM mirror segment type is now deprecated. Support for mirror will be removed in a future major release of RHEL. Red Hat recommends that you use LVM RAID 1 devices with a segment type of raid1 instead of mirror . The raid1 segment type is the default RAID configuration type and replaces mirror as the recommended solution. To convert mirror devices to raid1 , see Converting a mirrored LVM device to a RAID1 logical volume . LVM mirror has several known issues. For details, see known issues in file systems and storage . (BZ#1827628) peripety is deprecated The peripety package is deprecated since RHEL 8.3. The Peripety storage event notification daemon parses system storage logs into structured storage events. It helps you investigate storage issues. ( BZ#1871953 ) NFSv3 over UDP has been disabled The NFS server no longer opens or listens on a User Datagram Protocol (UDP) socket by default. This change affects only NFS version 3 because version 4 requires the Transmission Control Protocol (TCP). NFS over UDP is no longer supported in RHEL 8. (BZ#1592011) cramfs has been deprecated Due to lack of users, the cramfs kernel module is deprecated. squashfs is recommended as an alternative solution. (BZ#1794513) 5.6.9. Identity Management openssh-ldap has been deprecated The openssh-ldap subpackage has been deprecated in Red Hat Enterprise Linux 8 and will be removed in RHEL 9. 
As the openssh-ldap subpackage is not maintained upstream, Red Hat recommends using SSSD and the sss_ssh_authorizedkeys helper, which integrate better with other IdM solutions and are more secure. By default, the SSSD ldap and ipa providers read the sshPublicKey LDAP attribute of the user object, if available. Note that you cannot use the default SSSD configuration for the ad provider or IdM trusted domains to retrieve SSH public keys from Active Directory (AD), since AD does not have a default LDAP attribute to store a public key. To allow the sss_ssh_authorizedkeys helper to get the key from SSSD, enable the ssh responder by adding ssh to the services option in the sssd.conf file. See the sssd.conf(5) man page for details. To allow sshd to use sss_ssh_authorizedkeys , add the AuthorizedKeysCommand /usr/bin/sss_ssh_authorizedkeys and AuthorizedKeysCommandUser nobody options to the /etc/ssh/sshd_config file as described by the sss_ssh_authorizedkeys(1) man page. ( BZ#1871025 ) DES and 3DES encryption types have been removed Due to security reasons, the Data Encryption Standard (DES) algorithm has been deprecated and disabled by default since RHEL 7. With the recent rebase of Kerberos packages, single-DES (DES) and triple-DES (3DES) encryption types have been removed from RHEL 8. If you have configured services or users to only use DES or 3DES encryption, you might experience service interruptions such as: Kerberos authentication errors unknown enctype encryption errors Kerberos Key Distribution Centers (KDCs) with DES-encrypted Database Master Keys ( K/M ) fail to start Perform the following actions to prepare for the upgrade: Check if your KDC uses DES or 3DES encryption with the krb5check open source Python scripts. See krb5check on GitHub. If you are using DES or 3DES encryption with any Kerberos principals, re-key them with a supported encryption type, such as Advanced Encryption Standard (AES). For instructions on re-keying, see Retiring DES from MIT Kerberos Documentation. Test independence from DES and 3DES by temporarily setting the following Kerberos options before upgrading: In /var/kerberos/krb5kdc/kdc.conf on the KDC, set supported_enctypes and do not include des or des3 . For every host, in /etc/krb5.conf and any files in /etc/krb5.conf.d , set allow_weak_crypto to false . It is false by default. For every host, in /etc/krb5.conf and any files in /etc/krb5.conf.d , set permitted_enctypes , default_tgs_enctypes , and default_tkt_enctypes and do not include des or des3 . If you do not experience any service interruptions with the test Kerberos settings from the previous step, remove them and upgrade. You do not need those settings after upgrading to the latest Kerberos packages. ( BZ#1877991 ) The SMB1 protocol is deprecated in Samba Starting with Samba 4.11, the insecure Server Message Block version 1 (SMB1) protocol is deprecated and will be removed in a future release. To improve security, SMB1 is disabled by default in the Samba server and client utilities. (JIRA:RHELDOCS-16612) 5.6.10. Desktop The libgnome-keyring library has been deprecated The libgnome-keyring library has been deprecated in favor of the libsecret library, as libgnome-keyring is not maintained upstream, and does not follow the necessary cryptographic policies for RHEL. The new libsecret library is the replacement that follows the necessary security standards.
(BZ#1607766) The AlternateTab extension has been removed The gnome-shell-extension-alternate-tab package, which provides the AlternateTab GNOME Shell extension, has been removed. To configure the window-switching behavior, set a keyboard shortcut in keyboard settings. For more information, see the following article: Using Alternate-Tab in Gnome 3.32 or later . (BZ#1922488) 5.6.11. Graphics infrastructures AGP graphics cards are no longer supported Graphics cards using the Accelerated Graphics Port (AGP) bus are not supported in Red Hat Enterprise Linux 8. Use graphics cards with a PCI-Express bus as the recommended replacement. (BZ#1569610) 5.6.12. The web console The web console no longer supports incomplete translations The RHEL web console no longer provides translations for languages that have translations available for less than 50 % of the Console's translatable strings. If the browser requests translation to such a language, the user interface will be in English instead. ( BZ#1666722 ) 5.6.13. Red Hat Enterprise Linux System Roles The geoipupdate package has been deprecated The geoipupdate package requires a third-party subscription and it also downloads proprietary content. Therefore, the geoipupdate package has been deprecated, and will be removed in a future major RHEL version. (BZ#1874892) 5.6.14. Virtualization SPICE has been deprecated The SPICE remote display protocol has become deprecated. As a result, SPICE will remain supported in RHEL 8, but Red Hat recommends using alternate solutions for remote display streaming: For remote console access, use the VNC protocol. For advanced remote display functions, use third party tools such as RDP, HP RGS, or Mechdyne TGX. Note that the QXL graphics device, which is used by SPICE, has become deprecated as well. (BZ#1849563) virt-manager has been deprecated The Virtual Machine Manager application, also known as virt-manager , has been deprecated. The RHEL 8 web console, also known as Cockpit , is intended to become its replacement in a subsequent release. It is, therefore, recommended that you use the web console for managing virtualization in a GUI. Note, however, that some features available in virt-manager may not yet be available in the RHEL 8 web console. (JIRA:RHELPLAN-10304) Virtual machine snapshots are not properly supported in RHEL 8 The current mechanism of creating virtual machine (VM) snapshots has been deprecated, as it is not working reliably. As a consequence, it is recommended not to use VM snapshots in RHEL 8. Note that a new VM snapshot mechanism is under development and will be fully implemented in a future minor release of RHEL 8. ( BZ#1686057 ) The Cirrus VGA virtual GPU type has been deprecated With a future major update of Red Hat Enterprise Linux, the Cirrus VGA GPU device will no longer be supported in KVM virtual machines. Therefore, Red Hat recommends using the stdvga or virtio-vga devices instead of Cirrus VGA. (BZ#1651994) 5.6.15. Containers Podman varlink-based REST API V1 has been deprecated The Podman varlink-based REST API V1 has been deprecated upstream in favor of the new Podman REST API V2. This functionality will be removed in a later release of Red Hat Enterprise Linux 8. (JIRA:RHELPLAN-60226) 5.6.16.
Deprecated packages The following packages have been deprecated and will probably not be included in a future major release of Red Hat Enterprise Linux: 389-ds-base-legacy-tools authd custodia hostname libidn lorax-composer mercurial net-tools network-scripts nss-pam-ldapd sendmail yp-tools ypbind ypserv 5.7. Known issues This part describes known issues in Red Hat Enterprise Linux 8.3. 5.7.1. Installer and image creation The auth and authconfig Kickstart commands require the AppStream repository The authselect-compat package is required by the auth and authconfig Kickstart commands during installation. Without this package, the installation fails if auth or authconfig are used. However, by design, the authselect-compat package is only available in the AppStream repository. To work around this problem, verify that the BaseOS and AppStream repositories are available to the installer or use the authselect Kickstart command during installation. (BZ#1640697) The reboot --kexec and inst.kexec commands do not provide a predictable system state Performing a RHEL installation with the reboot --kexec Kickstart command or the inst.kexec kernel boot parameters does not provide the same predictable system state as a full reboot. As a consequence, switching to the installed system without rebooting can produce unpredictable results. Note that the kexec feature is deprecated and will be removed in a future release of Red Hat Enterprise Linux. (BZ#1697896) Network access is not enabled by default in the installation program Several installation features require network access, for example, registration of a system using the Content Delivery Network (CDN), NTP server support, and network installation sources. However, network access is not enabled by default, and as a result, these features cannot be used until network access is enabled. To work around this problem, add ip=dhcp to boot options to enable network access when the installation starts. Optionally, passing a Kickstart file or a repository located on the network using boot options also resolves the problem. As a result, the network-based installation features can be used. (BZ#1757877) The new osbuild-composer back end does not replicate the blueprint state from lorax-composer on upgrades For Image Builder users who are upgrading from the lorax-composer back end to the new osbuild-composer back end, blueprints can disappear. As a result, once the upgrade is complete, the blueprints do not display automatically. To work around this problem, perform the following steps. Prerequisites You have the composer-cli CLI utility installed. Procedure Run the command to load the lorax-composer based blueprints into the new osbuild-composer back end: As a result, the same blueprints are now available in the osbuild-composer back end. Additional resources For more details about this Known Issue, see the Image Builder blueprints are no longer present following an update to Red Hat Enterprise Linux 8.3 article. ( BZ#1897383 ) Self-signed HTTPS server cannot be used in Kickstart installation Currently, the installer fails to install from a self-signed https server when the installation source is specified in the kickstart file and the --noverifyssl option is used. To work around this problem, append the inst.noverifyssl parameter to the kernel command line when starting the kickstart installation.
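For example, with an installation source entry such as the following in the Kickstart file, boot the installer with inst.noverifyssl appended to the kernel command line. This is a hedged sketch rather than the exact example from the original note; the server name and path are placeholders:

# Kickstart installation source (hypothetical location):
url --url=https://server.example.com/rhel8-install/ --noverifyssl
# Parameter appended to the installer kernel command line:
inst.noverifyssl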
(BZ#1745064) GUI installation might fail if an attempt to unregister using the CDN is made before the repository refresh is completed Since RHEL 8.2, when registering your system and attaching subscriptions using the Content Delivery Network (CDN), a refresh of the repository metadata is started by the GUI installation program. The refresh process is not part of the registration and subscription process, and as a consequence, the Unregister button is enabled in the Connect to Red Hat window. Depending on the network connection, the refresh process might take more than a minute to complete. If you click the Unregister button before the refresh process is completed, the GUI installation might fail as the unregister process removes the CDN repository files and the certificates required by the installation program to communicate with the CDN. To work around this problem, complete the following steps in the GUI installation after you have clicked the Register button in the Connect to Red Hat window: From the Connect to Red Hat window, click Done to return to the Installation Summary window. From the Installation Summary window, verify that the Installation Source and Software Selection status messages in italics are not displaying any processing information. When the Installation Source and Software Selection categories are ready, click Connect to Red Hat . Click the Unregister button. After performing these steps, you can safely unregister the system during the GUI installation. (BZ#1821192) Registration fails for user accounts that belong to multiple organizations Currently, when you attempt to register a system with a user account that belongs to multiple organizations, the registration process fails with the error message You must specify an organization for new units . To work around this problem, you can use one of the following options: Use a different user account that does not belong to multiple organizations. Use the Activation Key authentication method available in the Connect to Red Hat feature for GUI and Kickstart installations. Skip the registration step in Connect to Red Hat and use Subscription Manager to register your system post-installation. ( BZ#1822880 ) RHEL installer fails to start when InfiniBand network interfaces are configured using installer boot options When you configure InfiniBand network interfaces at an early stage of RHEL installation using installer boot options (for example, to download the installer image using a PXE server), the installer fails to activate the network interfaces. This issue occurs because the RHEL NetworkManager fails to recognize the network interfaces in InfiniBand mode, and instead configures Ethernet connections for the interfaces. As a result, connection activation fails, and if the connectivity over the InfiniBand interface is required at an early stage, RHEL installer fails to start the installation. To work around this issue, create new installation media that includes the updated Anaconda and NetworkManager packages by using the Lorax tool. For more information about creating new installation media that includes the updated Anaconda and NetworkManager packages by using the Lorax tool, see Unable to install Red Hat Enterprise Linux 8.3.0 with InfiniBand network interfaces (BZ#1890261) Anaconda installation fails when an NVDIMM device namespace is set to devdax mode. Anaconda installation fails with a traceback after booting with an NVDIMM device namespace set to devdax mode before the GUI installation.
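To work around this problem, reconfigure the NVDIMM device so that the namespace uses a mode other than devdax before the installation begins. For example, the following hedged sketch switches an existing namespace to sector mode with ndctl; the namespace name is a placeholder:

# ndctl create-namespace --force --reconfig=namespace0.0 --mode=sector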
As a result, you can proceed with the installation. (BZ#1891827) Local Media installation source is not detected when booting the installation from a USB that is created using a third party tool When booting the RHEL installation from a USB that is created using a third party tool, the installer fails to detect the Local Media installation source (only 'Red Hat CDN' is detected). This issue occurs because the default boot option inst.stage2= attempts to search for the iso9660 image format. However, a third party tool might create an ISO image with a different format. As a workaround, use one of the following solutions: When booting the installation, press the Tab key to edit the kernel command line, and change the boot option inst.stage2= to inst.repo= . To create a bootable USB device on Windows, use Fedora Media Writer. When using a third party tool like Rufus to create a bootable USB device, first regenerate the RHEL ISO image on a Linux system, and then use the third party tool to create a bootable USB device. For more information on the steps involved in performing any of the specified workarounds, see Installation media is not auto detected during the installation of RHEL 8.3 (BZ#1877697) Anaconda now shows a dialog for ldl or unformatted DASD disks in text mode Previously, during an installation in text mode, Anaconda failed to show a dialog for Linux disk layout ( ldl ) or unformatted Direct-Access Storage Device (DASD) disks. As a result, users were unable to utilize those disks for the installation. With this update, in text mode Anaconda recognizes ldl and unformatted DASD disks and shows a dialog where users can format them properly for later use in the installation. (BZ#1874394) Red Hat Insights client fails to register the operating system when using the graphical installer Currently, the installation fails with an error at the end, which points to the Insights client. To work around this problem, uncheck the Connect to Red Hat Insights option during the Connect to Red Hat step before registering the system in the installer. As a result, you can complete the installation and register to Insights afterwards by using this command: ( BZ#1931069 ) 5.7.2. Subscription management syspurpose addons have no effect on the subscription-manager attach --auto output. In Red Hat Enterprise Linux 8, four attributes of the syspurpose command-line tool have been added: role , usage , service_level_agreement and addons . Currently, only role , usage and service_level_agreement affect the output of running the subscription-manager attach --auto command. Users who attempt to set values to the addons argument will not observe any effect on the subscriptions that are auto-attached. ( BZ#1687900 ) 5.7.3. Infrastructure services libmaxminddb-devel-debuginfo.rpm is removed when running dnf update When running the dnf update command, the binary mmdblookup tool is moved from the libmaxminddb-devel subpackage to the main libmaxminddb package. Consequently, the libmaxminddb-devel-debuginfo.rpm is removed, which might create a broken update path for this package. To work around this problem, remove the libmaxminddb-devel-debuginfo package prior to the execution of the dnf update command. Note: libmaxminddb-debuginfo is the new debuginfo package. (BZ#1642001) 5.7.4.
Security Users can run sudo commands as locked users In systems where sudoers permissions are defined with the ALL keyword, sudo users with permissions can run sudo commands as users whose accounts are locked. Consequently, locked and expired accounts can still be used to execute commands. To work around this problem, enable the newly implemented runas_check_shell option together with proper settings of valid shells in /etc/shells . This prevents attackers from running commands under system accounts such as bin . (BZ#1786990) GnuTLS fails to resume current session with the NSS server When resuming a TLS (Transport Layer Security) 1.3 session, the GnuTLS client waits 60 milliseconds plus an estimated round trip time for the server to send session resumption data. If the server does not send the resumption data within this time, the client creates a new session instead of resuming the current session. This incurs no serious adverse effects except for a minor performance impact on a regular session negotiation. ( BZ#1677754 ) libselinux-python is available only through its module The libselinux-python package contains only Python 2 bindings for developing SELinux applications and it is used for backward compatibility. For this reason, libselinux-python is no longer available in the default RHEL 8 repositories through the dnf install libselinux-python command. To work around this problem, enable both the libselinux-python and python27 modules, and install the libselinux-python package and its dependencies with the following commands: Alternatively, install libselinux-python using its install profile with a single command: As a result, you can install libselinux-python using the respective module. (BZ#1666328) udica processes UBI 8 containers only when started with --env container=podman The Red Hat Universal Base Image 8 (UBI 8) containers set the container environment variable to the oci value instead of the podman value. This prevents the udica tool from analyzing a container JavaScript Object Notation (JSON) file. To work around this problem, start a UBI 8 container using a podman command with the --env container=podman parameter. As a result, udica can generate an SELinux policy for a UBI 8 container only when you use the described workaround. ( BZ#1763210 ) Negative effects of the default logging setup on performance The default logging environment setup might consume 4 GB of memory or even more and adjustments of rate-limit values are complex when systemd-journald is running with rsyslog . See the Negative effects of the RHEL default logging setup on performance and their mitigations Knowledgebase article for more information. (JIRA:RHELPLAN-10431) File permissions of /etc/passwd- are not aligned with the CIS RHEL 8 Benchmark 1.0.0 Because of an issue with the CIS Benchmark, the remediation of the SCAP rule that ensures permissions on the /etc/passwd- backup file configures permissions to 0644 . However, the CIS Red Hat Enterprise Linux 8 Benchmark 1.0.0 requires file permissions 0600 for that file. As a consequence, the file permissions of /etc/passwd- are not aligned with the benchmark after remediation. ( BZ#1858866 ) SELINUX=disabled in /etc/selinux/config does not work properly Disabling SELinux using the SELINUX=disabled option in the /etc/selinux/config results in a process in which the kernel boots with SELinux enabled and switches to disabled mode later in the boot process. This might cause memory leaks. 
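To work around this problem, if your scenario really requires completely disabling SELinux, disable it by adding the selinux=0 parameter to the kernel command line, as described in the Changing SELinux modes at boot time section of the Using SELinux title. For example, a hedged sketch that uses grubby to apply the parameter to all installed kernels:

# grubby --update-kernel=ALL --args="selinux=0"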
(JIRA:RHELPLAN-34199) ssh-keyscan cannot retrieve RSA keys of servers in FIPS mode The SHA-1 algorithm is disabled for RSA signatures in FIPS mode, which prevents the ssh-keyscan utility from retrieving RSA keys of servers operating in that mode. To work around this problem, use ECDSA keys instead, or retrieve the keys locally from the /etc/ssh/ssh_host_rsa_key.pub file on the server. ( BZ#1744108 ) OpenSSL incorrectly handles PKCS #11 tokens that do not support raw RSA or RSA-PSS signatures The OpenSSL library does not detect key-related capabilities of PKCS #11 tokens. Consequently, establishing a TLS connection fails when a signature is created with a token that does not support raw RSA or RSA-PSS signatures. To work around the problem, add the following lines after the .include line at the end of the crypto_policy section in the /etc/pki/tls/openssl.cnf file: As a result, a TLS connection can be established in the described scenario. (BZ#1685470) OpenSSL in FIPS mode accepts only specific D-H parameters In FIPS mode, Transport Layer Security (TLS) clients that use OpenSSL return a bad dh value error and abort TLS connections to servers that use manually generated parameters. This is because OpenSSL, when configured to work in compliance with FIPS 140-2, works only with D-H parameters compliant to NIST SP 800-56A rev3 Appendix D (groups 14, 15, 16, 17, and 18 defined in RFC 3526 and with groups defined in RFC 7919). Also, servers that use OpenSSL ignore all other parameters and instead select known parameters of similar size. To work around this problem, use only the compliant groups. (BZ#1810911) Removing the rpm-plugin-selinux package leads to removing all selinux-policy packages from the system Removing the rpm-plugin-selinux package disables SELinux on the machine. It also removes all selinux-policy packages from the system. Repeated installation of the rpm-plugin-selinux package then installs the selinux-policy-minimum SELinux policy, even if the selinux-policy-targeted policy was previously present on the system. However, the repeated installation does not update the SELinux configuration file to account for the change in policy. As a consequence, SELinux is disabled even upon reinstallation of the rpm-plugin-selinux package. To work around this problem: Enter the umount /sys/fs/selinux/ command. Manually install the missing selinux-policy-targeted package. Edit the /etc/selinux/config file so that the policy is equal to SELINUX=enforcing . Enter the command load_policy -i . As a result, SELinux is enabled and running the same policy as before. (BZ#1641631) systemd service cannot execute commands from arbitrary paths The systemd service cannot execute commands from arbitrary paths such as /home/user/bin because the SELinux policy package does not include any such rule. Consequently, custom services that are executed from non-system paths fail, and Access Vector Cache (AVC) denial audit messages are eventually logged when SELinux denies access. To work around this problem, do one of the following: Execute the command using a shell script with the -c option. Execute the command from a common path using the /bin , /sbin , /usr/sbin , /usr/local/bin , and /usr/local/sbin common directories.
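For the first option, a minimal sketch of a systemd unit directive is shown below; the shell wrapper approach is the point of the example, and the application path is hypothetical:

ExecStart=/usr/bin/bash -c 'exec /home/user/bin/myapp'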
( BZ#1860443 ) rpm_verify_permissions fails in the CIS profile The rpm_verify_permissions rule compares file permissions to package default permissions. However, the Center for Internet Security (CIS) profile, which is provided by the scap-security-guide packages, changes some file permissions to be more strict than default. As a consequence, verification of certain files using rpm_verify_permissions fails. To work around this problem, manually verify that these files have the following permissions: /etc/cron.d (0700) /etc/cron.hourly (0700) /etc/cron.monthly (0700) /etc/crontab (0600) /etc/cron.weekly (0700) /etc/cron.daily (0700) ( BZ#1843913 ) Kickstart uses org_fedora_oscap instead of com_redhat_oscap in RHEL 8 The Kickstart references the Open Security Content Automation Protocol (OSCAP) Anaconda add-on as org_fedora_oscap instead of com_redhat_oscap which might cause confusion. That is done to preserve backward compatibility with Red Hat Enterprise Linux 7. (BZ#1665082) Certain sets of interdependent rules in SSG can fail Remediation of SCAP Security Guide (SSG) rules in a benchmark can fail due to undefined ordering of rules and their dependencies. If two or more rules need to be executed in a particular order, for example, when one rule installs a component and another rule configures the same component, they can run in the wrong order and remediation reports an error. To work around this problem, run the remediation twice, and the second run fixes the dependent rules. ( BZ#1750755 ) OSCAP Anaconda Addon does not install all packages in text mode The OSCAP Anaconda Addon plugin cannot modify the list of packages selected for installation by the system installer if the installation is running in text mode. Consequently, when a security policy profile is specified using Kickstart and the installation is running in text mode, any additional packages required by the security policy are not installed during installation. To work around this problem, either run the installation in graphical mode or specify all packages that are required by the security policy profile in the security policy in the %packages section in your Kickstart file. As a result, packages that are required by the security policy profile are not installed during RHEL installation without one of the described workarounds, and the installed system is not compliant with the given security policy profile. ( BZ#1674001 ) OSCAP Anaconda Addon does not correctly handle customized profiles The OSCAP Anaconda Addon plugin does not properly handle security profiles with customizations in separate files. Consequently, the customized profile is not available in the RHEL graphical installation even when you properly specify it in the corresponding Kickstart section. To work around this problem, follow the instructions in the Creating a single SCAP data stream from an original DS and a tailoring file Knowledgebase article. As a result of this workaround, you can use a customized SCAP profile in the RHEL graphical installation. (BZ#1691305) OSPP-based profiles are incompatible with GUI package groups. GNOME packages installed by the Server with GUI package group require the nfs-utils package that is not compliant with the Operating System Protection Profile (OSPP). 
As a consequence, selecting the Server with GUI package group during the installation of a system with OSPP or OSPP-based profiles, for example, Security Technical Implementation Guide (STIG), OpenSCAP displays a warning that the selected package group is not compatible with the security policy. If the OSPP-based profile is applied after the installation, the system is not bootable. To work around this problem, do not install the Server with GUI package group or any other groups that install GUI when using the OSPP profile and OSPP-based profiles. When you use the Server or Minimal Install package groups instead, the system installs without issues and works correctly. ( BZ#1787156 ) Installation with the Server with GUI or Workstation software selections and CIS security profile is not possible The CIS security profile is not compatible with the Server with GUI and Workstation software selections. As a consequence, a RHEL 8 installation with the Server with GUI software selection and CIS profile is not possible. An attempted installation using the CIS profile and either of these software selections will generate the error message: To work around the problem, do not use the CIS security profile with the Server with GUI or Workstation software selections. ( BZ#1843932 ) Remediating service-related rules during kickstart installations might fail During a kickstart installation, the OpenSCAP utility sometimes incorrectly shows that a service enable or disable state remediation is not needed. Consequently, OpenSCAP might set the services on the installed system to a non-compliant state. As a workaround, you can scan and remediate the system after the kickstart installation. This will fix the service-related issues. ( BZ#1834716 ) Certain rsyslog priority strings do not work correctly Support for the GnuTLS priority string for imtcp that allows fine-grained control over encryption is not complete. Consequently, the following priority strings do not work properly in rsyslog : To work around this problem, use only correctly working priority strings: As a result, current configurations must be limited to the strings that work correctly. ( BZ#1679512 ) crypto-policies incorrectly allow Camellia ciphers The RHEL 8 system-wide cryptographic policies should disable Camellia ciphers in all policy levels, as stated in the product documentation. However, the Kerberos protocol enables the ciphers by default. To work around the problem, apply the NO-CAMELLIA subpolicy: In the command, replace DEFAULT with the cryptographic level name if you have switched from DEFAULT previously. As a result, Camellia ciphers are correctly disallowed across all applications that use system-wide crypto policies only when you disable them through the workaround. ( BZ#1919155 ) 5.7.5. Networking The iptables utility now requests module loading for commands that update a chain regardless of the NLM_F_CREATE flag Previously, when setting a chain's policy, the iptables-nft utility generated a NEWCHAIN message but did not set the NLM_F_CREATE flag. As a consequence, the RHEL 8 kernel did not load any modules and the resulting update chain command failed if the associated kernel modules were not manually loaded. With this update, the iptables-nft utility now requests module loading for all commands that update a chain and users are able to set a chain's policy using the iptables-nft utility without manually loading the associated modules. 
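For example, setting a chain's policy with the iptables-nft utility now triggers the module loading automatically; a minimal sketch:

# iptables -P FORWARD DROP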
(BZ#1812666) Support for updating packet/byte counters in the kernel was changed incorrectly between RHEL 7 and RHEL 8 When an ipset with enabled counters is referenced from an iptables rule that specifies additional constraints on matching ipset entries, the ipset counters are updated only if all the additional constraints match. This is also problematic with --packets-gt or --bytes-gt constraints. As a result, when migrating an iptables ruleset from RHEL 7 to RHEL 8, the rules involving ipset lookups may stop working and need to be adjusted. To work around this problem, avoid using the --packets-gt or --bytes-gt options and replace them with the --packets-lt or --bytes-lt options. (BZ#1806882) Unloading XDP programs fails on Netronome network cards that use the nfp driver The nfp driver for Netronome network cards contains a bug. Therefore, unloading eXpress Data Path (XDP) programs fails if you use such cards and load the XDP program using the IFLA_XDP_EXPECTED_FD feature with the XDP_FLAGS_REPLACE flag. For example, this bug affects XDP programs that are loaded using the libxdp library. Currently, there is no workaround available for the problem. ( BZ#1880268 ) Anaconda does not have network access when using DHCP in the ip boot option The initial RAM disk ( initrd ) uses NetworkManager to manage networking. The dracut NetworkManager module provided by the RHEL 8.3 ISO file incorrectly assumes that the first field of the ip option in the Anaconda boot options is always set. As a consequence, if you use DHCP and set ip=::::<host_name>::dhcp , NetworkManager does not retrieve an IP address, and the network is not available in Anaconda. You have the following options to work around the problem: Set the first field in the ip option to . (period). Note that this workaround will not work in future versions of RHEL when the problem has been fixed. Re-create the boot.iso file using the latest packages from the BaseOS repository that contain a fix for the bug. Note that Red Hat does not support self-created ISO files. As a result, RHEL retrieves an IP address from the DHCP server, and network access is available in Anaconda. (BZ#1902791) 5.7.6. Kernel The tboot-1.9.12-2 utility causes a boot failure in RHEL 8 The tboot utility of version 1.9.12-2 causes some systems with Trusted Platform Module (TPM) 2.0 to fail to boot in legacy mode. As a consequence, the system halts once it attempts to boot from the tboot Grand Unified Bootloader (GRUB) entry. To work around this problem, downgrade to tboot version 1.9.10. (BZ#1947839) The kernel returns false positive warnings on IBM Z systems In RHEL 8, IBM Z systems are missing a whitelist entry for the ZONE_DMA memory zone to allow user access. Consequently, the kernel returns false positive warnings such as: The warnings appear when accessing certain system information through the sysfs interface, for example, by running the debuginfo.sh script. To work around this problem, add the hardened_usercopy=off parameter to the kernel command line. As a result, no warning messages are displayed in the described scenario. (BZ#1660290) The rngd service busy wait causes total CPU consumption in FIPS mode A new kernel entropy source for FIPS mode has been added for kernels starting with version 4.18.0-193.10. Consequently, when in FIPS mode, the rngd service busy waits on the poll() system call for the /dev/random device, thereby causing consumption of 100% of CPU time.
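To work around this problem, stop and disable rngd by running the following commands (assuming the systemd unit is named rngd, as referenced above):

# systemctl stop rngd
# systemctl disable rngd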
As a result, rngd no longer busy waits on poll() in the described scenario. (BZ#1884857) softirq changes can cause the localhost interface to drop UDP packets when under heavy load Changes in the Linux kernel's software interrupt ( softirq ) handling are done to reduce denial of service (DOS) effects. Consequently, this leads to situations where the localhost interface drops User Datagram Protocol (UDP) packets under heavy load. To work around this problem, increase the size of the network device backlog buffer to 6000. In Red Hat tests, this value was sufficient to prevent packet loss. More heavily loaded systems might require larger backlog values. Increased backlogs have the effect of potentially increasing latency on the localhost interface. The result is to increase the buffer and allow more packets to be waiting for processing, which reduces the chances of dropping localhost packets. (BZ#1779337) A vmcore capture fails after memory hot-plug or unplug operation After performing the memory hot-plug or hot-unplug operation, the event comes after the device tree, which contains the memory layout information, is updated. Thereby, the makedumpfile utility tries to access a non-existent physical address. The problem appears if all of the following conditions are met: A little-endian variant of IBM Power System runs RHEL 8. The kdump or fadump service is enabled on the system. Consequently, the capture kernel fails to save vmcore if a kernel crash is triggered after the memory hot-plug or hot-unplug operation. To work around this problem, restart the kdump service by running the systemctl restart kdump command after the hot-plug or hot-unplug operation. As a result, vmcore is successfully saved in the described scenario. (BZ#1793389) Using irqpoll causes vmcore generation failure Due to an existing problem with the nvme driver on the 64-bit ARM architectures that run on the Amazon Web Services (AWS) cloud platforms, the vmcore generation fails when you provide the irqpoll kernel command line parameter to the first kernel. Consequently, no vmcore file is dumped in the /var/crash/ directory after a kernel crash. To work around this problem: Add irqpoll to the KDUMP_COMMANDLINE_REMOVE key in the /etc/sysconfig/kdump file. Restart the kdump service by running the systemctl restart kdump command. As a result, the first kernel boots correctly and the vmcore file is expected to be captured upon the kernel crash. Note that the kdump service can use a significant amount of crash kernel memory to dump the vmcore file. Ensure that the capture kernel has sufficient memory available for the kdump service. (BZ#1654962) Debug kernel fails to boot in crash capture environment in RHEL 8 Due to the memory-demanding nature of the debug kernel, a problem occurs when the debug kernel is in use and a kernel panic is triggered. As a consequence, the debug kernel is not able to boot as the capture kernel, and a stack trace is generated instead. To work around this problem, increase the crash kernel memory accordingly. As a result, the debug kernel successfully boots in the crash capture environment. (BZ#1659609) zlib may slow down a vmcore capture in some compression functions The kdump configuration file uses the lzo compression format ( makedumpfile -l ) by default. When you modify the configuration file to use the zlib compression format ( makedumpfile -c ), it is likely to bring a better compression factor at the expense of slowing down the vmcore capture process.
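For example, the compression method is selected through the core_collector line in /etc/kdump.conf; the dump level and message level shown below are common defaults and the lines serve only as a sketch:

# lzo compression (default):
core_collector makedumpfile -l --message-level 1 -d 31
# zlib compression:
core_collector makedumpfile -c --message-level 1 -d 31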
As a consequence, it takes kdump up to four times longer to capture a vmcore with zlib , as compared to lzo . As a result, Red Hat recommends using the default lzo for cases where speed is the main driving factor. However, if the target machine is low on available space, zlib is a better option. (BZ#1790635) The HP NMI watchdog does not always generate a crash dump In certain cases, the hpwdt driver for the HP NMI watchdog is not able to claim a non-maskable interrupt (NMI) generated by the HPE watchdog timer because the NMI was instead consumed by the perfmon driver. The missing NMI is initiated by one of two conditions: The Generate NMI button on the Integrated Lights-Out (iLO) server management software. This button is triggered by a user. The hpwdt watchdog. The expiration by default sends an NMI to the server. Both sequences typically occur when the system is unresponsive. Under normal circumstances, the NMI handler for both these situations calls the kernel panic() function and if configured, the kdump service generates a vmcore file. Because of the missing NMI, however, kernel panic() is not called and vmcore is not collected. In the first case (1.), if the system was unresponsive, it remains so. To work around this scenario, use the virtual Power button to reset or power cycle the server. In the second case (2.), the missing NMI is followed 9 seconds later by a reset from the Automated System Recovery (ASR). The HPE Gen9 Server line experiences this problem in single-digit percentages. The Gen10 line experiences it at an even smaller frequency. (BZ#1602962) The tuned-adm profile powersave command causes the system to become unresponsive Executing the tuned-adm profile powersave command leads to an unresponsive state of the Penguin Valkyrie 2000 2-socket systems with the older Thunderx (CN88xx) processors. Consequently, you must reboot the system to resume working. To work around this problem, avoid using the powersave profile if your system matches the mentioned specifications. (BZ#1609288) The default 7 4 1 7 printk value sometimes causes temporary system unresponsiveness The default 7 4 1 7 printk value allows for better debugging of the kernel activity. However, when coupled with a serial console, this printk setting can cause intense I/O bursts that can lead to a RHEL system becoming temporarily unresponsive. To work around this problem, we have added a new optimize-serial-console TuneD profile, which reduces the default printk value to 4 4 1 7 . Users can instrument their system as follows: Having a lower printk value persistent across a reboot reduces the likelihood of system hangs. Note that this setting change comes at the expense of losing the extra debugging information. For more information about the newly added feature, see A new optimize-serial-console TuneD profile to reduce I/O to serial consoles by lowering the printk value . (JIRA:RHELPLAN-28940) The kernel ACPI driver reports it has no access to a PCIe ECAM memory region The Advanced Configuration and Power Interface (ACPI) table provided by firmware does not define a memory region on the PCI bus in the Current Resource Settings (_CRS) method for the PCI bus device. Consequently, the following warning message occurs during the system boot: However, the kernel is still able to access the 0x30000000-0x31ffffff memory region, and can assign that memory region to the PCI Enhanced Configuration Access Mechanism (ECAM) properly.
You can verify that PCI ECAM works correctly by accessing the PCIe configuration space beyond the 256-byte offset, which produces output similar to the following: As a result, you can ignore the warning message. For more information about the problem, see the "Firmware Bug: ECAM area mem 0x30000000-0x31ffffff not reserved in ACPI namespace" appears during system boot solution. (BZ#1868526) The OPEN MPI library may trigger run-time failures with default PML In the OPEN Message Passing Interface (OPEN MPI) implementation 4.0.x series, Unified Communication X (UCX) is the default point-to-point communicator (PML). Later versions of the OPEN MPI 4.0.x series deprecated the openib Byte Transfer Layer (BTL). However, when OPEN MPI is run over a homogeneous cluster (same hardware and software configuration), UCX still uses the openib BTL for MPI one-sided operations. As a consequence, this may trigger execution errors. To work around this problem, run the mpirun command using the following parameters, where the -mca btl openib parameter disables the openib BTL, the -mca pml ucx parameter configures OPEN MPI to use the ucx PML, and the -x UCX_NET_DEVICES= parameter restricts UCX to use the specified devices: When OPEN MPI is run over a heterogeneous cluster (different hardware and software configuration), it uses UCX as the default PML. As a consequence, this may cause the OPEN MPI jobs to run with erratic performance, unresponsive behavior, or crash failures. To work around this problem, set the UCX priority by running the mpirun command using the following parameters: As a result, the OPEN MPI library is able to choose an alternative available transport layer over UCX. (BZ#1866402) 5.7.7. File systems and storage The /boot file system cannot be placed on LVM You cannot place the /boot file system on an LVM logical volume. This limitation exists for the following reasons: On EFI systems, the EFI System Partition conventionally serves as the /boot file system. The uEFI standard requires a specific GPT partition type and a specific file system type for this partition. RHEL 8 uses the Boot Loader Specification (BLS) for system boot entries. This specification requires that the /boot file system is readable by the platform firmware. On EFI systems, the platform firmware can read only the /boot configuration defined by the uEFI standard. The support for LVM logical volumes in the GRUB 2 boot loader is incomplete. Red Hat does not plan to improve the support because the number of use cases for the feature is decreasing due to standards such as uEFI and BLS. Red Hat does not plan to support /boot on LVM. Instead, Red Hat provides tools for managing system snapshots and rollback that do not need the /boot file system to be placed on an LVM logical volume. (BZ#1496229) LVM no longer allows creating volume groups with mixed block sizes LVM utilities such as vgcreate or vgextend no longer allow you to create volume groups (VGs) where the physical volumes (PVs) have different logical block sizes. LVM has adopted this change because file systems fail to mount if you extend the underlying logical volume (LV) with a PV of a different block size. To re-enable creating VGs with mixed block sizes, set the allow_mixed_block_sizes=1 option in the lvm.conf file. ( BZ#1768536 ) Limitations of LVM writecache The writecache LVM caching method has the following limitations, which are not present in the cache method: You cannot name a writecache logical volume when using pvmove commands. You cannot use logical volumes with writecache in combination with thin pools or VDO.
The following limitation also applies to the cache method: You cannot resize a logical volume while cache or writecache is attached to it. (JIRA:RHELPLAN-27987, BZ#1798631 , BZ#1808012) LVM mirror devices that store a LUKS volume sometimes become unresponsive Mirrored LVM devices with a segment type of mirror that store a LUKS volume might become unresponsive under certain conditions. The unresponsive devices reject all I/O operations. To work around the issue, Red Hat recommends that you use LVM RAID 1 devices with a segment type of raid1 instead of mirror if you need to stack LUKS volumes on top of resilient software-defined storage. The raid1 segment type is the default RAID configuration type and replaces mirror as the recommended solution. To convert mirror devices to raid1 , see Converting a mirrored LVM device to a RAID1 device . (BZ#1730502) An NFS 4.0 patch can result in reduced performance under an open-heavy workload Previously, a bug was fixed that, in some cases, could cause an NFS open operation to overlook the fact that a file had been removed or renamed on the server. However, the fix may cause slower performance with workloads that require many open operations. To work around this problem, it might help to use NFS version 4.1 or higher, which have been improved to grant delegations to clients in more cases, allowing clients to perform open operations locally, quickly, and safely. (BZ#1748451) 5.7.8. Dynamic programming languages, web and database servers getpwnam() might fail when called by a 32-bit application When a user of NIS uses a 32-bit application that calls the getpwnam() function, the call fails if the nss_nis.i686 package is missing. To work around this problem, manually install the missing package by using the yum install nss_nis.i686 command. ( BZ#1803161 ) Symbol conflicts between OpenLDAP libraries might cause crashes in httpd When both the libldap and libldap_r libraries provided by OpenLDAP are loaded and used within a single process, symbol conflicts between these libraries might occur. Consequently, Apache httpd child processes using the PHP ldap extension might terminate unexpectedly if the mod_security or mod_auth_openidc modules are also loaded by the httpd configuration. With this update to the Apache Portable Runtime (APR) library, you can work around the problem by setting the APR_DEEPBIND environment variable, which enables the use of the RTLD_DEEPBIND dynamic linker option when loading httpd modules. When the APR_DEEPBIND environment variable is enabled, crashes no longer occur in httpd configurations that load conflicting libraries. (BZ#1819607) PAM plug-in does not work in MariaDB MariaDB 10.3 provides the Pluggable Authentication Modules (PAM) plug-in version 1.0. The MariaDB PAM plug-in version 1.0 does not work in RHEL 8. To work around this problem, use the PAM plug-in version 2.0 provided by the mariadb:10.5 module stream, which is available with RHEL 8.4. ( BZ#1942330 ) 5.7.9. Identity Management Installing KRA fails if all KRA members are hidden replicas The ipa-kra-install utility fails on a cluster where the Key Recovery Authority (KRA) is already present, if the first KRA instance is installed on a hidden replica. Consequently, you cannot add further KRA instances to the cluster. To work around this problem, unhide the hidden replica that has the KRA role before you add new KRA instances. You can hide it again when ipa-kra-install completes successfully. 
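For example, a hedged sketch of unhiding a replica with the ipa server-state command and hiding it again after the installation completes; the host name is a placeholder:

$ ipa server-state replica.idm.example.com --state=enabled
$ ipa server-state replica.idm.example.com --state=hidden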
( BZ#1816784 ) Using the cert-fix utility with the --agent-uid pkidbuser option breaks Certificate System Using the cert-fix utility with the --agent-uid pkidbuser option corrupts the LDAP configuration of Certificate System. As a consequence, Certificate System might become unstable and manual steps are required to recover the system. ( BZ#1729215 ) Certificates issued by PKI ACME Responder connected to PKI CA may fail OCSP validation The default ACME certificate profile provided by PKI CA contains a sample OCSP URL that does not point to an actual OCSP service. As a consequence, if PKI ACME Responder is configured to use a PKI CA issuer, the certificates issued by the responder may fail OCSP validation. To work around this problem, you need to set the policyset.serverCertSet.5.default.params.authInfoAccessADLocation_0 property to a blank value in the /usr/share/pki/ca/profiles/ca/acmeServerCert.cfg configuration file: In the ACME Responder configuration file, change the line policyset.serverCertSet.5.default.params.authInfoAccessADLocation_0=http://ocsp.example.com to policyset.serverCertSet.5.default.params.authInfoAccessADLocation_0= . Restart the service and regenerate the certificate. As a result, PKI CA will generate ACME certificates with an autogenerated OCSP URL that points to an actual OCSP service. ( BZ#1868233 ) FreeRADIUS silently truncates Tunnel-Passwords longer than 249 characters If a Tunnel-Password is longer than 249 characters, the FreeRADIUS service silently truncates it. This may lead to unexpected password incompatibilities with other systems. To work around the problem, choose a password that is 249 characters or fewer. ( BZ#1723362 ) The /var/log/lastlog sparse file on IdM hosts can cause performance problems During the IdM installation, a range of 200,000 UIDs from a total of 10,000 possible ranges is randomly selected and assigned. Selecting a random range in this way significantly reduces the probability of conflicting IDs in case you decide to merge two separate IdM domains in the future. However, having high UIDs can create problems with the /var/log/lastlog file. For example, if a user with the UID of 1280000008 logs in to an IdM client, the local /var/log/lastlog file size increases to almost 400 GB. Although the actual file is sparse and does not use all that space, certain applications are not designed to identify sparse files by default and may require a specific option to handle them. For example, if the setup is complex and a backup and copy application does not handle sparse files correctly, the file is copied as if its size was 400 GB. This behavior can cause performance problems. To work around this problem: In case of a standard package, refer to its documentation to identify the option that handles sparse files. In case of a custom application, ensure that it is able to manage sparse files such as /var/log/lastlog correctly. (JIRA:RHELPLAN-59111) Potential risk when using the default value for ldap_id_use_start_tls option When using ldap:// without TLS for identity lookups, it can pose a risk for an attack vector. Particularly a man-in-the-middle (MITM) attack which could allow an attacker to impersonate a user by altering, for example, the UID or GID of an object returned in an LDAP search. Currently, the SSSD configuration option to enforce TLS, ldap_id_use_start_tls , defaults to false . Ensure that your setup operates in a trusted environment and decide if it is safe to use unencrypted communication for id_provider = ldap . 
Note id_provider = ad and id_provider = ipa are not affected as they use encrypted connections protected by SASL and GSSAPI. If it is not safe to use unencrypted communication, enforce TLS by setting the ldap_id_use_start_tls option to true in the /etc/sssd/sssd.conf file. The default behavior is planned to be changed in a future release of RHEL. (JIRA:RHELPLAN-155168) SSSD retrieves incomplete list of members if the group size exceeds 1500 members During the integration of SSSD with Active Directory, SSSD retrieves incomplete group member lists when the group size exceeds 1500 members. This issue occurs because Active Directory's MaxValRange policy, which restricts the number of members retrievable in a single query, is set to 1500 by default. To work around this problem, change the MaxValRange setting in Active Directory to accommodate larger group sizes. (JIRA:RHELDOCS-19603) 5.7.10. Desktop Disabling flatpak repositories from Software Repositories is not possible Currently, it is not possible to disable or remove flatpak repositories in the Software Repositories tool in the GNOME Software utility. ( BZ#1668760 ) Drag-and-drop does not work between desktop and applications Due to a bug in the gnome-shell-extensions package, the drag-and-drop functionality does not currently work between desktop and applications. Support for this feature will be added back in a future release. ( BZ#1717947 ) Generation 2 RHEL 8 virtual machines sometimes fail to boot on Hyper-V Server 2016 hosts When using RHEL 8 as the guest operating system on a virtual machine (VM) running on a Microsoft Hyper-V Server 2016 host, the VM in some cases fails to boot and returns to the GRUB boot menu. In addition, the following error is logged in the Hyper-V event log: This error occurs due to a UEFI firmware bug on the Hyper-V host. To work around this problem, use Hyper-V Server 2019 as the host. (BZ#1583445) 5.7.11. Graphics infrastructures radeon fails to reset hardware correctly The radeon kernel driver currently does not reset hardware in the kexec context correctly. Instead, radeon falls over, which causes the rest of the kdump service to fail. To work around this problem, disable radeon in kdump by adding the following line to the /etc/kdump.conf file: Restart the machine and kdump . After starting kdump , the force_rebuild 1 line may be removed from the configuration file. Note that in this scenario, no graphics will be available during kdump , but kdump will work successfully. (BZ#1694705) Multiple HDR displays on a single MST topology may not power on On systems using NVIDIA Turing GPUs with the nouveau driver, using a DisplayPort hub (such as a laptop dock) with multiple monitors which support HDR plugged into it may result in failure to turn on. This is due to the system erroneously thinking there is not enough bandwidth on the hub to support all of the displays. (BZ#1812577) Unable to run graphical applications using sudo command When trying to run graphical applications as a user with elevated privileges, the application fails to open with an error message. The failure happens because Xwayland is restricted by the Xauthority file to use regular user credentials for authentication. To work around this problem, use the sudo -E command to run graphical applications as a root user. ( BZ#1673073 ) VNC Viewer displays wrong colors with the 16-bit color depth on IBM Z The VNC Viewer application displays wrong colors when you connect to a VNC session on an IBM Z server with the 16-bit color depth. 
To work around the problem, set the 24-bit color depth on the VNC server. With the Xvnc server, replace the -depth 16 option with -depth 24 in the Xvnc configuration. As a result, VNC clients display the correct colors but use more network bandwidth with the server. ( BZ#1886147 ) Hardware acceleration is not supported on ARM Built-in graphics drivers do not support hardware acceleration or the Vulkan API on the 64-bit ARM architecture. To enable hardware acceleration or Vulkan on ARM, install the proprietary Nvidia driver. (JIRA:RHELPLAN-57914) The RHEL installer becomes unresponsive with NVIDIA Ampere RHEL 8.3.0 does not support the NVIDIA Ampere GPUs. If you start the RHEL installation on a system that has an NVIDIA Ampere GPU, the installer becomes unresponsive. As a consequence, the installation cannot finish successfully. The NVIDIA Ampere family includes the following GPU models: GeForce RTX 3060 Ti GeForce RTX 3070 GeForce RTX 3080 GeForce RTX 3090 RTX A6000 NVIDIA A40 NVIDIA A100 NVIDIA A100 80GB To work around the problem, disable the nouveau graphics driver and install RHEL in text mode: Boot into the boot menu of the installer. Add the nouveau.modeset=0 option on the kernel command line. For details, see Editing boot options . Install RHEL on the system. Boot into the newly installed RHEL. At the boot menu, add the nouveau.modeset=0 option on the kernel command line. Disable the nouveau driver permanently: As a result, the installation has finished successfully and RHEL now runs in text mode. Optionally, you can install the proprietary NVIDIA GPU driver to enable graphics. For instructions, see How to install the NVIDIA proprietary driver on RHEL 8 . (BZ#1903890) 5.7.12. The web console Unprivileged users can access the Subscriptions page If a non-administrator navigates to the Subscriptions page of the web console, the web console displays a generic error message Cockpit had an unexpected internal error . To work around this problem, sign in to the web console with a privileged user and make sure to check the Reuse my password for privileged tasks checkbox. ( BZ#1674337 ) 5.7.13. Red Hat Enterprise Linux system roles oVirt input and the elasticsearch output functionalities are not supported in system roles Logging The oVirt input and the elasticsearch output are not supported in system roles Logging although they are mentioned in the README file. There is no workaround available at the moment. ( BZ#1889468 ) 5.7.14. Virtualization Displaying multiple monitors of virtual machines that use Wayland is not possible with QXL Using the remote-viewer utility to display more than one monitor of a virtual machine (VM) that is using the Wayland display server causes the VM to become unresponsive and the Waiting for display status message to be displayed indefinitely. To work around this problem, use virtio-gpu instead of qxl as the GPU device for VMs that use Wayland. (BZ#1642887) virsh iface-\* commands do not work consistently Currently, virsh iface-* commands, such as virsh iface-start and virsh iface-destroy , frequently fail due to configuration dependencies. Therefore, it is recommended not to use virsh iface-\* commands for configuring and managing host network connections. Instead, use the NetworkManager program and its related management applications. (BZ#1664592) Virtual machines sometimes fail to start when using many virtio-blk disks Adding a large number of virtio-blk devices to a virtual machine (VM) may exhaust the number of interrupt vectors available in the platform. 
If this occurs, the VM's guest OS fails to boot, and displays a dracut-initqueue[392]: Warning: Could not boot error. ( BZ#1719687 ) Attaching LUN devices to virtual machines using virtio-blk does not work The q35 machine type does not support transitional virtio 1.0 devices, and RHEL 8 therefore lacks support for features that were deprecated in virtio 1.0. In particular, it is not possible on a RHEL 8 host to send SCSI commands from virtio-blk devices. As a consequence, attaching a physical disk as a LUN device to a virtual machine fails when using the virtio-blk controller. Note that physical disks can still be passed through to the guest operating system, but they should be configured with the device='disk' option rather than device='lun' . (BZ#1777138) Virtual machines using Cooperlake cannot boot when TSX is disabled on the host Virtual machines (VMs) that use the Cooperlake CPU model currently fail to boot when the TSX CPU flag is disabled on the host. Instead, the host displays the following error message: To make VMs with Cooperlake usable on such a host, disable the HLE, RTM, and TAA_NO flags in the VM's XML configuration: ( BZ#1860743 ) Virtual machines sometimes cannot boot on Witherspoon hosts Virtual machines (VMs) that use the pseries-rhel7.6.0-sxxm machine type in some cases fail to boot on Power9 S922LC for HPC hosts (also known as Witherspoon) that use the DD2.2 or DD2.3 CPU. Attempting to boot such a VM instead generates the following error message: To work around this problem, configure the VM's XML configuration as follows: ( BZ#1732726 ) 5.7.15. RHEL in cloud environments GPU problems on Azure NV6 instances When running RHEL 8 as a guest operating system on a Microsoft Azure NV6 instance, resuming the virtual machine (VM) from hibernation sometimes causes the VM's GPU to work incorrectly. When this occurs, the kernel logs the following message: (BZ#1846838) kdump sometimes does not start on Azure and Hyper-V On RHEL 8 guest operating systems hosted on the Microsoft Azure or Hyper-V hypervisors, starting the kdump kernel in some cases fails when post-exec notifiers are enabled. To work around this problem, disable crash kexec post notifiers: (BZ#1865745) Setting static IP in a RHEL 8 virtual machine on a VMWare host does not work Currently, when using RHEL 8 as a guest operating system of a virtual machine (VM) on a VMWare host, the DatasourceOVF function does not work correctly. As a consequence, if you use the cloud-init utility to set the VM's network to a static IP and then reboot the VM, the VM's network will be changed to DHCP. ( BZ#1750862 ) Core dumping RHEL 8 virtual machines with certain NICs to a remote machine on Azure takes longer than expected Currently, using the kdump utility to save the core dump file of a RHEL 8 virtual machine (VM) on a Microsoft Azure hypervisor to a remote machine does not work correctly when the VM is using a NIC with accelerated networking enabled. As a consequence, the dump file is saved after approximately 200 seconds, instead of immediately. In addition, the following error message is logged on the console before the dump file is saved. (BZ#1854037) TX/RX packet counters do not increase after virtual machines resume from hibernation The TX/RX packet counters stop increasing when a RHEL 8 virtual machine (VM), with a CX4 VF NIC, resumes from hibernation on Microsoft Azure. To keep the counters working, restart the VM. Note that doing so will reset the counters.
(BZ#1876527) RHEL 8 virtual machines fail to resume from hibernation on Azure The GUID of the virtual function (VF), vmbus device , changes when a RHEL 8 virtual machine (VM), with SR-IOV enabled, is hibernated and deallocated on Microsoft Azure . As a result, when the VM is restarted, it fails to resume and crashes. As a workaround, hard reset the VM using the Azure serial console. (BZ#1876519) Migrating a POWER9 guest from a RHEL 7-ALT host to RHEL 8 fails Currently, migrating a POWER9 virtual machine from a RHEL 7-ALT host system to RHEL 8 becomes unresponsive with a "Migration status: active" status. To work around this problem, disable Transparent Huge Pages (THP) on the RHEL 7-ALT host, which enables the migration to complete successfully. (BZ#1741436) 5.7.16. Supportability redhat-support-tool does not work with the FUTURE crypto policy Because a cryptographic key used by a certificate on the Customer Portal API does not meet the requirements by the FUTURE system-wide cryptographic policy, the redhat-support-tool utility does not work with this policy level at the moment. To work around this problem, use the DEFAULT crypto policy while connecting to the Customer Portal API. ( BZ#1802026 ) 5.7.17. Containers UDICA is not expected to work with 1.0 stable stream UDICA, the tool to generate SELinux policies for containers, is not expected to work with containers that are run via podman 1.0.x in the container-tools:1.0 module stream. (JIRA:RHELPLAN-25571) podman system connection add does not automatically set the default connection The podman system connection add command does not automatically set the first connection to be the default connection. To set the default connection, you must manually run the command podman system connection default <connection_name> . ( BZ#1881894 )
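For reference, a minimal sketch of registering a remote connection and then marking it as the default might look as follows on podman versions that provide the system connection subcommands; the connection name and destination URI here are placeholders, not values taken from the release notes:
podman system connection add my-remote ssh://core@remote.example.com/run/user/1000/podman/podman.sock
podman system connection default my-remote
podman system connection list
The explicit podman system connection default step is needed because, as noted above, adding the first connection does not automatically make it the default; podman system connection list lets you confirm which connection is currently marked as the default.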
[ "tuned-adm profile throughput-performance optimize-serial-console", "cat /proc/sys/kernel/printk", "echo 'add_dracutmodules+=\" network-legacy \"' > /etc/dracut.conf.d/enable-network-legacy.conf dracut -vf --regenerate-all", "ip=__IP_address__:__peer__:__gateway_IP_address__:__net_mask__:__host_name__:__interface_name__:__configuration_method__", "grubby --args=\"page_owner=on\" --update-kernel=0 reboot", "perf script record flamegraph -F 99 -g -- stress --cpu 1 --vm-bytes 128M --timeout 10s stress: info: [4461] dispatching hogs: 1 cpu, 0 io, 0 vm, 0 hdd stress: info: [4461] successful run completed in 10s [ perf record: Woken up 1 times to write data ] [ perf record: Captured and wrote 0.060 MB perf.data (970 samples) ] perf script report flamegraph dumping data to flamegraph.html", "pcs resource op add my-rsc promote on-fail=\"demote\"", "pcs resource op add my-rsc monitor interval=\"10s\" on-fail=\"demote\" role=\"Master\"", "yum module install ruby:2.7", "yum module install nodejs:14", "yum module install php:7.4", "yum module install nginx:1.18", "CustomLog journald:priority format|nickname", "CustomLog journald:info combined", "sudo dnf install -y dotnet-sdk-5.0", "yum install gcc-toolset-10", "scl enable gcc-toolset-10 tool", "scl enable gcc-toolset-10 bash", "podman pull registry.redhat.io/<image_name>", "update-crypto-policies --set DEFAULT:AD-SUPPORT", "update-crypto-policies --set DEFAULT:AD-SUPPORT", "yum install gnome-session-kiosk-session", "#!/bin/sh gedit &", "chmod +x USDHOME/.local/bin/redhat-kiosk", "ip link add link en2 name mymacvtap0 address 52:54:00:11:11:11 type macvtap mode bridge chown myuser /dev/tapUSD(cat /sys/class/net/mymacvtap0/ifindex) ip link set mymacvtap0 up", "<interface type='ethernet'> <model type='virtio'/> <mac address='52:54:00:11:11:11'/> <target dev='mymacvtap0' managed='no'/> </interface>", "<channel type='unix'> <target type='virtio' name='org.qemu.guest_agent.0'/> </channel>", "hugepagesz=2M hugepages=512", "hugepages=256 hugepagesz=2M hugepages=512", "hugepages=256 default_hugepagesz=2M hugepages=256 hugepages=256 default_hugepagesz=2M", "[<domain>:]<bus>:<dev>.<func>[/<dev>.<func>]* pci:<vendor>:<device>[:<subvendor>:<subdevice>]", "SyntaxError: Missing parentheses in call to 'print'", "xfs_info /mount-point | grep ftype", "i915.force_probe= pci-id", "<memtune> <hard_limit unit='KiB'>N</hard_limit> </memtune>", "wget --trust-server-names --input-metalink`", "wget --trust-server-names --input-metalink <(curl -s USDURL)", "update-crypto-policies --set LEGACY", "~]# yum install network-scripts", "for blueprint in USD(find /var/lib/lorax/composer/blueprints/git/workspace/master -name '*.toml'); do composer-cli blueprints push \"USD{blueprint}\"; done", "url --url=https://SERVER/PATH --noverifyssl", "inst.ks=<URL> inst.noverifyssl", "insights-client --register", "dnf module enable libselinux-python dnf install libselinux-python", "dnf module install libselinux-python:2.8/common", "SignatureAlgorithms = RSA+SHA256:RSA+SHA512:RSA+SHA384:ECDSA+SHA256:ECDSA+SHA512:ECDSA+SHA384 MaxProtocol = TLSv1.2", "bash -c command", "package xorg-x11-server-common has been added to the list of excluded packages, but it can't be removed from the current software selection without breaking the installation.", "NONE:+VERS-ALL:-VERS-TLS1.3:+MAC-ALL:+DHE-RSA:+AES-256-GCM:+SIGN-RSA-SHA384:+COMP-ALL:+GROUP-ALL", "NONE:+VERS-ALL:-VERS-TLS1.3:+MAC-ALL:+ECDHE-RSA:+AES-128-CBC:+SIGN-RSA-SHA1:+COMP-ALL:+GROUP-ALL", "update-crypto-policies --set DEFAULT:NO-CAMELLIA", 
"ip=.::::<host_name>::dhcp", "lorax '--product=Red Hat Enterprise Linux' --version=8.3 --release=8.3 --source=<URL_to_BaseOS_repository> --source=<URL_to_AppStream_repository> --nomacboot --buildarch=x86_64 '--volid=RHEL 8.3' <output_directory>", "Bad or missing usercopy whitelist? Kernel memory exposure attempt detected from SLUB object 'dma-kmalloc-192' (offset 0, size 144)! WARNING: CPU: 0 PID: 8519 at mm/usercopy.c:83 usercopy_warn+0xac/0xd8", "systemctl stop rngd systemctl disable rngd", "echo 6000 > /proc/sys/net/core/netdev_max_backlog", "systemctl restart kdump.service", "tuned-adm profile throughput-performance optimize-serial-console", "[ 2.817152] acpi PNP0A08:00: [Firmware Bug]: ECAM area [mem 0x30000000-0x31ffffff] not reserved in ACPI namespace [ 2.827911] acpi PNP0A08:00: ECAM at [mem 0x30000000-0x31ffffff] for [bus 00-1f]", "03:00.0 Non-Volatile memory controller: Sandisk Corp WD Black 2018/PC SN720 NVMe SSD (prog-if 02 [NVM Express]) Capabilities: [900 v1] L1 PM Substates L1SubCap: PCI-PM_L1.2- PCI-PM_L1.1- ASPM_L1.2+ ASPM_L1.1- L1_PM_Substates+ PortCommonModeRestoreTime=255us PortTPowerOnTime=10us L1SubCtl1: PCI-PM_L1.2- PCI-PM_L1.1- ASPM_L1.2- ASPM_L1.1- T_CommonMode=0us LTR1.2_Threshold=0ns L1SubCtl2: T_PwrOn=10us", "-mca btl openib -mca pml ucx -x UCX_NET_DEVICES=mlx5_ib0", "-mca pml_ucx_priority 5", "The guest operating system reported that it failed with the following error code: 0x1E", "dracut_args --omit-drivers \"radeon\" force_rebuild 1", "echo 'blacklist nouveau' >> /etc/modprobe.d/blacklist.conf", "the CPU is incompatible with host CPU: Host CPU does not provide required features: hle, rtm", "<feature policy='disable' name='hle'/> <feature policy='disable' name='rtm'/> <feature policy='disable' name='taa-no'/>", "qemu-kvm: Requested safe indirect branch capability level not supported by kvm", "<domain type='qemu' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'> <qemu:commandline> <qemu:arg value='-machine'/> <qemu:arg value='cap-ibs=workaround'/> </qemu:commandline>", "hv_irq_unmask() failed: 0x5", "echo N > /sys/module/kernel/parameters/crash_kexec_post_notifiers", "device (eth0): linklocal6: DAD failed for an EUI-64 address" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.3_release_notes/rhel-8-3-0-release
probe::sunrpc.svc.create
probe::sunrpc.svc.create Name probe::sunrpc.svc.create - Create an RPC service Synopsis sunrpc.svc.create Values bufsize the buffer size pg_nvers the number of supported versions progname the name of the program prog the number of the program
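As an illustration only, a SystemTap one-liner that prints these values each time an RPC service is created might look like the following; it assumes the systemtap package and matching kernel debuginfo are installed, and the probe only fires when an RPC service such as nfs-server is started:
stap -e 'probe sunrpc.svc.create { printf("%s (prog %d): versions=%d bufsize=%d\n", progname, prog, pg_nvers, bufsize) }'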
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-sunrpc-svc-create
Chapter 9. availability
Chapter 9. availability This chapter describes the commands under the availability command. 9.1. availability zone list List availability zones and their status Usage: Table 9.1. Command arguments Value Summary -h, --help Show this help message and exit --compute List compute availability zones --network List network availability zones --volume List volume availability zones --long List additional fields in output Table 9.2. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 9.3. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 9.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 9.5. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
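For example, the following invocations are sketches of how the options above combine; the quoted column names are typical for this command but may vary slightly between releases:
openstack availability zone list --compute --long -f json
openstack availability zone list -c "Zone Name" -c "Zone Status" --sort-column "Zone Name"
The first command restricts the listing to compute availability zones and prints the additional --long fields as JSON; the second limits the output to two columns and sorts the rows by zone name.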
[ "openstack availability zone list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--compute] [--network] [--volume] [--long]" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/command_line_interface_reference/availability
Chapter 2. CRUSH Administration
Chapter 2. CRUSH Administration The Controlled Replication Under Scalable Hashing (CRUSH) algorithm determines how to store and retrieve data by computing data storage locations. Any sufficiently advanced technology is indistinguishable from magic. -- Arthur C. Clarke 2.1. Introducing CRUSH The CRUSH map for your storage cluster describes your device locations within CRUSH hierarchies and a rule for each hierarchy that determines how Ceph stores data. The CRUSH map contains at least one hierarchy of nodes and leaves. The nodes of a hierarchy, called "buckets" in Ceph, are any aggregation of storage locations as defined by their type. For example, rows, racks, chassis, hosts, and devices. Each leaf of the hierarchy consists essentially of one of the storage devices in the list of storage devices. A leaf is always contained in one node or "bucket." A CRUSH map also has a list of rules that determine how CRUSH stores and retrieves data. Note Storage devices are added to the CRUSH map when adding an OSD to the cluster. The CRUSH algorithm distributes data objects among storage devices according to a per-device weight value, approximating a uniform probability distribution. CRUSH distributes objects and their replicas or erasure-coding chunks according to the hierarchical cluster map an administrator defines. The CRUSH map represents the available storage devices and the logical buckets that contain them for the rule, and by extension each pool that uses the rule. To map placement groups to OSDs across failure domains or performance domains, a CRUSH map defines a hierarchical list of bucket types; that is, under types in the generated CRUSH map. The purpose of creating a bucket hierarchy is to segregate the leaf nodes by their failure domains or performance domains or both. Failure domains include hosts, chassis, racks, power distribution units, pods, rows, rooms, and data centers. Performance domains include failure domains and OSDs of a particular configuration. For example, SSDs, SAS drives with SSD journals, SATA drives, and so on. Devices have the notion of a class , such as hdd , ssd and nvme to more rapidly build CRUSH hierarchies with a class of devices. With the exception of the leaf nodes representing OSDs, the rest of the hierarchy is arbitrary, and you can define it according to your own needs if the default types do not suit your requirements. We recommend adapting your CRUSH map bucket types to your organization's hardware naming conventions and using instance names that reflect the physical hardware names. Your naming practice can make it easier to administer the cluster and troubleshoot problems when an OSD or other hardware malfunctions and the administrator needs remote or physical access to the host or other hardware. In the following example, the bucket hierarchy has four leaf buckets ( osd 1-4 ), two node buckets ( host 1-2 ) and one rack node ( rack 1 ). Since leaf nodes reflect storage devices declared under the devices list at the beginning of the CRUSH map, there is no need to declare them as bucket instances. The second lowest bucket type in the hierarchy usually aggregates the devices; that is, it is usually the computer containing the storage media, and uses whatever term administrators prefer to describe it, such as "node", "computer", "server," "host", "machine", and so on. In high density environments, it is increasingly common to see multiple hosts/nodes per card and per chassis. 
Make sure to account for card and chassis failure too, for example, the need to pull a card or chassis if a node fails can result in bringing down numerous hosts/nodes and their OSDs. When declaring a bucket instance, specify its type, give it a unique name as a string, assign it an optional unique ID expressed as a negative integer, specify a weight relative to the total capacity or capability of its items, specify the bucket algorithm such as straw2 , and the hash that is usually 0 reflecting hash algorithm rjenkins1 . A bucket can have one or more items. The items can consist of node buckets or leaves. Items can have a weight that reflects the relative weight of the item. 2.1.1. Placing Data Dynamically Ceph Clients and Ceph OSDs both use the CRUSH map and the CRUSH algorithm. Ceph Clients: By distributing CRUSH maps to Ceph clients, CRUSH empowers Ceph clients to communicate with OSDs directly. This means that Ceph clients avoid a centralized object look-up table that could act as a single point of failure, a performance bottleneck, a connection limitation at a centralized look-up server and a physical limit to the storage cluster's scalability. Ceph OSDs: By distributing CRUSH maps to Ceph OSDs, Ceph empowers OSDs to handle replication, backfilling and recovery. This means that the Ceph OSDs handle storage of object replicas (or coding chunks) on behalf of the Ceph client. It also means that Ceph OSDs know enough about the cluster to re-balance the cluster (backfilling) and recover from failures dynamically. 2.1.2. Establishing Failure Domains Having multiple object replicas or M erasure coding chunks helps prevent data loss, but it is not sufficient to address high availability. By reflecting the underlying physical organization of the Ceph Storage Cluster, CRUSH can model-and thereby address-potential sources of correlated device failures. By encoding the cluster's topology into the cluster map, CRUSH placement policies can separate object replicas or erasure coding chunks across different failure domains while still maintaining the desired pseudo-random distribution. For example, to address the possibility of concurrent failures, it might be desirable to ensure that data replicas or erasure coding chunks are on devices using different shelves, racks, power supplies, controllers or physical locations. This helps to prevent data loss and allows the cluster to operate in a degraded state. 2.1.3. Establishing Performance Domains Ceph can support multiple hierarchies to separate one type of hardware performance profile from another type of hardware performance profile. For example, CRUSH can create one hierarchy for hard disk drives and another hierarchy for SSDs. Performance domains- hierarchies that take the performance profile of the underlying hardware into consideration- are increasingly popular due to the need to support different performance characteristics. Operationally, these are just CRUSH maps with more than one root type bucket. Use case examples include: Object Storage: Ceph hosts that serve as an object storage back end for S3 and Swift interfaces might take advantage of less expensive storage media such as SATA drives that might not be suitable for VMs- reducing the cost per gigabyte for object storage, while separating more economical storage hosts from more performing ones intended for storing volumes and images on cloud platforms. HTTP tends to be the bottleneck in object storage systems. 
Cold Storage : Systems designed for cold storage- infrequently accessed data, or data retrieval with relaxed performance requirements- might take advantage of less expensive storage media and erasure coding. However, erasure coding might require a bit of additional RAM and CPU, and thus differ in RAM and CPU requirements from a host used for object storage or VMs. SSD-backed Pools: SSDs are expensive, but they provide significant advantages over hard disk drives. SSDs have no seek time and they provide high total throughput. In addition to using SSDs for journaling, a cluster can support SSD-backed pools. Common use cases include high performance SSD pools. For example, it is possible to map the .rgw.buckets.index pool for the Ceph Object Gateway to SSDs instead of SATA drives. A CRUSH map supports the notion of a device class . Ceph can discover aspects of a storage device and automatically assign a class such as hdd , ssd or nvme . However, CRUSH is not limited to these defaults. For example, CRUSH hierarchies might also be used to separate different types of workloads. For example, an SSD might be used for a journal or write-ahead log, a bucket index or for raw object storage. CRUSH can support different device classes, such as ssd-bucket-index or ssd-object-storage so Ceph does not use the same storage media for different workloads- making performance more predictable and consistent. Behind the scenes, Ceph generates a crush root for each device-class. These roots should only be modified by setting or changing device classes on OSDs. You can view the generated roots using the following command: Example 2.1.4. Using Different Device Classes To create performance domains, use device classes and a single CRUSH hierarchy. There is no need to manually edit the CRUSH map in this particular operation. Simply add OSDs to the CRUSH hierarchy, then do the following: Add a class to each device: Syntax Example Create rules to use the devices: Syntax Example Set pools to use the rules: Syntax Example 2.2. CRUSH Hierarchies The CRUSH map is a directed acyclic graph, so it can accommodate multiple hierarchies, for example, performance domains. The easiest way to create and modify a CRUSH hierarchy is with the Ceph CLI; however, you can also decompile a CRUSH map, edit it, recompile it, and activate it. When declaring a bucket instance with the Ceph CLI, you must specify its type and give it a unique string name. Ceph automatically assigns a bucket ID, sets the algorithm to straw2 , sets the hash to 0 reflecting rjenkins1 and sets a weight. When modifying a decompiled CRUSH map, assign the bucket a unique ID expressed as a negative integer (optional), specify a weight relative to the total capacity/capability of its item(s), specify the bucket algorithm (usually straw2 ), and the hash (usually 0 , reflecting hash algorithm rjenkins1 ). A bucket can have one or more items. The items can consist of node buckets (for example, racks, rows, hosts) or leaves (for example, an OSD disk). Items can have a weight that reflects the relative weight of the item. When modifying a decompiled CRUSH map, you can declare a node bucket with the following syntax: For example, using the diagram above, we would define two host buckets and one rack bucket. The OSDs are declared as items within the host buckets: Note In the foregoing example, note that the rack bucket does not contain any OSDs. Rather it contains lower level host buckets, and includes the sum total of their weight in the item entry. 2.2.1. 
CRUSH Location A CRUSH location is the position of an OSD in terms of the CRUSH map's hierarchy. When you express a CRUSH location on the command line interface, a CRUSH location specifier takes the form of a list of name/value pairs describing the OSD's position. For example, if an OSD is in a particular row, rack, chassis and host, and is part of the default CRUSH tree, its crush location could be described as: Note: The order of the keys does not matter. The key name (left of = ) must be a valid CRUSH type . By default these include root , datacenter , room , row , pod , pdu , rack , chassis and host . You might edit the CRUSH map to change the types to suit your needs. You do not need to specify all the buckets/keys. For example, by default, Ceph automatically sets a ceph-osd daemon's location to be root=default host={HOSTNAME} (based on the output from hostname -s ). 2.2.2. Adding a Bucket To add a bucket instance to your CRUSH hierarchy, specify the bucket name and its type. Bucket names must be unique in the CRUSH map. Important Using colons (:) in bucket names is not supported. Add an instance of each bucket type you need for your hierarchy. The following example demonstrates adding buckets for a row with a rack of SSD hosts and a rack of hosts for object storage. Once you have completed these steps, view your tree. Notice that the hierarchy remains flat. You must move your buckets into a hierarchical position after you add them to the CRUSH map. 2.2.3. Moving a Bucket When you create your initial cluster, Ceph has a default CRUSH map with a root bucket named default and your initial OSD hosts appear under the default bucket. When you add a bucket instance to your CRUSH map, it appears in the CRUSH hierarchy, but it does not necessarily appear under a particular bucket. To move a bucket instance to a particular location in your CRUSH hierarchy, specify the bucket name and its type. For example: Once you have completed these steps, you can view your tree. Note You can also use ceph osd crush create-or-move to create a location while moving an OSD. 2.2.4. Removing a Bucket To remove a bucket instance from your CRUSH hierarchy, specify the bucket name. For example: Or: Note The bucket must be empty in order to remove it. If you are removing higher level buckets (for example, a root like default ), check to see if a pool uses a CRUSH rule that selects that bucket. If so, you need to modify your CRUSH rules; otherwise, peering fails. 2.2.5. Bucket Algorithms When you create buckets using the Ceph CLI, Ceph sets the algorithm to straw2 by default. Ceph supports four bucket algorithms, each representing a tradeoff between performance and reorganization efficiency. If you are unsure of which bucket type to use, we recommend using a straw2 bucket. The bucket algorithms are: Uniform: Uniform buckets aggregate devices with exactly the same weight. For example, when firms commission or decommission hardware, they typically do so with many machines that have exactly the same physical configuration (for example, bulk purchases). When storage devices have exactly the same weight, you can use the uniform bucket type, which allows CRUSH to map replicas into uniform buckets in constant time. With non-uniform weights, you should use another bucket algorithm. List : List buckets aggregate their content as linked lists. 
Based on the RUSH (Replication Under Scalable Hashing) P algorithm, a list is a natural and intuitive choice for an expanding cluster : either an object is relocated to the newest device with some appropriate probability, or it remains on the older devices as before. The result is optimal data migration when items are added to the bucket. Items removed from the middle or tail of the list, however, can result in a significant amount of unnecessary movement, making list buckets most suitable for circumstances in which they never, or very rarely shrink . Tree : Tree buckets use a binary search tree. They are more efficient than listing buckets when a bucket contains a larger set of items. Based on the RUSH (Replication Under Scalable Hashing) R algorithm, tree buckets reduce the placement time to zero (log n ), making them suitable for managing much larger sets of devices or nested buckets. Straw2 (default): List and Tree buckets use a divide and conquer strategy in a way that either gives certain items precedence, for example, those at the beginning of a list or obviates the need to consider entire subtrees of items at all. That improves the performance of the replica placement process, but can also introduce suboptimal reorganization behavior when the contents of a bucket change due an addition, removal, or re-weighting of an item. The straw2 bucket type allows all items to fairly "compete" against each other for replica placement through a process analogous to a draw of straws. 2.3. Ceph OSDs in CRUSH Once you have a CRUSH hierarchy for the OSDs, add OSDs to the CRUSH hierarchy. You can also move or remove OSDs from an existing hierarchy. The Ceph CLI usage has the following values: id Description The numeric ID of the OSD. Type Integer Required Yes Example 0 name Description The full name of the OSD. Type String Required Yes Example osd.0 weight Description The CRUSH weight for the OSD. Type Double Required Yes Example 2.0 root Description The name of the root bucket of the hierarchy or tree in which the OSD resides. Type Key-value pair. Required Yes Example root=default , root=replicated_rule , and so on bucket-type Description One or more name-value pairs, where the name is the bucket type and the value is the bucket's name. You can specify a CRUSH location for an OSD in the CRUSH hierarchy. Type Key-value pairs. Required No Example datacenter=dc1 room=room1 row=foo rack=bar host=foo-bar-1 2.3.1. Viewing OSDs in CRUSH The ceph osd crush tree command prints CRUSH buckets and items in a tree view. Use this command to determine a list of OSDs in a particular bucket. It will print output similar to ceph osd tree . To return additional details, execute the following: The command returns an output similar to the following: 2.3.2. Adding an OSD to CRUSH Adding a Ceph OSD to a CRUSH hierarchy is the final step before you might start an OSD (rendering it up and in ) and Ceph assigns placement groups to the OSD. Red Hat Ceph Storage 5 and later releases uses Ceph Orchestrator to deploy OSDs in the CRUSH hierarchy. For more information, see Deploying Ceph OSDs on all available devices . To add the OSDs manually, you must prepare a Ceph OSD before you add it to the CRUSH hierarchy. For example creating a Ceph OSD on a single node: The CRUSH hierarchy is notional, so the ceph osd crush add command allows you to add OSDs to the CRUSH hierarchy wherever you wish. The location you specify should reflect its actual location. 
If you specify at least one bucket, the command places the OSD into the most specific bucket you specify, and it moves that bucket underneath any other buckets you specify. To add an OSD to a CRUSH hierarchy: Important If you specify only the root bucket, the command attaches the OSD directly to the root. However, CRUSH rules expect OSDs to be inside of hosts or chassis, and host or chassis should be inside of other buckets reflecting your cluster topology. The following example adds osd.0 to the hierarchy: Note You can also use ceph osd crush set or ceph osd crush create-or-move to add an OSD to the CRUSH hierarchy. 2.3.3. Moving an OSD within a CRUSH Hierarchy For Red Hat Ceph Storage 5 and later releases, you can set the initial CRUSH location for a host using the Ceph Orchestrator For more information, see Setting the initial CRUSH location of host . For replacing the OSDs, you can see Replacing the OSDs using the Ceph Orchestrator If the storage cluster topology changes, you can manually move an OSD in the CRUSH hierarchy to reflect its actual location. Important Moving an OSD in the CRUSH hierarchy means that Ceph will recompute which placement groups get assigned to the OSD, potentially resulting in significant redistribution of data. To move an OSD within the CRUSH hierarchy: Note You can also use ceph osd crush create-or-move to move an OSD within the CRUSH hierarchy. 2.3.4. Removing an OSD from a CRUSH Hierarchy Removing an OSD from a CRUSH hierarchy is the first step when you want to remove an OSD from your cluster. When you remove the OSD from the CRUSH map, CRUSH recomputes which OSDs get the placement groups and data re-balances accordingly. See Adding/Removing OSDs for additional details. For Red Hat Ceph Storage 5 and later releases, you can use the Ceph Orchestrator to remove OSDs from the storage cluster. For more information, see Removing the OSD daemons using the Ceph Orchestrator . To manually remove an OSD from the CRUSH map of a running cluster, execute the following: 2.4. Device Classes Ceph's CRUSH map provides extraordinary flexibility in controlling data placement. This is one of Ceph's greatest strengths. Early Ceph deployments used hard disk drives almost exclusively. Today, Ceph clusters are frequently built with multiple types of storage devices: HDD, SSD, NVMe, or even various classes of the foregoing. For example, it is common in Ceph Object Gateway deployments to have storage policies where clients can store data on slower HDDs and other storage policies for storing data on fast SSDs. Ceph Object Gateway deployments might even have a pool backed by fast SSDs for bucket indices. Additionally, OSD nodes also frequently have SSDs used exclusively for journals or write-ahead logs that do NOT appear in the CRUSH map. These complex hardware scenarios historically required manually editing the CRUSH map, which can be time-consuming and tedious. It is not required to have different CRUSH hierarchies for different classes of storage devices. CRUSH rules work in terms of the CRUSH hierarchy. However, if different classes of storage devices reside in the same hosts, the process becomes more complicated- requiring users to create multiple CRUSH hierarchies for each class of device, and then disable the osd crush update on start option that automates much of the CRUSH hierarchy management. Device classes eliminate this tediousness by telling the CRUSH rule what class of device to use, dramatically simplifying CRUSH management tasks. 
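As a brief sketch of the workflow that the following subsections describe in detail (the OSD ID and class name are examples only):
ceph osd crush rm-device-class osd.0
ceph osd crush set-device-class ssd osd.0
ceph osd crush class ls
ceph osd crush class ls-osd ssd
The rm-device-class step is only needed when the OSD already has an automatically assigned class, because an existing class must be removed before a new one can be set; the last two commands list the known device classes and the OSDs that belong to the ssd class.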
Note The ceph osd tree command has a column reflecting a device class. The following sections detail device class usage. See Using Different Device Classes and CRUSH Storage Strategy Examples for additional examples. 2.4.1. Setting a Device Class To set a device class for an OSD, execute the following: For example: Note Ceph might assign a class to a device automatically. However, class names are simply arbitrary strings. There is no requirement to adhere to hdd , ssd or nvme . In the foregoing example, a device class named bucket-index might indicate an SSD device that a Ceph Object Gateway pool uses exclusively bucket index workloads. To change a device class that was already set, use ceph osd crush rm-device-class first. 2.4.2. Removing a Device Class To remove a device class for an OSD, execute the following: For example: 2.4.3. Renaming a Device Class To rename a device class for all OSDs that use that class, execute the following: For example: 2.4.4. Listing Device Classes To list device classes in the CRUSH map, execute the following: The output will look something like this: 2.4.5. Listing OSDs of a Device Class To list all OSDs that belong to a particular class, execute the following: For example: The output is simply a list of OSD numbers. For example: 2.4.6. Listing CRUSH Rules by Class To list all crush rules that reference the same class, execute the following: For example: 2.5. CRUSH Weights The CRUSH algorithm assigns a weight value in terabytes (by convention) per OSD device with the objective of approximating a uniform probability distribution for write requests that assign new data objects to PGs and PGs to OSDs. For this reason, as a best practice, we recommend creating CRUSH hierarchies with devices of the same type and size, and assigning the same weight. We also recommend using devices with the same I/O and throughput characteristics so that you will also have uniform performance characteristics in your CRUSH hierarchy, even though performance characteristics do not affect data distribution. Since using uniform hardware is not always practical, you might incorporate OSD devices of different sizes and use a relative weight so that Ceph will distribute more data to larger devices and less data to smaller devices. 2.5.1. Set an OSD's Weight in Terabytes To set an OSD CRUSH weight in Terabytes within the CRUSH map, execute the following command Where: name Description The full name of the OSD. Type String Required Yes Example osd.0 weight Description The CRUSH weight for the OSD. This should be the size of the OSD in Terabytes, where 1.0 is 1 Terabyte. Type Double Required Yes Example 2.0 This setting is used when creating an OSD or adjusting the CRUSH weight immediately after adding the OSD. It usually does not change over the life of the OSD. 2.5.2. Set a Bucket's OSD Weights Using ceph osd crush reweight can be time-consuming. You can set (or reset) all Ceph OSD weights under a bucket (row, rack, node, and so on) by executing: Where, name is the name of the CRUSH bucket. 2.5.3. Set an OSD's in Weight For the purposes of ceph osd in and ceph osd out , an OSD is either in the cluster or out of the cluster. That is how a monitor records an OSD's status. However, even though an OSD is in the cluster, it might be experiencing a malfunction such that you do not want to rely on it as much until you fix it (for example, replace a storage drive, change out a controller, and so on). 
You can increase or decrease the in weight of a particular OSD (that is, without changing its weight in Terabytes) by executing: Where: id is the OSD number. weight is a range from 0.0-1.0, where 0 is not in the cluster (that is, it does not have any PGs assigned to it) and 1.0 is in the cluster (that is, the OSD receives the same number of PGs as other OSDs). 2.5.4. Set an OSD's Weight by Utilization CRUSH is designed to approximate a uniform probability distribution for write requests that assign new data objects PGs and PGs to OSDs. However, a cluster might become imbalanced anyway. This can happen for a number of reasons. For example: Multiple Pools: You can assign multiple pools to a CRUSH hierarchy, but the pools might have different numbers of placement groups, size (number of replicas to store), and object size characteristics. Custom Clients: Ceph clients such as block device, object gateway and filesystem share data from their clients and stripe the data as objects across the cluster as uniform-sized smaller RADOS objects. So except for the foregoing scenario, CRUSH usually achieves its goal. However, there is another case where a cluster can become imbalanced: namely, using librados to store data without normalizing the size of objects. This scenario can lead to imbalanced clusters (for example, storing 100 1 MB objects and 10 4 MB objects will make a few OSDs have more data than the others). Probability: A uniform distribution will result in some OSDs with more PGs and some with less. For clusters with a large number of OSDs, the statistical outliers will be further out. You can reweight OSDs by utilization by executing the following: For example: Where: threshold is a percentage of utilization such that OSDs facing higher data storage loads will receive a lower weight and thus fewer PGs assigned to them. The default value is 120 , reflecting 120%. Any value from 100+ is a valid threshold. Optional. weight_change_amount is the amount to change the weight. Valid values are greater than 0.0 - 1.0 . The default value is 0.05 . Optional. number_of_OSDs is the maximum number of OSDs to reweight. For large clusters, limiting the number of OSDs to reweight prevents significant rebalancing. Optional. no-increasing is off by default. Increasing the osd weight is allowed when using the reweight-by-utilization or test-reweight-by-utilization commands. If this option is used with these commands, it prevents the OSD weight from increasing, even if the OSD is underutilized. Optional. Important Executing reweight-by-utilization is recommended and somewhat inevitable for large clusters. Utilization rates might change over time, and as your cluster size or hardware changes, the weightings might need to be updated to reflect changing utilization. If you elect to reweight by utilization, you might need to re-run this command as utilization, hardware or cluster size change. Executing this or other weight commands that assign a weight will override the weight assigned by this command (for example, osd reweight-by-utilization , osd crush weight , osd weight , in or out ). 2.5.5. Set an OSD's Weight by PG Distribution In CRUSH hierarchies with a smaller number of OSDs, it's possible for some OSDs to get more PGs than other OSDs, resulting in a higher load. You can reweight OSDs by PG distribution to address this situation by executing the following: Where: poolname is the name of the pool. 
Ceph will examine how the pool assigns PGs to OSDs and reweight the OSDs according to this pool's PG distribution. Note that multiple pools could be assigned to the same CRUSH hierarchy. Reweighting OSDs according to one pool's distribution could have unintended effects for other pools assigned to the same CRUSH hierarchy if they do not have the same size (number of replicas) and PGs. 2.5.6. Recalculate a CRUSH Tree's Weights CRUSH tree buckets should be the sum of their leaf weights. If you manually edit the CRUSH map weights, you should execute the following to ensure that the CRUSH bucket tree accurately reflects the sum of the leaf OSDs under the bucket. 2.6. Primary Affinity When a Ceph Client reads or writes data, it always contacts the primary OSD in the acting set. For set [2, 3, 4] , osd.2 is the primary. Sometimes an OSD is not well suited to act as a primary compared to other OSDs (for example, it has a slow disk or a slow controller). To prevent performance bottlenecks (especially on read operations) while maximizing utilization of your hardware, you can set a Ceph OSD's primary affinity so that CRUSH is less likely to use the OSD as a primary in an acting set. : Primary affinity is 1 by default ( that is, an OSD might act as a primary). You might set the OSD primary range from 0-1 , where 0 means that the OSD might NOT be used as a primary and 1 means that an OSD might be used as a primary. When the weight is < 1 , it is less likely that CRUSH will select the Ceph OSD Daemon to act as a primary. 2.7. CRUSH Rules CRUSH rules define how a Ceph client selects buckets and the primary OSD within them to store objects, and how the primary OSD selects buckets and the secondary OSDs to store replicas or coding chunks. For example, you might create a rule that selects a pair of target OSDs backed by SSDs for two object replicas, and another rule that selects three target OSDs backed by SAS drives in different data centers for three replicas. A rule takes the following form: id Description A unique whole number for identifying the rule. Purpose A component of the rule mask. Type Integer Required Yes Default 0 type Description Describes a rule for either a storage drive replicated or erasure coded. Purpose A component of the rule mask. Type String Required Yes Default replicated Valid Values Currently only replicated min_size Description If a pool makes fewer replicas than this number, CRUSH will not select this rule. Type Integer Purpose A component of the rule mask. Required Yes Default 1 max_size Description If a pool makes more replicas than this number, CRUSH will not select this rule. Type Integer Purpose A component of the rule mask. Required Yes Default 10 step take <bucket-name> [class <class-name>] Description Takes a bucket name, and begins iterating down the tree. Purpose A component of the rule. Required Yes Example step take data step take data class ssd step choose firstn <num> type <bucket-type> Description Selects the number of buckets of the given type. The number is usually the number of replicas in the pool (that is, pool size). If <num> == 0 , choose pool-num-replicas buckets (all available). If <num> > 0 && < pool-num-replicas , choose that many buckets. If <num> < 0 , it means pool-num-replicas - {num} . Purpose A component of the rule. Prerequisite Follow step take or step choose . 
Example step choose firstn 1 type row step chooseleaf firstn <num> type <bucket-type> Description Selects a set of buckets of {bucket-type} and chooses a leaf node from the subtree of each bucket in the set of buckets. The number of buckets in the set is usually the number of replicas in the pool (that is, pool size). If <num> == 0 , choose pool-num-replicas buckets (all available). If <num> > 0 && < pool-num-replicas , choose that many buckets. If <num> < 0 , it means pool-num-replicas - <num> . Purpose A component of the rule. Usage removes the need to select a device using two steps. Prerequisite Follows step take or step choose . Example step chooseleaf firstn 0 type row step emit Description Outputs the current value and empties the stack. Typically used at the end of a rule, but might also be used to pick from different trees in the same rule. Purpose A component of the rule. Prerequisite Follows step choose . Example step emit firstn versus indep Description Controls the replacement strategy CRUSH uses when OSDs are marked down in the CRUSH map. If this rule is to be used with replicated pools it should be firstn and if it is for erasure-coded pools it should be indep . Example You have a PG stored on OSDs 1, 2, 3, 4, 5 in which 3 goes down.. In the first scenario, with the firstn mode, CRUSH adjusts its calculation to select 1 and 2, then selects 3 but discovers it is down, so it retries and selects 4 and 5, and then goes on to select a new OSD 6. The final CRUSH mapping change is from 1, 2, 3, 4, 5 to 1, 2, 4, 5, 6 . In the second scenario, with indep mode on an erasure-coded pool, CRUSH attempts to select the failed OSD 3, tries again and picks out 6, for a final transformation from 1, 2, 3, 4, 5 to 1, 2, 6, 4, 5 . Important A given CRUSH rule can be assigned to multiple pools, but it is not possible for a single pool to have multiple CRUSH rules. 2.7.1. Listing Rules To list CRUSH rules from the command line, execute the following: 2.7.2. Dumping a Rule To dump the contents of a specific CRUSH rule, execute the following: 2.7.3. Adding a Simple Rule To add a CRUSH rule, you must specify a rule name, the root node of the hierarchy you wish to use, the type of bucket you want to replicate across (for example, 'rack', 'row', and so on and the mode for choosing the bucket. Ceph will create a rule with chooseleaf and 1 bucket of the type you specify. For example: Create the following rule: 2.7.4. Adding a Replicated Rule To create a CRUSH rule for a replicated pool, execute the following: Where: <name> : The name of the rule. <root> : The root of the CRUSH hierarchy. <failure-domain> : The failure domain. For example: host or rack . <class> : The storage device class. For example: hdd or ssd . For example: 2.7.5. Adding an Erasure Code Rule To add a CRUSH rule for use with an erasure coded pool, you might specify a rule name and an erasure code profile. Syntax Example Additional Resources See Erasure code profiles for more details. 2.7.6. Removing a Rule To remove a rule, execute the following and specify the CRUSH rule name: 2.8. CRUSH Tunables The Ceph project has grown exponentially with many changes and many new features. Beginning with the first commercially supported major release of Ceph, v0.48 (Argonaut), Ceph provides the ability to adjust certain parameters of the CRUSH algorithm, that is, the settings are not frozen in the source code. A few important points to consider: Adjusting CRUSH values might result in the shift of some PGs between storage nodes. 
If the Ceph cluster is already storing a lot of data, be prepared for some fraction of the data to move. The ceph-osd and ceph-mon daemons will start requiring the feature bits of new connections as soon as they receive an updated map. However, already-connected clients are effectively grandfathered in, and will misbehave if they do not support the new feature. Make sure when you upgrade your Ceph Storage Cluster daemons that you also update your Ceph clients. If the CRUSH tunables are set to non-legacy values and then later changed back to the legacy values, ceph-osd daemons will not be required to support the feature. However, the OSD peering process requires examining and understanding old maps. Therefore, you should not run old versions of the ceph-osd daemon if the cluster has previously used non-legacy CRUSH values, even if the latest version of the map has been switched back to using the legacy defaults. 2.8.1. Tuning CRUSH Before you tune CRUSH, you should ensure that all Ceph clients and all Ceph daemons use the same version. If you have recently upgraded, ensure that you have restarted daemons and reconnected clients. The simplest way to adjust the CRUSH tunables is by changing to a known profile. Those are: legacy : The legacy behavior from v0.47 (pre-Argonaut) and earlier. argonaut : The legacy values supported by v0.48 (Argonaut) release. bobtail : The values supported by the v0.56 (Bobtail) release. firefly : The values supported by the v0.80 (Firefly) release. hammer : The values supported by the v0.94 (Hammer) release. jewel : The values supported by the v10.0.2 (Jewel) release. optimal : The current best values. default : The current default values for a new cluster. You can select a profile on a running cluster with the command: Note This might result in some data movement. Generally, you should set the CRUSH tunables after you upgrade, or if you receive a warning. Starting with version v0.74, Ceph will issue a health warning if the CRUSH tunables are not set to their optimal values, the optimal values are the default as of v0.73. To make this warning go away, you have two options: Adjust the tunables on the existing cluster. Note that this will result in some data movement (possibly as much as 10%). This is the preferred route, but should be taken with care on a production cluster where the data movement might affect performance. You can enable optimal tunables with: If things go poorly (for example, too much load) and not very much progress has been made, or there is a client compatibility problem (old kernel cephfs or rbd clients, or pre-bobtail librados clients), you can switch back to an earlier profile: For example, to restore the pre-v0.48 (Argonaut) values, execute: You can make the warning go away without making any changes to CRUSH by adding the following option to the [mon] section of the ceph.conf file: For the change to take effect, restart the monitors, or apply the option to running monitors with: 2.8.2. Tuning CRUSH, The Hard Way If you can ensure that all clients are running recent code, you can adjust the tunables by extracting the CRUSH map, modifying the values, and reinjecting it into the cluster. Extract the latest CRUSH map: Adjust tunables. These values appear to offer the best behavior for both large and small clusters we tested with. You will need to additionally specify the --enable-unsafe-tunables argument to crushtool for this to work. Please use this option with extreme care.: Reinject modified map: 2.8.3. 
Legacy Values For reference, the legacy values for the CRUSH tunables can be set with: Again, the special --enable-unsafe-tunables option is required. Further, as noted above, be careful running old versions of the ceph-osd daemon after reverting to legacy values as the feature bit is not perfectly enforced. 2.9. Editing a CRUSH Map Generally, modifying your CRUSH map at runtime with the Ceph CLI is more convenient than editing the CRUSH map manually. However, there are times when you might choose to edit it, such as changing the default bucket types, or using a bucket algorithm other than straw2 . To edit an existing CRUSH map: Get the CRUSH map . Decompile the CRUSH map. Edit at least one of the devices, buckets, and rules. Recompile the CRUSH map. Set the CRUSH map . To activate a CRUSH Map rule for a specific pool, identify the common rule number and specify that rule number for the pool when creating the pool. 2.9.1. Getting a CRUSH Map To get the CRUSH map for your cluster, execute the following: Ceph will output (-o) a compiled CRUSH map to the file name you specified. Since the CRUSH map is in a compiled form, you must decompile it before you can edit it. 2.9.2. Decompiling a CRUSH Map To decompile a CRUSH map, execute the following: Ceph will decompile (-d) the compiled CRUSH map and output (-o) it to the file name you specified. 2.9.3. Compiling a CRUSH Map To compile a CRUSH map, execute the following: Ceph will store a compiled CRUSH map to the file name you specified. 2.9.4. Setting a CRUSH Map To set the CRUSH map for your cluster, execute the following: Ceph will input the compiled CRUSH map from the file name you specified as the CRUSH map for the cluster. 2.10. CRUSH Storage Strategy Examples Suppose you want most pools to default to OSDs backed by large hard drives, but you also want some pools mapped to OSDs backed by fast solid-state drives (SSDs). CRUSH can handle these scenarios easily. Use device classes. The process is simple: add a class to each device. For example: Then, create rules to use the devices. Finally, set pools to use the rules. There is no need to manually edit the CRUSH map, because one hierarchy can serve multiple classes of devices.
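To illustrate, a minimal end-to-end sketch of this workflow on the command line might look like the following. The OSD IDs, rule names, and pool names (cold_tier and hot_tier) are placeholders taken from the examples in this chapter, not objects that exist by default, so substitute the values for your own cluster.
# Tag each OSD with the device class that matches its backing drive
ceph osd crush set-device-class hdd osd.0 osd.1 osd.4 osd.5
ceph osd crush set-device-class ssd osd.2 osd.3 osd.6 osd.7
# Create one replicated rule per device class, with "host" as the failure domain
ceph osd crush rule create-replicated cold default host hdd
ceph osd crush rule create-replicated hot default host ssd
# Point each pool at the rule for the class of device it should use
ceph osd pool set cold_tier crush_rule cold
ceph osd pool set hot_tier crush_rule hot
After the pools are switched to the new rules, Ceph backfills the affected placement groups onto OSDs of the requested class, so expect some data movement on a cluster that already stores data.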
[ "ceph osd crush tree --show-shadow ID CLASS WEIGHT TYPE NAME -24 ssd 4.54849 root default~ssd -19 ssd 0.90970 host ceph01~ssd 8 ssd 0.90970 osd.8 -20 ssd 0.90970 host ceph02~ssd 7 ssd 0.90970 osd.7 -21 ssd 0.90970 host ceph03~ssd 3 ssd 0.90970 osd.3 -22 ssd 0.90970 host ceph04~ssd 5 ssd 0.90970 osd.5 -23 ssd 0.90970 host ceph05~ssd 6 ssd 0.90970 osd.6 -2 hdd 50.94173 root default~hdd -4 hdd 7.27739 host ceph01~hdd 10 hdd 7.27739 osd.10 -12 hdd 14.55478 host ceph02~hdd 0 hdd 7.27739 osd.0 12 hdd 7.27739 osd.12 -6 hdd 14.55478 host ceph03~hdd 4 hdd 7.27739 osd.4 11 hdd 7.27739 osd.11 -10 hdd 7.27739 host ceph04~hdd 1 hdd 7.27739 osd.1 -8 hdd 7.27739 host ceph05~hdd 2 hdd 7.27739 osd.2 -1 55.49022 root default -3 8.18709 host ceph01 10 hdd 7.27739 osd.10 8 ssd 0.90970 osd.8 -11 15.46448 host ceph02 0 hdd 7.27739 osd.0 12 hdd 7.27739 osd.12 7 ssd 0.90970 osd.7 -5 15.46448 host ceph03 4 hdd 7.27739 osd.4 11 hdd 7.27739 osd.11 3 ssd 0.90970 osd.3 -9 8.18709 host ceph04 1 hdd 7.27739 osd.1 5 ssd 0.90970 osd.5 -7 8.18709 host ceph05 2 hdd 7.27739 osd.2 6 ssd 0.90970 osd.6", "ceph osd crush set-device-class CLASS OSD_ID [ OSD_ID ]", "ceph osd crush set-device-class hdd osd.0 osd.1 osd.4 osd.5 ceph osd crush set-device-class ssd osd.2 osd.3 osd.6 osd.7", "ceph osd crush rule create-replicated RULE_NAME ROOT FAILURE_DOMAIN_TYPE CLASS", "ceph osd crush rule create-replicated cold default host hdd ceph osd crush rule create-replicated hot default host ssd", "ceph osd pool set POOL_NAME crush_rule RULE_NAME", "ceph osd pool set cold_tier crush_rule cold ceph osd pool set hot_tier crush_rule hot", "[bucket-type] [bucket-name] { id [a unique negative numeric ID] weight [the relative capacity/capability of the item(s)] alg [the bucket type: uniform | list | tree | straw2 ] hash [the hash type: 0 by default] item [item-name] weight [weight] }", "host node1 { id -1 alg straw2 hash 0 item osd.0 weight 1.00 item osd.1 weight 1.00 } host node2 { id -2 alg straw2 hash 0 item osd.2 weight 1.00 item osd.3 weight 1.00 } rack rack1 { id -3 alg straw2 hash 0 item node1 weight 2.00 item node2 weight 2.00 }", "root=default row=a rack=a2 chassis=a2a host=a2a1", "ceph osd crush add-bucket {name} {type}", "ceph osd crush add-bucket ssd-row1 row ceph osd crush add-bucket ssd-row1-rack1 rack ceph osd crush add-bucket ssd-row1-rack1-host1 host ceph osd crush add-bucket ssd-row1-rack1-host2 host ceph osd crush add-bucket hdd-row1 row ceph osd crush add-bucket hdd-row1-rack2 rack ceph osd crush add-bucket hdd-row1-rack1-host1 host ceph osd crush add-bucket hdd-row1-rack1-host2 host ceph osd crush add-bucket hdd-row1-rack1-host3 host ceph osd crush add-bucket hdd-row1-rack1-host4 host", "ceph osd tree", "ceph osd crush move ssd-row1 root=ssd-root ceph osd crush move ssd-row1-rack1 row=ssd-row1 ceph osd crush move ssd-row1-rack1-host1 rack=ssd-row1-rack1 ceph osd crush move ssd-row1-rack1-host2 rack=ssd-row1-rack1", "ceph osd tree", "ceph osd crush remove {bucket-name}", "ceph osd crush rm {bucket-name}", "ceph osd crush tree -f json-pretty", "[ { \"id\": -2, \"name\": \"ssd\", \"type\": \"root\", \"type_id\": 10, \"items\": [ { \"id\": -6, \"name\": \"dell-per630-11-ssd\", \"type\": \"host\", \"type_id\": 1, \"items\": [ { \"id\": 6, \"name\": \"osd.6\", \"type\": \"osd\", \"type_id\": 0, \"crush_weight\": 0.099991, \"depth\": 2 } ] }, { \"id\": -7, \"name\": \"dell-per630-12-ssd\", \"type\": \"host\", \"type_id\": 1, \"items\": [ { \"id\": 7, \"name\": \"osd.7\", \"type\": \"osd\", \"type_id\": 0, \"crush_weight\": 0.099991, 
\"depth\": 2 } ] }, { \"id\": -8, \"name\": \"dell-per630-13-ssd\", \"type\": \"host\", \"type_id\": 1, \"items\": [ { \"id\": 8, \"name\": \"osd.8\", \"type\": \"osd\", \"type_id\": 0, \"crush_weight\": 0.099991, \"depth\": 2 } ] } ] }, { \"id\": -1, \"name\": \"default\", \"type\": \"root\", \"type_id\": 10, \"items\": [ { \"id\": -3, \"name\": \"dell-per630-11\", \"type\": \"host\", \"type_id\": 1, \"items\": [ { \"id\": 0, \"name\": \"osd.0\", \"type\": \"osd\", \"type_id\": 0, \"crush_weight\": 0.449997, \"depth\": 2 }, { \"id\": 3, \"name\": \"osd.3\", \"type\": \"osd\", \"type_id\": 0, \"crush_weight\": 0.289993, \"depth\": 2 } ] }, { \"id\": -4, \"name\": \"dell-per630-12\", \"type\": \"host\", \"type_id\": 1, \"items\": [ { \"id\": 1, \"name\": \"osd.1\", \"type\": \"osd\", \"type_id\": 0, \"crush_weight\": 0.449997, \"depth\": 2 }, { \"id\": 4, \"name\": \"osd.4\", \"type\": \"osd\", \"type_id\": 0, \"crush_weight\": 0.289993, \"depth\": 2 } ] }, { \"id\": -5, \"name\": \"dell-per630-13\", \"type\": \"host\", \"type_id\": 1, \"items\": [ { \"id\": 2, \"name\": \"osd.2\", \"type\": \"osd\", \"type_id\": 0, \"crush_weight\": 0.449997, \"depth\": 2 }, { \"id\": 5, \"name\": \"osd.5\", \"type\": \"osd\", \"type_id\": 0, \"crush_weight\": 0.289993, \"depth\": 2 } ] } ] } ]", "ceph orch daemon add osd {host}:{device},[{device}]", "ceph osd crush add {id-or-name} {weight} [{bucket-type}={bucket-name} ...]", "ceph osd crush add osd.0 1.0 root=default datacenter=dc1 room=room1 row=foo rack=bar host=foo-bar-1", "ceph osd crush set {id-or-name} {weight} root={pool-name} [{bucket-type}={bucket-name} ...]", "ceph osd crush remove {name}", "ceph osd crush set-device-class <class> <osdId> [<osdId>...]", "ceph osd crush set-device-class hdd osd.0 osd.1 ceph osd crush set-device-class ssd osd.2 osd.3 ceph osd crush set-device-class bucket-index osd.4", "ceph osd crush rm-device-class <class> <osdId> [<osdId>...]", "ceph osd crush rm-device-class hdd osd.0 osd.1 ceph osd crush rm-device-class ssd osd.2 osd.3 ceph osd crush rm-device-class bucket-index osd.4", "ceph osd crush class rename <oldName> <newName>", "ceph osd crush class rename hdd sas15k", "ceph osd crush class ls", "[ \"hdd\", \"ssd\", \"bucket-index\" ]", "ceph osd crush class ls-osd <class>", "ceph osd crush class ls-osd hdd", "0 1 2 3 4 5 6", "ceph osd crush rule ls-by-class <class>", "ceph osd crush rule ls-by-class hdd", "ceph osd crush reweight {name} {weight}", "osd crush reweight-subtree <name>", "ceph osd reweight {id} {weight}", "ceph osd reweight-by-utilization [threshold] [weight_change_amount] [number_of_OSDs] [--no-increasing]", "ceph osd test-reweight-by-utilization 110 .5 4 --no-increasing", "osd reweight-by-pg <poolname>", "osd crush reweight-all", "ceph osd primary-affinity <osd-id> <weight>", "rule <rulename> { id <unique number> type [replicated | erasure] min_size <min-size> max_size <max-size> step take <bucket-type> [class <class-name>] step [choose|chooseleaf] [firstn|indep] <N> <bucket-type> step emit }", "ceph osd crush rule list ceph osd crush rule ls", "ceph osd crush rule dump {name}", "ceph osd crush rule create-simple {rulename} {root} {bucket-type} {firstn|indep}", "ceph osd crush rule create-simple deleteme default host firstn", "{ \"id\": 1, \"rule_name\": \"deleteme\", \"type\": 1, \"min_size\": 1, \"max_size\": 10, \"steps\": [ { \"op\": \"take\", \"item\": -1, \"item_name\": \"default\"}, { \"op\": \"chooseleaf_firstn\", \"num\": 0, \"type\": \"host\"}, { \"op\": \"emit\"}]}", "ceph osd crush rule 
create-replicated <name> <root> <failure-domain> <class>", "ceph osd crush rule create-replicated fast default host ssd", "ceph osd crush rule create-erasure {rulename} {profilename}", "ceph osd crush rule create-erasure default default", "ceph osd crush rule rm {name}", "ceph osd crush tunables <profile>", "ceph osd crush tunables optimal", "ceph osd crush tunables <profile>", "ceph osd crush tunables legacy", "mon warn on legacy crush tunables = false", "ceph tell mon.\\* injectargs --no-mon-warn-on-legacy-crush-tunables", "ceph osd getcrushmap -o /tmp/crush", "crushtool -i /tmp/crush --set-choose-local-tries 0 --set-choose-local-fallback-tries 0 --set-choose-total-tries 50 -o /tmp/crush.new", "ceph osd setcrushmap -i /tmp/crush.new", "crushtool -i /tmp/crush --set-choose-local-tries 2 --set-choose-local-fallback-tries 5 --set-choose-total-tries 19 --set-chooseleaf-descend-once 0 --set-chooseleaf-vary-r 0 -o /tmp/crush.legacy", "ceph osd getcrushmap -o {compiled-crushmap-filename}", "crushtool -d {compiled-crushmap-filename} -o {decompiled-crushmap-filename}", "crushtool -c {decompiled-crush-map-filename} -o {compiled-crush-map-filename}", "ceph osd setcrushmap -i {compiled-crushmap-filename}", "ceph osd crush set-device-class <class> <osdId> [<osdId>] ceph osd crush set-device-class hdd osd.0 osd.1 osd.4 osd.5 ceph osd crush set-device-class ssd osd.2 osd.3 osd.6 osd.7", "ceph osd crush rule create-replicated <rule-name> <root> <failure-domain-type> <device-class>: ceph osd crush rule create-replicated cold default host hdd ceph osd crush rule create-replicated hot default host ssd", "ceph osd pool set <poolname> crush_rule <rule-name> ceph osd pool set cold crush_rule hdd ceph osd pool set hot crush_rule ssd", "device 0 osd.0 class hdd device 1 osd.1 class hdd device 2 osd.2 class ssd device 3 osd.3 class ssd device 4 osd.4 class hdd device 5 osd.5 class hdd device 6 osd.6 class ssd device 7 osd.7 class ssd host ceph-osd-server-1 { id -1 alg straw2 hash 0 item osd.0 weight 1.00 item osd.1 weight 1.00 item osd.2 weight 1.00 item osd.3 weight 1.00 } host ceph-osd-server-2 { id -2 alg straw2 hash 0 item osd.4 weight 1.00 item osd.5 weight 1.00 item osd.6 weight 1.00 item osd.7 weight 1.00 } root default { id -3 alg straw2 hash 0 item ceph-osd-server-1 weight 4.00 item ceph-osd-server-2 weight 4.00 } rule cold { ruleset 0 type replicated min_size 2 max_size 11 step take default class hdd step chooseleaf firstn 0 type host step emit } rule hot { ruleset 1 type replicated min_size 2 max_size 11 step take default class ssd step chooseleaf firstn 0 type host step emit }" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/storage_strategies_guide/crush_administration
Chapter 5. Creating an instance
Chapter 5. Creating an instance Before you can create an instance, other Red Hat OpenStack Platform (RHOSP) components must be available, such as the flavor, boot source, network, key pair, and security group. These components are used in the creation of an instance and are not available by default. When you create an instance, you choose a boot source that has the bootable operating system that you require for your instance, a flavor that has the hardware profile you require for your instance, the network you want to connect your instance to, and any additional storage you need, such as data volumes and ephemeral storage. Note With API microversion 2.94, if you pass an optional hostname when creating, updating, or rebuilding an instance, you can use Fully Qualified Domain Names (FQDN) when you specify the hostname. When using FQDN, ensure that [api]dhcp_domain configuration option is set to the empty string for the correct FQDN to appear in the hostname field in the metadata API. By default, the hostname is normalized from the display name and all occurrences of "." are removed from the hostname and replaced with "_". Note To execute openstack client commands on the cloud, you must specify the name of the cloud detailed in your clouds.yaml file. You can specify the name of the cloud by using one of the following methods: Use the --os-cloud option with each command, for example: Use this option if you access more than one cloud. Create an environment variable for the cloud name in your bashrc file: 5.1. Prerequisites The required image or volume is available as the boot source: For more information about how to create an image, see Performing operations with the Image service (glance) in the Performing storage operations guide. For more information about how to create a volume, see Creating Block Storage volumes in the Performing storage operations guide. For more information about the options available for the boot source of an instance, see Instance boot source . A flavor is available that specifies the required number of CPUs, memory, and storage capacity. The flavor settings must meet the minimum requirements for disk and memory size specified by your chosen image, otherwise the instance will fail to launch. The required network is available. For more information about how to create a network, see Creating a network in the Managing networking resources guide. 5.2. Creating an instance from an image You can create an instance by using an image as the boot source. Note With API microversion 2.94, if you pass an optional hostname when creating, updating, or rebuilding an instance, you can use Fully Qualified Domain Names (FQDN) when you specify the hostname. When using FQDN, ensure that [api]dhcp_domain configuration option is set to the empty string for the correct FQDN to appear in the hostname field in the metadata API. By default, the hostname is normalized from the display name and all occurrences of "." are removed from the hostname and replaced with "_". Prerequisites The administrator has created a project for you and they have provided you with a clouds.yaml file for you to access the cloud. You have installed the python-openstackclient package. Procedure Retrieve the name or ID of the flavor that has the hardware profile that your instance requires: Note Choose a flavor with sufficient size for the image to successfully boot, otherwise the instance will fail to launch. 
Retrieve the name or ID of the image that has the software profile that your instance requires: If the image that you require is not available, you can download or create a new image. For information about how to create or download cloud images, see Creating OS images in the Performing storage operations guide. Note If you need to attach more than 26 volumes to your instance, then the image you use to create your instance must have the following properties: hw_scsi_model=virtio-scsi hw_disk_bus=scsi Retrieve the name or ID of the network that you want to connect your instance to: Create your instance: Replace <flavor> with the name or ID of the flavor that you retrieved in step 1. Replace <image> with the name or ID of the image that you retrieved in step 2. Replace <network> with the name or ID of the network that you retrieved in step 3. You can use the --network option more than once to connect your instance to several networks, as required. 5.3. Creating an instance from a bootable volume You can create an instance by using a bootable volume as the boot source. Boot your instance from a volume when you need to improve the availability of the instance data in the event of a failure. Note When you use a block storage volume for your instance disk data, the block storage volume persists for any instance rebuilds, even when an instance is rebuilt with a new image that requests that a new volume is created. Note With API microversion 2.94, if you pass an optional instance hostname when creating, updating, or rebuilding a server, you can use Fully Qualified Domain Names (FQDN) when you specify the hostname. When using FQDN, ensure that [api]dhcp_domain configuration option is set to the empty string for the correct FQDN to appear in the hostname field in the metadata API. By default, the hostname is normalized from the display name and all occurances of "." are removed from the hostname and replaced with "_". Prerequisites The administrator has created a project for you and they have provided you with a clouds.yaml file for you to access the cloud. You have installed the python-openstackclient package. Procedure Retrieve the name or ID of the image that has the software profile that your instance requires: If the image that you require is not available, you can download or create a new image. For information about how to create or download cloud images, see Creating RHEL KVM images in Performing storage operations . Note If you need to attach more than 26 volumes to your instance, then the image you use to create your instance must have the following properties: hw_scsi_model=virtio-scsi hw_disk_bus=scsi Create a bootable volume from the image: Replace <image> with the name or ID of the image to write to the volume, retrieved in step 1. Replace <size_gb> with the size of the volume in GB. Retrieve the name or ID of the flavor that has the hardware profile that your instance requires: Retrieve the name or ID of the network that you want to connect your instance to: Create an instance with the bootable volume: Replace <flavor> with the name or ID of the flavor that you retrieved in step 3. Replace <network> with the name or ID of the network that you retrieved in step 4. You can use the --network option more than once to connect your instance to several networks, as required. 5.4. Creating an instance with a SR-IOV network interface To create an instance with a single root I/O virtualization (SR-IOV) network interface you need to create the required SR-IOV port. 
Note With API microversion 2.94, if you pass an optional hostname when creating, updating, or rebuilding an instance, you can use Fully Qualified Domain Names (FQDN) when you specify the hostname. When using FQDN, ensure that [api]dhcp_domain configuration option is set to the empty string for the correct FQDN to appear in the hostname field in the metadata API. By default, the hostname is normalized from the display name and all occurrences of "." are removed from the hostname and replaced with "_". Prerequisites The administrator has created a project for you and they have provided you with a clouds.yaml file for you to access the cloud. You have installed the python-openstackclient package. Procedure Retrieve the name or ID of the flavor that has the hardware profile that your instance requires: Note Choose a flavor with sufficient size for the image to successfully boot, otherwise the instance will fail to launch. Tip You can specify the NUMA affinity policy that is applied to your instance for PCI passthrough devices and SR-IOV interfaces, by selecting a flavor that has the policy you require. For more information on the available policies, see Instance PCI NUMA affinity policy in Flavor metadata in the Configuring the Compute service for instance creation guide. If you choose a flavor with a NUMA affinity policy, then the image that you use must have either the same NUMA affinity policy or no NUMA affinity policy. Retrieve the name or ID of the image that has the software profile that your instance requires: If the image that you require is not available, you can download or create a new image. For information about how to create or download cloud images, see Creating RHEL KVM images in Performing storage operations . Tip You can specify the NUMA affinity policy that is applied to your instance for PCI passthrough devices and SR-IOV interfaces, by selecting an image that has the policy you require. For more information on the available policies, see Instance PCI NUMA affinity policy in Flavor metadata in the Configuring the Compute service for instance creation guide. If you choose an image with a NUMA affinity policy, then the flavor that you use must have either the same NUMA affinity policy or no NUMA affinity policy. Retrieve the name or ID of the network that you want to connect your instance to: Create the type of port that you require for your SR-IOV interface: Replace <network> with the name or ID of the network you retrieved in step 3. Replace <vnic_type> with the one of the following values: direct : Creates a direct mode SR-IOV virtual function (VF) port. direct-physical : Creates a direct mode SR-IOV physical function (PF) port. macvtap : Creates an indirect mode SR-IOV VF port that uses MacVTap to expose the virtio interface to the instance. Create your instance: Replace <flavor> with the name or ID of the flavor that you retrieved in step 1. Replace <image> with the name or ID of the image that you retrieved in step 2. Replace <port> with the name or ID of the port that you created in step 4. 5.5. Creating an instance with NUMA affinity on the port To create an instance with NUMA affinity on the port you create the port with the required NUMA affinity policy, then specify the port when creating the instance. Note Port NUMA affinity policies have a higher precedence than flavors, images, and PCI NUMA affinity policies. Cloud operators can set a default NUMA affinity policy for each PCI passthrough device. 
You can use the instance flavor, image, or port to override the default NUMA affinity policy applied to an instance. Note With API microversion 2.94, if you pass an optional hostname when creating, updating, or rebuilding an instance, you can use Fully Qualified Domain Names (FQDN) when you specify the hostname. When using FQDN, ensure that [api]dhcp_domain configuration option is set to the empty string for the correct FQDN to appear in the hostname field in the metadata API. By default, the hostname is normalized from the display name and all occurrences of "." are removed from the hostname and replaced with "_". Prerequisites The port-numa-affinity-policy extension must be enabled in the cloud platform. The service plugin must be configured in the Networking service (neutron). The administrator has created a project for you and they have provided you with a clouds.yaml file for you to access the cloud. You have installed the python-openstackclient package. Procedure Create a port with the NUMA affinity policy that you require: Replace <network> with the name or ID of the tenant network that you want to connect your instance to. Use one of the following options to specify the NUMA affinity policy to apply to the port: --numa-policy-required - NUMA affinity policy required to schedule this port. --numa-policy-preferred - NUMA affinity policy preferred to schedule this port. --numa-policy-legacy - NUMA affinity policy using legacy mode to schedule this port. Create your instance: Replace <flavor> with the name or ID of the flavor that has the hardware profile that you require for your instance. Replace <image> with the name or ID of the image that has the software profile that you require for your instance. Replace <port> with the name or ID of the port that you created in step 1. 5.6. Additional resources Creating a customized instance
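As an illustration, a minimal sketch of the port-level NUMA affinity workflow might look like the following. The network, flavor, and image names are hypothetical placeholders, and the commands assume the OS_CLOUD environment variable has been exported as described at the start of this chapter.
# Create a port on the tenant network with the preferred NUMA affinity policy
openstack port create --network private-net --numa-policy-preferred myNUMAAffinityPort
# Boot the instance with the pre-created port attached
openstack server create --flavor m1.large --image rhel9-cloud --port myNUMAAffinityPort --wait myNUMAAffinityInstance
Because the policy is set on the port, it takes precedence over any NUMA affinity policy carried by the flavor or image, which is why no additional flavor or image properties are needed in this sketch.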
[ "openstack flavor list --os-cloud <cloud_name>", "`export OS_CLOUD=<cloud_name>`", "openstack flavor list", "openstack image list", "openstack network list", "openstack server create --flavor <flavor> --image <image> --network <network> --wait myInstanceFromImage", "openstack image list", "openstack volume create --image <image> --size <size_gb> --bootable myBootableVolume", "openstack flavor list", "openstack network list", "openstack server create --flavor <flavor> --volume myBootableVolume --network <network> --wait myInstanceFromVolume", "openstack flavor list", "openstack image list", "openstack network list", "openstack port create --network <network> --vnic-type <vnic_type> mySriovPort", "openstack server create --flavor <flavor> --image <image> --port <port> --wait mySriovInstance", "openstack port create --network <network> [--numa-policy-required | --numa-policy-preferred | --numa-policy-legacy] myNUMAAffinityPort", "openstack server create --flavor <flavor> --image <image> --port <port> --wait myNUMAAffinityInstance" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/creating_and_managing_instances/assembly_creating-an-instance_osp
Chapter 2. Configuring a private cluster
Chapter 2. Configuring a private cluster After you install an OpenShift Container Platform version 4.16 cluster, you can set some of its core components to be private. 2.1. About private clusters By default, OpenShift Container Platform is provisioned using publicly-accessible DNS and endpoints. You can set the DNS, Ingress Controller, and API server to private after you deploy your private cluster. Important If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private. DNS If you install OpenShift Container Platform on installer-provisioned infrastructure, the installation program creates records in a pre-existing public zone and, where possible, creates a private zone for the cluster's own DNS resolution. In both the public zone and the private zone, the installation program or cluster creates DNS entries for *.apps , for the Ingress object, and api , for the API server. The *.apps records in the public and private zone are identical, so when you delete the public zone, the private zone seamlessly provides all DNS resolution for the cluster. Ingress Controller Because the default Ingress object is created as public, the load balancer is internet-facing and in the public subnets. The Ingress Operator generates a default certificate for an Ingress Controller to serve as a placeholder until you configure a custom default certificate. Do not use Operator-generated default certificates in production clusters. The Ingress Operator does not rotate its own signing certificate or the default certificates that it generates. Operator-generated default certificates are intended as placeholders for custom default certificates that you configure. API server By default, the installation program creates appropriate network load balancers for the API server to use for both internal and external traffic. On Amazon Web Services (AWS), separate public and private load balancers are created. The load balancers are identical except that an additional port is available on the internal one for use within the cluster. Although the installation program automatically creates or destroys the load balancer based on API server requirements, the cluster does not manage or maintain them. As long as you preserve the cluster's access to the API server, you can manually modify or move the load balancers. For the public load balancer, port 6443 is open and the health check is configured for HTTPS against the /readyz path. On Google Cloud Platform, a single load balancer is created to manage both internal and external API traffic, so you do not need to modify the load balancer. On Microsoft Azure, both public and private load balancers are created. However, because of limitations in current implementation, you just retain both load balancers in a private cluster. 2.2. Configuring DNS records to be published in a private zone For all OpenShift Container Platform clusters, whether public or private, DNS records are published in a public zone by default. You can remove the public zone from the cluster DNS configuration to avoid exposing DNS records to the public. You might want to avoid exposing sensitive information, such as internal domain names, internal IP addresses, or the number of clusters at an organization, or you might simply have no need to publish records publicly. 
If all the clients that should be able to connect to services within the cluster use a private DNS service that has the DNS records from the private zone, then there is no need to have a public DNS record for the cluster. After you deploy a cluster, you can modify its DNS to use only a private zone by modifying the DNS custom resource (CR). Modifying the DNS CR in this way means that any DNS records that are subsequently created are not published to public DNS servers, which keeps knowledge of the DNS records isolated to internal users. This can be done when you configure the cluster to be private, or if you never want DNS records to be publicly resolvable. Alternatively, even in a private cluster, you might keep the public zone for DNS records because it allows clients to resolve DNS names for applications running on that cluster. For example, an organization can have machines that connect to the public internet and then establish VPN connections for certain private IP ranges in order to connect to private IP addresses. The DNS lookups from these machines use the public DNS to determine the private addresses of those services, and then connect to the private addresses over the VPN. Procedure Review the DNS CR for your cluster by running the following command and observing the output: USD oc get dnses.config.openshift.io/cluster -o yaml Example output apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: "2019-10-25T18:27:09Z" generation: 2 name: cluster resourceVersion: "37966" selfLink: /apis/config.openshift.io/v1/dnses/cluster uid: 0e714746-f755-11f9-9cb1-02ff55d8f976 spec: baseDomain: <base_domain> privateZone: tags: Name: <infrastructure_id>-int kubernetes.io/cluster/<infrastructure_id>: owned publicZone: id: Z2XXXXXXXXXXA4 status: {} Note that the spec section contains both a private and a public zone. Patch the DNS CR to remove the public zone by running the following command: USD oc patch dnses.config.openshift.io/cluster --type=merge --patch='{"spec": {"publicZone": null}}' Example output dns.config.openshift.io/cluster patched The Ingress Operator consults the DNS CR definition when it creates DNS records for IngressController objects. If only private zones are specified, only private records are created. Important Existing DNS records are not modified when you remove the public zone. You must manually delete previously published public DNS records if you no longer want them to be published publicly. Verification Review the DNS CR for your cluster and confirm that the public zone was removed, by running the following command and observing the output: USD oc get dnses.config.openshift.io/cluster -o yaml Example output apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: "2019-10-25T18:27:09Z" generation: 2 name: cluster resourceVersion: "37966" selfLink: /apis/config.openshift.io/v1/dnses/cluster uid: 0e714746-f755-11f9-9cb1-02ff55d8f976 spec: baseDomain: <base_domain> privateZone: tags: Name: <infrastructure_id>-int kubernetes.io/cluster/<infrastructure_id>-wfpg4: owned status: {} 2.3. Setting the Ingress Controller to private After you deploy a cluster, you can modify its Ingress Controller to use only a private zone. 
Procedure Modify the default Ingress Controller to use only an internal endpoint: USD oc replace --force --wait --filename - <<EOF apiVersion: operator.openshift.io/v1 kind: IngressController metadata: namespace: openshift-ingress-operator name: default spec: endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: Internal EOF Example output ingresscontroller.operator.openshift.io "default" deleted ingresscontroller.operator.openshift.io/default replaced The public DNS entry is removed, and the private zone entry is updated. 2.4. Restricting the API server to private After you deploy a cluster to Amazon Web Services (AWS) or Microsoft Azure, you can reconfigure the API server to use only the private zone. Prerequisites Install the OpenShift CLI ( oc ). Have access to the web console as a user with admin privileges. Procedure In the web portal or console for your cloud provider, take the following actions: Locate and delete the appropriate load balancer component: For AWS, delete the external load balancer. The API DNS entry in the private zone already points to the internal load balancer, which uses an identical configuration, so you do not need to modify the internal load balancer. For Azure, delete the api-internal-v4 rule for the public load balancer. For Azure, configure the Ingress Controller endpoint publishing scope to Internal . For more information, see "Configuring the Ingress Controller endpoint publishing scope to Internal". For the Azure public load balancer, if you configure the Ingress Controller endpoint publishing scope to Internal and there are no existing inbound rules in the public load balancer, you must create an outbound rule explicitly to provide outbound traffic for the backend address pool. For more information, see the Microsoft Azure documentation about adding outbound rules. Delete the api.USDclustername.USDyourdomain or api.USDclustername DNS entry in the public zone. AWS clusters: Remove the external load balancers: Important You can run the following steps only for an installer-provisioned infrastructure (IPI) cluster. For a user-provisioned infrastructure (UPI) cluster, you must manually remove or disable the external load balancers. If your cluster uses a control plane machine set, delete the lines in the control plane machine set custom resource that configure your public or external load balancer: # ... providerSpec: value: # ... loadBalancers: - name: lk4pj-ext 1 type: network 2 - name: lk4pj-int type: network # ... 1 Delete the name value for the external load balancer, which ends in -ext . 2 Delete the type value for the external load balancer. If your cluster does not use a control plane machine set, you must delete the external load balancers from each control plane machine. From your terminal, list the cluster machines by running the following command: USD oc get machine -n openshift-machine-api Example output NAME STATE TYPE REGION ZONE AGE lk4pj-master-0 running m4.xlarge us-east-1 us-east-1a 17m lk4pj-master-1 running m4.xlarge us-east-1 us-east-1b 17m lk4pj-master-2 running m4.xlarge us-east-1 us-east-1a 17m lk4pj-worker-us-east-1a-5fzfj running m4.xlarge us-east-1 us-east-1a 15m lk4pj-worker-us-east-1a-vbghs running m4.xlarge us-east-1 us-east-1a 15m lk4pj-worker-us-east-1b-zgpzg running m4.xlarge us-east-1 us-east-1b 15m The control plane machines contain master in the name. 
Remove the external load balancer from each control plane machine: Edit a control plane machine object to by running the following command: USD oc edit machines -n openshift-machine-api <control_plane_name> 1 1 Specify the name of the control plane machine object to modify. Remove the lines that describe the external load balancer, which are marked in the following example: # ... providerSpec: value: # ... loadBalancers: - name: lk4pj-ext 1 type: network 2 - name: lk4pj-int type: network # ... 1 Delete the name value for the external load balancer, which ends in -ext . 2 Delete the type value for the external load balancer. Save your changes and exit the object specification. Repeat this process for each of the control plane machines. Additional resources Configuring the Ingress Controller endpoint publishing scope to Internal 2.5. Configuring a private storage endpoint on Azure You can leverage the Image Registry Operator to use private endpoints on Azure, which enables seamless configuration of private storage accounts when OpenShift Container Platform is deployed on private Azure clusters. This allows you to deploy the image registry without exposing public-facing storage endpoints. Important Do not configure a private storage endpoint on Microsoft Azure Red Hat OpenShift (ARO), because the endpoint can put your Microsoft Azure Red Hat OpenShift cluster in an unrecoverable state. You can configure the Image Registry Operator to use private storage endpoints on Azure in one of two ways: By configuring the Image Registry Operator to discover the VNet and subnet names With user-provided Azure Virtual Network (VNet) and subnet names 2.5.1. Limitations for configuring a private storage endpoint on Azure The following limitations apply when configuring a private storage endpoint on Azure: When configuring the Image Registry Operator to use a private storage endpoint, public network access to the storage account is disabled. Consequently, pulling images from the registry outside of OpenShift Container Platform only works by setting disableRedirect: true in the registry Operator configuration. With redirect enabled, the registry redirects the client to pull images directly from the storage account, which will no longer work due to disabled public network access. For more information, see "Disabling redirect when using a private storage endpoint on Azure". This operation cannot be undone by the Image Registry Operator. 2.5.2. Configuring a private storage endpoint on Azure by enabling the Image Registry Operator to discover VNet and subnet names The following procedure shows you how to set up a private storage endpoint on Azure by configuring the Image Registry Operator to discover VNet and subnet names. Prerequisites You have configured the image registry to run on Azure. Your network has been set up using the Installer Provisioned Infrastructure installation method. For users with a custom network setup, see "Configuring a private storage endpoint on Azure with user-provided VNet and subnet names". Procedure Edit the Image Registry Operator config object and set networkAccess.type to Internal : USD oc edit configs.imageregistry/cluster # ... spec: # ... storage: azure: # ... networkAccess: type: Internal # ... Optional: Enter the following command to confirm that the Operator has completed provisioning. This might take a few minutes. 
USD oc get configs.imageregistry/cluster -o=jsonpath="{.spec.storage.azure.privateEndpointName}" -w Optional: If the registry is exposed by a route, and you are configuring your storage account to be private, you must disable redirect if you want pulls external to the cluster to continue to work. Enter the following command to disable redirect on the Image Operator configuration: USD oc patch configs.imageregistry cluster --type=merge -p '{"spec":{"disableRedirect": true}}' Note When redirect is enabled, pulling images from outside of the cluster will not work. Verification Fetch the registry service name by running the following command: USD oc registry info --internal=true Example output image-registry.openshift-image-registry.svc:5000 Enter debug mode by running the following command: USD oc debug node/<node_name> Run the suggested chroot command. For example: USD chroot /host Enter the following command to log in to your container registry: USD podman login --tls-verify=false -u unused -p USD(oc whoami -t) image-registry.openshift-image-registry.svc:5000 Example output Login Succeeded! Enter the following command to verify that you can pull an image from the registry: USD podman pull --tls-verify=false image-registry.openshift-image-registry.svc:5000/openshift/tools Example output Trying to pull image-registry.openshift-image-registry.svc:5000/openshift/tools/openshift/tools... Getting image source signatures Copying blob 6b245f040973 done Copying config 22667f5368 done Writing manifest to image destination Storing signatures 22667f53682a2920948d19c7133ab1c9c3f745805c14125859d20cede07f11f9 2.5.3. Configuring a private storage endpoint on Azure with user-provided VNet and subnet names Use the following procedure to configure a storage account that has public network access disabled and is exposed behind a private storage endpoint on Azure. Prerequisites You have configured the image registry to run on Azure. You must know the VNet and subnet names used for your Azure environment. If your network was configured in a separate resource group in Azure, you must also know its name. Procedure Edit the Image Registry Operator config object and configure the private endpoint using your VNet and subnet names: USD oc edit configs.imageregistry/cluster # ... spec: # ... storage: azure: # ... networkAccess: type: Internal internal: subnetName: <subnet_name> vnetName: <vnet_name> networkResourceGroupName: <network_resource_group_name> # ... Optional: Enter the following command to confirm that the Operator has completed provisioning. This might take a few minutes. USD oc get configs.imageregistry/cluster -o=jsonpath="{.spec.storage.azure.privateEndpointName}" -w Note When redirect is enabled, pulling images from outside of the cluster will not work. Verification Fetch the registry service name by running the following command: USD oc registry info --internal=true Example output image-registry.openshift-image-registry.svc:5000 Enter debug mode by running the following command: USD oc debug node/<node_name> Run the suggested chroot command. For example: USD chroot /host Enter the following command to log in to your container registry: USD podman login --tls-verify=false -u unused -p USD(oc whoami -t) image-registry.openshift-image-registry.svc:5000 Example output Login Succeeded! 
Enter the following command to verify that you can pull an image from the registry: USD podman pull --tls-verify=false image-registry.openshift-image-registry.svc:5000/openshift/tools Example output Trying to pull image-registry.openshift-image-registry.svc:5000/openshift/tools/openshift/tools... Getting image source signatures Copying blob 6b245f040973 done Copying config 22667f5368 done Writing manifest to image destination Storing signatures 22667f53682a2920948d19c7133ab1c9c3f745805c14125859d20cede07f11f9 2.5.4. Optional: Disabling redirect when using a private storage endpoint on Azure By default, redirect is enabled when using the image registry. Redirect allows off-loading of traffic from the registry pods into the object storage, which makes pull faster. When redirect is enabled and the storage account is private, users from outside of the cluster are unable to pull images from the registry. In some cases, users might want to disable redirect so that users from outside of the cluster can pull images from the registry. Use the following procedure to disable redirect. Prerequisites You have configured the image registry to run on Azure. You have configured a route. Procedure Enter the following command to disable redirect on the image registry configuration: USD oc patch configs.imageregistry cluster --type=merge -p '{"spec":{"disableRedirect": true}}' Verification Fetch the registry service name by running the following command: USD oc registry info Example output default-route-openshift-image-registry.<cluster_dns> Enter the following command to log in to your container registry: USD podman login --tls-verify=false -u unused -p USD(oc whoami -t) default-route-openshift-image-registry.<cluster_dns> Example output Login Succeeded! Enter the following command to verify that you can pull an image from the registry: USD podman pull --tls-verify=false default-route-openshift-image-registry.<cluster_dns> /openshift/tools Example output Trying to pull default-route-openshift-image-registry.<cluster_dns>/openshift/tools... Getting image source signatures Copying blob 6b245f040973 done Copying config 22667f5368 done Writing manifest to image destination Storing signatures 22667f53682a2920948d19c7133ab1c9c3f745805c14125859d20cede07f11f9
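As a quick cross-check after working through this chapter, a sketch such as the following can confirm that the DNS and Ingress Controller changes described earlier took effect. The jsonpath expressions are illustrative, not the only way to inspect these resources.
# Expect empty output once the public zone has been removed from the cluster DNS configuration
oc get dnses.config.openshift.io/cluster -o jsonpath='{.spec.publicZone}'
# Expect "Internal" once the default Ingress Controller uses only an internal endpoint
oc get ingresscontroller/default -n openshift-ingress-operator -o jsonpath='{.spec.endpointPublishingStrategy.loadBalancer.scope}'
If either command returns an unexpected value, revisit the corresponding section of this chapter before restricting the API server or configuring the private storage endpoint.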
[ "oc get dnses.config.openshift.io/cluster -o yaml", "apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: \"2019-10-25T18:27:09Z\" generation: 2 name: cluster resourceVersion: \"37966\" selfLink: /apis/config.openshift.io/v1/dnses/cluster uid: 0e714746-f755-11f9-9cb1-02ff55d8f976 spec: baseDomain: <base_domain> privateZone: tags: Name: <infrastructure_id>-int kubernetes.io/cluster/<infrastructure_id>: owned publicZone: id: Z2XXXXXXXXXXA4 status: {}", "oc patch dnses.config.openshift.io/cluster --type=merge --patch='{\"spec\": {\"publicZone\": null}}'", "dns.config.openshift.io/cluster patched", "oc get dnses.config.openshift.io/cluster -o yaml", "apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: \"2019-10-25T18:27:09Z\" generation: 2 name: cluster resourceVersion: \"37966\" selfLink: /apis/config.openshift.io/v1/dnses/cluster uid: 0e714746-f755-11f9-9cb1-02ff55d8f976 spec: baseDomain: <base_domain> privateZone: tags: Name: <infrastructure_id>-int kubernetes.io/cluster/<infrastructure_id>-wfpg4: owned status: {}", "oc replace --force --wait --filename - <<EOF apiVersion: operator.openshift.io/v1 kind: IngressController metadata: namespace: openshift-ingress-operator name: default spec: endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: Internal EOF", "ingresscontroller.operator.openshift.io \"default\" deleted ingresscontroller.operator.openshift.io/default replaced", "providerSpec: value: loadBalancers: - name: lk4pj-ext 1 type: network 2 - name: lk4pj-int type: network", "oc get machine -n openshift-machine-api", "NAME STATE TYPE REGION ZONE AGE lk4pj-master-0 running m4.xlarge us-east-1 us-east-1a 17m lk4pj-master-1 running m4.xlarge us-east-1 us-east-1b 17m lk4pj-master-2 running m4.xlarge us-east-1 us-east-1a 17m lk4pj-worker-us-east-1a-5fzfj running m4.xlarge us-east-1 us-east-1a 15m lk4pj-worker-us-east-1a-vbghs running m4.xlarge us-east-1 us-east-1a 15m lk4pj-worker-us-east-1b-zgpzg running m4.xlarge us-east-1 us-east-1b 15m", "oc edit machines -n openshift-machine-api <control_plane_name> 1", "providerSpec: value: loadBalancers: - name: lk4pj-ext 1 type: network 2 - name: lk4pj-int type: network", "oc edit configs.imageregistry/cluster", "spec: # storage: azure: # networkAccess: type: Internal", "oc get configs.imageregistry/cluster -o=jsonpath=\"{.spec.storage.azure.privateEndpointName}\" -w", "oc patch configs.imageregistry cluster --type=merge -p '{\"spec\":{\"disableRedirect\": true}}'", "oc registry info --internal=true", "image-registry.openshift-image-registry.svc:5000", "oc debug node/<node_name>", "chroot /host", "podman login --tls-verify=false -u unused -p USD(oc whoami -t) image-registry.openshift-image-registry.svc:5000", "Login Succeeded!", "podman pull --tls-verify=false image-registry.openshift-image-registry.svc:5000/openshift/tools", "Trying to pull image-registry.openshift-image-registry.svc:5000/openshift/tools/openshift/tools Getting image source signatures Copying blob 6b245f040973 done Copying config 22667f5368 done Writing manifest to image destination Storing signatures 22667f53682a2920948d19c7133ab1c9c3f745805c14125859d20cede07f11f9", "oc edit configs.imageregistry/cluster", "spec: # storage: azure: # networkAccess: type: Internal internal: subnetName: <subnet_name> vnetName: <vnet_name> networkResourceGroupName: <network_resource_group_name>", "oc get configs.imageregistry/cluster -o=jsonpath=\"{.spec.storage.azure.privateEndpointName}\" -w", "oc registry info --internal=true", 
"image-registry.openshift-image-registry.svc:5000", "oc debug node/<node_name>", "chroot /host", "podman login --tls-verify=false -u unused -p USD(oc whoami -t) image-registry.openshift-image-registry.svc:5000", "Login Succeeded!", "podman pull --tls-verify=false image-registry.openshift-image-registry.svc:5000/openshift/tools", "Trying to pull image-registry.openshift-image-registry.svc:5000/openshift/tools/openshift/tools Getting image source signatures Copying blob 6b245f040973 done Copying config 22667f5368 done Writing manifest to image destination Storing signatures 22667f53682a2920948d19c7133ab1c9c3f745805c14125859d20cede07f11f9", "oc patch configs.imageregistry cluster --type=merge -p '{\"spec\":{\"disableRedirect\": true}}'", "oc registry info", "default-route-openshift-image-registry.<cluster_dns>", "podman login --tls-verify=false -u unused -p USD(oc whoami -t) default-route-openshift-image-registry.<cluster_dns>", "Login Succeeded!", "podman pull --tls-verify=false default-route-openshift-image-registry.<cluster_dns> /openshift/tools", "Trying to pull default-route-openshift-image-registry.<cluster_dns>/openshift/tools Getting image source signatures Copying blob 6b245f040973 done Copying config 22667f5368 done Writing manifest to image destination Storing signatures 22667f53682a2920948d19c7133ab1c9c3f745805c14125859d20cede07f11f9" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/postinstallation_configuration/configuring-private-cluster
Chapter 4. Installing a cluster quickly on Azure
Chapter 4. Installing a cluster quickly on Azure In OpenShift Container Platform version 4.15, you can install a cluster on Microsoft Azure that uses the default configuration options. 4.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an Azure account to host the cluster and determined the tested and validated region to deploy the cluster to. If you use a firewall, you configured it to allow the sites that your cluster requires access to. 4.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.15, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 4.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. 
Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 4.4. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 
Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 4.5. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have an Azure subscription ID and tenant ID. You have the application ID and password of a service principal. Procedure Optional: If you have run the installation program on this computer before, and want to use an alternative service principal, go to the ~/.azure/ directory and delete the osServicePrincipal.json configuration file. Deleting this file prevents the installation program from automatically reusing subscription and authentication values from a previous installation. Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. 2 To view different installation details, specify warn , debug , or error instead of info . When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Provide values at the prompts: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select azure as the platform to target. If the installation program cannot locate the osServicePrincipal.json configuration file from a previous installation, you are prompted for Azure subscription and authentication values. Specify the following Azure parameter values for your subscription and service principal: azure subscription id : Enter the subscription ID to use for the cluster. azure tenant id : Enter the tenant ID. azure service principal client id : Enter its application ID. azure service principal client secret : Enter its password. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the Azure DNS Zone that you created for your cluster. Enter a descriptive name for your cluster. Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. 
Paste the pull secret from Red Hat OpenShift Cluster Manager . If previously not detected, the installation program creates an osServicePrincipal.json configuration file and stores this file in the ~/.azure/ directory on your computer. This ensures that the installation program can load the profile when it is creating an OpenShift Container Platform cluster on the target platform. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 4.6. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.15. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.15 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. 
Click Download Now next to the OpenShift v4.15 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.15 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.15 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 4.7. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 4.8. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.15, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 4.9. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting .
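As an optional final check before you customize the cluster, you can confirm from the CLI that the nodes are ready and that the cluster Operators have finished rolling out. These are standard oc queries, the exact output varies by cluster, and this is a suggested spot check rather than a documented requirement: USD oc get nodes USD oc get clusteroperators All nodes should report a STATUS of Ready , and each cluster Operator should eventually report AVAILABLE as True with PROGRESSING and DEGRADED as False .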
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_on_azure/installing-azure-default
Chapter 1. Recommendations for Large Deployments
Chapter 1. Recommendations for Large Deployments Use the following undercloud and overcloud recommendations, specifications, and configuration for deploying a large Red Hat OpenStack Platform (RHOSP) environment. RHOSP 16.2 deployments that include more than 100 overcloud nodes are considered large environments. Red Hat has tested and validated optimum performance on a large scale environment of 700 overcloud nodes using RHOSP 16.2 without using an undercloud minion. For DCN-based deployments, the combined number of nodes across the central and edge sites can be very high. For recommendations about DCN deployments, contact Red Hat Global Support Services.
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/recommendations_for_large_deployments/assembly-recommendations-for-large-deployments_recommendations-large-deployments
Chapter 6. Uninstalling power monitoring
Chapter 6. Uninstalling power monitoring Important Power monitoring is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can uninstall power monitoring by deleting the Kepler instance and then the Power monitoring Operator in the OpenShift Container Platform web console. 6.1. Deleting Kepler You can delete Kepler by removing the Kepler instance of the Kepler custom resource definition (CRD) from the OpenShift Container Platform web console. Prerequisites You have access to the OpenShift Container Platform web console. You are logged in as a user with the cluster-admin role. Procedure In the Administrator perspective of the web console, go to Operators Installed Operators . Click Power monitoring for Red Hat OpenShift from the Installed Operators list and go to the Kepler tab. Locate the Kepler instance entry in the list. Click the Options menu for this entry and select Delete Kepler . In the Delete Kepler? dialog, click Delete to delete the Kepler instance. 6.2. Uninstalling the Power monitoring Operator If you installed the Power monitoring Operator by using OperatorHub, you can uninstall it from the OpenShift Container Platform web console. Prerequisites You have access to the OpenShift Container Platform web console. You are logged in as a user with the cluster-admin role. Procedure Delete the Kepler instance. Warning Ensure that you have deleted the Kepler instance before uninstalling the Power monitoring Operator. Go to Operators Installed Operators . Locate the Power monitoring for Red Hat OpenShift entry in the list. Click the Options menu for this entry and select Uninstall Operator . In the Uninstall Operator? dialog, click Uninstall to uninstall the Power monitoring Operator.
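If you prefer to script the cleanup instead of clicking through the web console, the same two steps can be sketched with generic oc and Operator Lifecycle Manager commands. The object names and namespace below are placeholders, not documented values, and must be adjusted to match your cluster: list the Kepler instances with USD oc get kepler , delete the instance with USD oc delete kepler <kepler_instance_name> , and then remove the Operator by deleting its Subscription and ClusterServiceVersion in the namespace where it is installed, for example USD oc delete subscription <power_monitoring_subscription> -n <operator_namespace> followed by USD oc delete csv <power_monitoring_csv> -n <operator_namespace> . As with the web console procedure, delete the Kepler instance before you remove the Operator.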
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/power_monitoring/uninstalling-power-monitoring
Chapter 11. Using PTP hardware
Chapter 11. Using PTP hardware Important Precision Time Protocol (PTP) hardware with single NIC configured as boundary clock is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 11.1. About PTP hardware OpenShift Container Platform allows you to use PTP hardware on your nodes. You can configure linuxptp services on nodes that have PTP-capable hardware. Note The PTP Operator works with PTP-capable devices on clusters provisioned only on bare-metal infrastructure. You can use the OpenShift Container Platform console or oc CLI to install PTP by deploying the PTP Operator. The PTP Operator creates and manages the linuxptp services and provides the following features: Discovery of the PTP-capable devices in the cluster. Management of the configuration of linuxptp services. Notification of PTP clock events that negatively affect the performance and reliability of your application with the PTP Operator cloud-event-proxy sidecar. 11.2. About PTP The Precision Time Protocol (PTP) is used to synchronize clocks in a network. When used in conjunction with hardware support, PTP is capable of sub-microsecond accuracy, and is more accurate than Network Time Protocol (NTP). The linuxptp package includes the ptp4l and phc2sys programs for clock synchronization. ptp4l implements the PTP boundary clock and ordinary clock. ptp4l synchronizes the PTP hardware clock to the source clock with hardware time stamping and synchronizes the system clock to the source clock with software time stamping. phc2sys is used for hardware time stamping to synchronize the system clock to the PTP hardware clock on the network interface controller (NIC). 11.2.1. Elements of a PTP domain PTP is used to synchronize multiple nodes connected in a network, with clocks for each node. The following types of clocks can be included in configurations: Grandmaster clock The grandmaster clock provides standard time information to other clocks across the network and ensures accurate and stable synchronization. The grandmaster clock writes time stamps and responds to time requests from other clocks. Ordinary clock The ordinary clock has a single port connection that can play the role of source or destination clock, depending on its position in the network. The ordinary clock can read and write time stamps. Boundary clock The boundary clock has ports in two or more communication paths and can be a source and a destination to other destination clocks at the same time. The boundary clock works as a destination clock upstream. The destination clock receives the timing message, adjusts for delay, and then creates a new source time signal to pass down the network. The boundary clock produces a new timing packet that is still correctly synced with the source clock and can reduce the number of connected devices reporting directly to the source clock. 11.2.2. Advantages of PTP over NTP One of the main advantages that PTP has over NTP is the hardware support present in various network interface controllers (NIC) and network switches. 
The specialized hardware allows PTP to account for delays in message transfer and improves the accuracy of time synchronization. To achieve the best possible accuracy, it is recommended that all networking components between PTP clocks are PTP hardware enabled. Hardware-based PTP provides optimal accuracy, since the NIC can time stamp the PTP packets at the exact moment they are sent and received. Compare this to software-based PTP, which requires additional processing of the PTP packets by the operating system. Important Before enabling PTP, ensure that NTP is disabled for the required nodes. You can disable the chrony time service ( chronyd ) using a MachineConfig custom resource. For more information, see Disabling chrony time service . 11.3. Installing the PTP Operator using the CLI As a cluster administrator, you can install the Operator by using the CLI. Prerequisites A cluster installed on bare-metal hardware with nodes that have hardware that supports PTP. Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure To create a namespace for the PTP Operator, enter the following command: USD cat << EOF| oc create -f - apiVersion: v1 kind: Namespace metadata: name: openshift-ptp annotations: workload.openshift.io/allowed: management labels: name: openshift-ptp openshift.io/cluster-monitoring: "true" EOF To create an Operator group for the Operator, enter the following command: USD cat << EOF| oc create -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: ptp-operators namespace: openshift-ptp spec: targetNamespaces: - openshift-ptp EOF Subscribe to the PTP Operator. Run the following command to set the OpenShift Container Platform major and minor version as an environment variable, which is used as the channel value in the next step. USD OC_VERSION=USD(oc version -o yaml | grep openshiftVersion | \ grep -o '[0-9]*[.][0-9]*' | head -1) To create a subscription for the PTP Operator, enter the following command: USD cat << EOF| oc create -f - apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: ptp-operator-subscription namespace: openshift-ptp spec: channel: "USD{OC_VERSION}" name: ptp-operator source: redhat-operators sourceNamespace: openshift-marketplace EOF To verify that the Operator is installed, enter the following command: USD oc get csv -n openshift-ptp \ -o custom-columns=Name:.metadata.name,Phase:.status.phase Example output Name Phase ptp-operator.4.4.0-202006160135 Succeeded 11.4. Installing the PTP Operator using the web console As a cluster administrator, you can install the PTP Operator using the web console. Note You have to create the namespace and Operator group as mentioned in the previous section. Procedure Install the PTP Operator using the OpenShift Container Platform web console: In the OpenShift Container Platform web console, click Operators OperatorHub . Choose PTP Operator from the list of available Operators, and then click Install . On the Install Operator page, under A specific namespace on the cluster select openshift-ptp . Then, click Install . Optional: Verify that the PTP Operator installed successfully: Switch to the Operators Installed Operators page. Ensure that PTP Operator is listed in the openshift-ptp project with a Status of InstallSucceeded . Note During installation an Operator might display a Failed status. If the installation later succeeds with an InstallSucceeded message, you can ignore the Failed message. 
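You can also confirm the result from the CLI by re-running the check shown in the CLI installation section: USD oc get csv -n openshift-ptp -o custom-columns=Name:.metadata.name,Phase:.status.phase The PTP Operator CSV should report a Phase of Succeeded .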
If the Operator does not appear as installed, to troubleshoot further: Go to the Operators Installed Operators page and inspect the Operator Subscriptions and Install Plans tabs for any failure or errors under Status . Go to the Workloads Pods page and check the logs for pods in the openshift-ptp project. 11.5. Automated discovery of PTP network devices The PTP Operator adds the NodePtpDevice.ptp.openshift.io custom resource definition (CRD) to OpenShift Container Platform. The PTP Operator searches your cluster for PTP-capable network devices on each node. It creates and updates a NodePtpDevice custom resource (CR) object for each node that provides a compatible PTP device. One CR is created for each node and shares the same name as the node. The .status.devices list provides information about the PTP devices on a node. The following is an example of a NodePtpDevice CR created by the PTP Operator: apiVersion: ptp.openshift.io/v1 kind: NodePtpDevice metadata: creationTimestamp: "2019-11-15T08:57:11Z" generation: 1 name: dev-worker-0 1 namespace: openshift-ptp 2 resourceVersion: "487462" selfLink: /apis/ptp.openshift.io/v1/namespaces/openshift-ptp/nodeptpdevices/dev-worker-0 uid: 08d133f7-aae2-403f-84ad-1fe624e5ab3f spec: {} status: devices: 3 - name: eno1 - name: eno2 - name: ens787f0 - name: ens787f1 - name: ens801f0 - name: ens801f1 - name: ens802f0 - name: ens802f1 - name: ens803 1 The value for the name parameter is the same as the name of the node. 2 The CR is created in openshift-ptp namespace by PTP Operator. 3 The devices collection includes a list of the PTP capable devices discovered by the Operator on the node. To return a complete list of PTP capable network devices in your cluster, run the following command: USD oc get NodePtpDevice -n openshift-ptp -o yaml 11.6. Configuring linuxptp services as ordinary clock The PTP Operator adds the PtpConfig.ptp.openshift.io custom resource definition (CRD) to OpenShift Container Platform. You can configure the linuxptp services ( ptp4l , phc2sys ) by creating a PtpConfig custom resource (CR) object. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Install the PTP Operator. Procedure Create the following PtpConfig CR, and then save the YAML in the ordinary-clock-ptp-config.yaml file. apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: ordinary-clock-ptp-config 1 namespace: openshift-ptp spec: profile: 2 - name: "profile1" 3 interface: "ens787f1" 4 ptp4lOpts: "-s -2" 5 phc2sysOpts: "-a -r" 6 ptp4lConf: "" 7 ptpSchedulingPolicy: SCHED_OTHER 8 ptpSchedulingPriority: 10 9 recommend: 10 - profile: "profile1" 11 priority: 10 12 match: 13 - nodeLabel: "node-role.kubernetes.io/worker" 14 nodeName: "compute-0.example.com" 15 1 The name of the PtpConfig CR. 2 Specify an array of one or more profile objects. 3 Specify the name of a profile object that uniquely identifies a profile object. 4 Specify the network interface name to use by the ptp4l service, for example ens787f1 . 5 Specify system config options for the ptp4l service, for example -2 to select the IEEE 802.3 network transport. The options should not include the network interface name -i <interface> and service config file -f /etc/ptp4l.conf because the network interface name and the service config file are automatically appended. 6 Specify system config options for the phc2sys service, for example -a -r . If this field is empty the PTP Operator does not start the phc2sys service. 
7 Specify a string that contains the configuration to replace the default /etc/ptp4l.conf file. To use the default configuration, leave the field empty. 8 Scheduling policy for ptp4l and phc2sys processes. Default value is SCHED_OTHER . Use SCHED_FIFO on systems that support FIFO scheduling. 9 Integer value from 1-65 used to set FIFO priority for ptp4l and phc2sys processes when ptpSchedulingPolicy is set to SCHED_FIFO . The ptpSchedulingPriority field is not used when ptpSchedulingPolicy is set to SCHED_OTHER . 10 Specify an array of one or more recommend objects that define rules on how the profile should be applied to nodes. 11 Specify the profile object name defined in the profile section. 12 Specify the priority with an integer value between 0 and 99 . A larger number gets lower priority, so a priority of 99 is lower than a priority of 10 . If a node can be matched with multiple profiles according to rules defined in the match field, the profile with the higher priority is applied to that node. 13 Specify match rules with nodeLabel or nodeName . 14 Specify nodeLabel with the key of node.Labels from the node object by using the oc get nodes --show-labels command. 15 Specify nodeName with node.Name from the node object by using the oc get nodes command. Create the CR by running the following command: USD oc create -f ordinary-clock-ptp-config.yaml Verification steps Check that the PtpConfig profile is applied to the node. Get the list of pods in the openshift-ptp namespace by running the following command: USD oc get pods -n openshift-ptp -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-4xkbb 1/1 Running 0 43m 10.1.196.24 compute-0.example.com linuxptp-daemon-tdspf 1/1 Running 0 43m 10.1.196.25 compute-1.example.com ptp-operator-657bbb64c8-2f8sj 1/1 Running 0 43m 10.129.0.61 control-plane-1.example.com Check that the profile is correct. Examine the logs of the linuxptp daemon that corresponds to the node you specified in the PtpConfig profile. Run the following command: USD oc logs linuxptp-daemon-4xkbb -n openshift-ptp -c linuxptp-daemon-container Example output I1115 09:41:17.117596 4143292 daemon.go:107] in applyNodePTPProfile I1115 09:41:17.117604 4143292 daemon.go:109] updating NodePTPProfile to: I1115 09:41:17.117607 4143292 daemon.go:110] ------------------------------------ I1115 09:41:17.117612 4143292 daemon.go:102] Profile Name: profile1 I1115 09:41:17.117616 4143292 daemon.go:102] Interface: ens787f1 I1115 09:41:17.117620 4143292 daemon.go:102] Ptp4lOpts: -s -2 I1115 09:41:17.117623 4143292 daemon.go:102] Phc2sysOpts: -a -r I1115 09:41:17.117626 4143292 daemon.go:116] ------------------------------------ Additional resources For more information about FIFO priority scheduling on PTP hardware, see Configuring FIFO priority scheduling for PTP hardware . 11.7. Configuring linuxptp services as boundary clock The PTP Operator adds the PtpConfig.ptp.openshift.io custom resource definition (CRD) to OpenShift Container Platform. You can configure the linuxptp services ( ptp4l , phc2sys ) by creating a PtpConfig custom resource (CR) object. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Install the PTP Operator. Procedure Create the following PtpConfig CR, and then save the YAML in the boundary-clock-ptp-config.yaml file. 
apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: boundary-clock-ptp-config 1 namespace: openshift-ptp spec: profile: 2 - name: "profile1" 3 interface: "" 4 ptp4lOpts: "-2" 5 ptp4lConf: | 6 [ens1f0] 7 masterOnly 0 [ens1f3] 8 masterOnly 1 [global] # # Default Data Set # twoStepFlag 1 #slaveOnly 1 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 248 clockAccuracy 0xFE offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval -4 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval 4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 10 #was 1 (default !) unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval -4 kernel_leap 1 check_fup_sync 0 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 max_frequency 900000000 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type BC network_transport UDPv4 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 9 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0xA0 phc2sysOpts: "-a -r" 10 ptpSchedulingPolicy: SCHED_OTHER 11 ptpSchedulingPriority: 10 12 recommend: 13 - profile: "profile1" 14 priority: 10 15 match: 16 - nodeLabel: "node-role.kubernetes.io/worker" 17 nodeName: "compute-0.example.com" 18 1 The name of the PtpConfig CR. 2 Specify an array of one or more profile objects. 3 Specify the name of a profile object which uniquely identifies a profile object. 4 This field should remain empty for boundary clock. 5 Specify system config options for the ptp4l service, for example -2 . The options should not include the network interface name -i <interface> and service config file -f /etc/ptp4l.conf because the network interface name and the service config file are automatically appended. 6 Specify the needed configuration to start ptp4l as boundary clock. For example, ens1f0 synchronizes from a grandmaster clock and ens1f3 synchronizes connected devices. 7 The interface name to synchronize from. 8 The interface to synchronize devices connected to the interface. 9 For Intel Columbiaville 800 Series NICs, ensure boundary_clock_jbod is set to 0 . For Intel Fortville X710 Series NICs, ensure boundary_clock_jbod is set to 1 . 10 Specify system config options for the phc2sys service, for example -a -r . If this field is empty the PTP Operator does not start the phc2sys service. 11 Scheduling policy for ptp4l and phc2sys processes. Default value is SCHED_OTHER . Use SCHED_FIFO on systems that support FIFO scheduling. 
12 Integer value from 1-65 used to set FIFO priority for ptp4l and phc2sys processes when ptpSchedulingPolicy is set to SCHED_FIFO . The ptpSchedulingPriority field is not used when ptpSchedulingPolicy is set to SCHED_OTHER . 13 Specify an array of one or more recommend objects that define rules on how the profile should be applied to nodes. 14 Specify the profile object name defined in the profile section. 15 Specify the priority with an integer value between 0 and 99 . A larger number gets lower priority, so a priority of 99 is lower than a priority of 10 . If a node can be matched with multiple profiles according to rules defined in the match field, the profile with the higher priority is applied to that node. 16 Specify match rules with nodeLabel or nodeName . 17 Specify nodeLabel with the key of node.Labels from the node object by using the oc get nodes --show-labels command. 18 Specify nodeName with node.Name from the node object by using the oc get nodes command. Create the CR by running the following command: USD oc create -f boundary-clock-ptp-config.yaml Verification steps Check that the PtpConfig profile is applied to the node. Get the list of pods in the openshift-ptp namespace by running the following command: USD oc get pods -n openshift-ptp -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-4xkbb 1/1 Running 0 43m 10.1.196.24 compute-0.example.com linuxptp-daemon-tdspf 1/1 Running 0 43m 10.1.196.25 compute-1.example.com ptp-operator-657bbb64c8-2f8sj 1/1 Running 0 43m 10.129.0.61 control-plane-1.example.com Check that the profile is correct. Examine the logs of the linuxptp daemon that corresponds to the node you specified in the PtpConfig profile. Run the following command: USD oc logs linuxptp-daemon-4xkbb -n openshift-ptp -c linuxptp-daemon-container Example output I1115 09:41:17.117596 4143292 daemon.go:107] in applyNodePTPProfile I1115 09:41:17.117604 4143292 daemon.go:109] updating NodePTPProfile to: I1115 09:41:17.117607 4143292 daemon.go:110] ------------------------------------ I1115 09:41:17.117612 4143292 daemon.go:102] Profile Name: profile1 I1115 09:41:17.117616 4143292 daemon.go:102] Interface: I1115 09:41:17.117620 4143292 daemon.go:102] Ptp4lOpts: -2 I1115 09:41:17.117623 4143292 daemon.go:102] Phc2sysOpts: -a -r I1115 09:41:17.117626 4143292 daemon.go:116] ------------------------------------ Additional resources For more information about FIFO priority scheduling on PTP hardware, see Configuring FIFO priority scheduling for PTP hardware . 11.8. Configuring FIFO priority scheduling for PTP hardware In telco or other deployment configurations that require low latency performance, PTP daemon threads run in a constrained CPU footprint alongside the rest of the infrastructure components. By default, PTP threads run with the SCHED_OTHER policy. Under high load, these threads might not get the scheduling latency they require for error-free operation. To mitigate against potential scheduling latency errors, you can configure the PTP Operator linuxptp services to allow threads to run with a SCHED_FIFO policy. If SCHED_FIFO is set for a PtpConfig CR, then ptp4l and phc2sys will run in the parent container under chrt with a priority set by the ptpSchedulingPriority field of the PtpConfig CR. Note Setting ptpSchedulingPolicy is optional, and is only required if you are experiencing latency errors. 
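Before you change the scheduling policy, you can optionally confirm how the ptp4l process is currently scheduled directly on the node. This is a sketch that assumes a debug shell on the node and that the pidof utility is available in the host file system; substitute your own node name: USD oc debug node/<node_name> sh-4.4# chroot /host sh-4.4# chrt -p USD(pidof ptp4l) With the default policy the command reports SCHED_OTHER and a priority of 0; after you apply SCHED_FIFO , the same command reports SCHED_FIFO and the configured priority.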
Procedure Edit the PtpConfig CR profile: USD oc edit PtpConfig -n openshift-ptp Change the ptpSchedulingPolicy and ptpSchedulingPriority fields: apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: <ptp_config_name> namespace: openshift-ptp ... spec: profile: - name: "profile1" ... ptpSchedulingPolicy: SCHED_FIFO 1 ptpSchedulingPriority: 10 2 1 Scheduling policy for ptp4l and phc2sys processes. Use SCHED_FIFO on systems that support FIFO scheduling. 2 Required. Sets the integer value 1-65 used to configure FIFO priority for ptp4l and phc2sys processes. Save and exit to apply the changes to the PtpConfig CR. Verification Get the name of the linuxptp-daemon pod and corresponding node where the PtpConfig CR has been applied: USD oc get pods -n openshift-ptp -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-gmv2n 3/3 Running 0 1d17h 10.1.196.24 compute-0.example.com linuxptp-daemon-lgm55 3/3 Running 0 1d17h 10.1.196.25 compute-1.example.com ptp-operator-3r4dcvf7f4-zndk7 1/1 Running 0 1d7h 10.129.0.61 control-plane-1.example.com Check that the ptp4l process is running with the updated chrt FIFO priority: USD oc -n openshift-ptp logs linuxptp-daemon-lgm55 -c linuxptp-daemon-container|grep chrt Example output I1216 19:24:57.091872 1600715 daemon.go:285] /bin/chrt -f 65 /usr/sbin/ptp4l -f /var/run/ptp4l.0.config -2 --summary_interval -4 -m 11.9. Troubleshooting common PTP Operator issues Troubleshoot common problems with the PTP Operator by performing the following steps. Prerequisites Install the OpenShift Container Platform CLI ( oc ). Log in as a user with cluster-admin privileges. Install the PTP Operator on a bare-metal cluster with hosts that support PTP. Procedure Check the Operator and operands are successfully deployed in the cluster for the configured nodes. USD oc get pods -n openshift-ptp -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-lmvgn 3/3 Running 0 4d17h 10.1.196.24 compute-0.example.com linuxptp-daemon-qhfg7 3/3 Running 0 4d17h 10.1.196.25 compute-1.example.com ptp-operator-6b8dcbf7f4-zndk7 1/1 Running 0 5d7h 10.129.0.61 control-plane-1.example.com Note When the PTP fast event bus is enabled, the number of ready linuxptp-daemon pods is 3/3 . If the PTP fast event bus is not enabled, 2/2 is displayed. Check that supported hardware is found in the cluster. USD oc -n openshift-ptp get nodeptpdevices.ptp.openshift.io Example output NAME AGE control-plane-0.example.com 10d control-plane-1.example.com 10d compute-0.example.com 10d compute-1.example.com 10d compute-2.example.com 10d Check the available PTP network interfaces for a node: USD oc -n openshift-ptp get nodeptpdevices.ptp.openshift.io <node_name> -o yaml where: <node_name> Specifies the node you want to query, for example, compute-0.example.com . Example output apiVersion: ptp.openshift.io/v1 kind: NodePtpDevice metadata: creationTimestamp: "2021-09-14T16:52:33Z" generation: 1 name: compute-0.example.com namespace: openshift-ptp resourceVersion: "177400" uid: 30413db0-4d8d-46da-9bef-737bacd548fd spec: {} status: devices: - name: eno1 - name: eno2 - name: eno3 - name: eno4 - name: enp5s0f0 - name: enp5s0f1 Check that the PTP interface is successfully synchronized to the primary clock by accessing the linuxptp-daemon pod for the corresponding node. 
Get the name of the linuxptp-daemon pod and corresponding node you want to troubleshoot by running the following command: USD oc get pods -n openshift-ptp -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-lmvgn 3/3 Running 0 4d17h 10.1.196.24 compute-0.example.com linuxptp-daemon-qhfg7 3/3 Running 0 4d17h 10.1.196.25 compute-1.example.com ptp-operator-6b8dcbf7f4-zndk7 1/1 Running 0 5d7h 10.129.0.61 control-plane-1.example.com Remote shell into the required linuxptp-daemon container: USD oc rsh -n openshift-ptp -c linuxptp-daemon-container <linux_daemon_container> where: <linux_daemon_container> is the container you want to diagnose, for example linuxptp-daemon-lmvgn . In the remote shell connection to the linuxptp-daemon container, use the PTP Management Client ( pmc ) tool to diagnose the network interface. Run the following pmc command to check the sync status of the PTP device, for example ptp4l . # pmc -u -f /var/run/ptp4l.0.config -b 0 'GET PORT_DATA_SET' Example output when the node is successfully synced to the primary clock sending: GET PORT_DATA_SET 40a6b7.fffe.166ef0-1 seq 0 RESPONSE MANAGEMENT PORT_DATA_SET portIdentity 40a6b7.fffe.166ef0-1 portState SLAVE logMinDelayReqInterval -4 peerMeanPathDelay 0 logAnnounceInterval -3 announceReceiptTimeout 3 logSyncInterval -4 delayMechanism 1 logMinPdelayReqInterval -4 versionNumber 2 11.10. PTP hardware fast event notifications framework Important PTP events with ordinary clock is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 11.10.1. About PTP and clock synchronization error events Cloud native applications such as virtual RAN require access to notifications about hardware timing events that are critical to the functioning of the overall network. Fast event notifications are early warning signals about impending and real-time Precision Time Protocol (PTP) clock synchronization events. PTP clock synchronization errors can negatively affect the performance and reliability of your low latency application, for example, a vRAN application running in a distributed unit (DU). Loss of PTP synchronization is a critical error for a RAN network. If synchronization is lost on a node, the radio might be shut down and the network Over the Air (OTA) traffic might be shifted to another node in the wireless network. Fast event notifications mitigate against workload errors by allowing cluster nodes to communicate PTP clock sync status to the vRAN application running in the DU. Event notifications are available to RAN applications running on the same DU node. A publish/subscribe REST API passes events notifications to the messaging bus. Publish/subscribe messaging, or pub/sub messaging, is an asynchronous service to service communication architecture where any message published to a topic is immediately received by all the subscribers to the topic. Fast event notifications are generated by the PTP Operator in OpenShift Container Platform for every PTP-capable network interface. 
The events are made available using a cloud-event-proxy sidecar container over an Advanced Message Queuing Protocol (AMQP) message bus. The AMQP message bus is provided by the AMQ Interconnect Operator. Note PTP fast event notifications are available only for network interfaces configured to use PTP ordinary clocks. 11.10.2. About the PTP fast event notifications framework You can subscribe Distributed unit (DU) applications to Precision Time Protocol (PTP) fast events notifications that are generated by OpenShift Container Platform with the PTP Operator and cloud-event-proxy sidecar container. You enable the cloud-event-proxy sidecar container by setting the enableEventPublisher field to true in the ptpOperatorConfig custom resource (CR) and specifying a transportHost address. PTP fast events use an Advanced Message Queuing Protocol (AMQP) event notification bus provided by the AMQ Interconnect Operator. AMQ Interconnect is a component of Red Hat AMQ, a messaging router that provides flexible routing of messages between any AMQP-enabled endpoints. The cloud-event-proxy sidecar container can access the same resources as the primary vRAN application without using any of the resources of the primary application and with no significant latency. The fast events notifications framework uses a REST API for communication and is based on the O-RAN REST API specification. The framework consists of a publisher, subscriber, and an AMQ messaging bus to handle communications between the publisher and subscriber applications. The cloud-event-proxy sidecar is a utility container that runs in a pod that is loosely coupled to the main DU application container on the DU node. It provides an event publishing framework that allows you to subscribe DU applications to published PTP events. DU applications run the cloud-event-proxy container in a sidecar pattern to subscribe to PTP events. The following workflow describes how a DU application uses PTP fast events: DU application requests a subscription : The DU sends an API request to the cloud-event-proxy sidecar to create a PTP events subscription. The cloud-event-proxy sidecar creates a subscription resource. cloud-event-proxy sidecar creates the subscription : The event resource is persisted by the cloud-event-proxy sidecar. The cloud-event-proxy sidecar container sends an acknowledgment with an ID and URL location to access the stored subscription resource. The sidecar creates an AMQ messaging listener protocol for the resource specified in the subscription. DU application receives the PTP event notification : The cloud-event-proxy sidecar container listens to the address specified in the resource qualifier. The DU events consumer processes the message and passes it to the return URL specified in the subscription. cloud-event-proxy sidecar validates the PTP event and posts it to the DU application : The cloud-event-proxy sidecar receives the event, unwraps the cloud events object to retrieve the data, and fetches the return URL to post the event back to the DU consumer application. DU application uses the PTP event : The DU application events consumer receives and processes the PTP event. 11.10.3. Installing the AMQ messaging bus To pass PTP fast event notifications between publisher and subscriber on a node, you must install and configure an AMQ messaging bus to run locally on the node. You do this by installing the AMQ Interconnect Operator for use in the cluster. Prerequisites Install the OpenShift Container Platform CLI ( oc ). 
Log in as a user with cluster-admin privileges. Procedure Install the AMQ Interconnect Operator to its own amq-interconnect namespace. See Adding the Red Hat Integration - AMQ Interconnect Operator . Verification Check that the AMQ Interconnect Operator is available and the required pods are running: USD oc get pods -n amq-interconnect Example output NAME READY STATUS RESTARTS AGE amq-interconnect-645db76c76-k8ghs 1/1 Running 0 23h interconnect-operator-5cb5fc7cc-4v7qm 1/1 Running 0 23h Check that the required linuxptp-daemon PTP event producer pods are running in the openshift-ptp namespace. USD oc get pods -n openshift-ptp Example output NAME READY STATUS RESTARTS AGE linuxptp-daemon-2t78p 3/3 Running 0 12h linuxptp-daemon-k8n88 3/3 Running 0 12h 11.10.4. Configuring the PTP fast event notifications publisher To start using PTP fast event notifications for a network interface in your cluster, you must enable the fast event publisher in the PTP Operator PtpOperatorConfig custom resource (CR) and configure ptpClockThreshold values in a PtpConfig CR that you create. Prerequisites Install the OpenShift Container Platform CLI ( oc ). Log in as a user with cluster-admin privileges. Install the PTP Operator and AMQ Interconnect Operator. Procedure Modify the spec.ptpEventConfig field of the PtpOperatorConfig resource and set appropriate values by running the following command: USD oc edit PtpOperatorConfig default -n openshift-ptp ... spec: daemonNodeSelector: node-role.kubernetes.io/worker: "" ptpEventConfig: enableEventPublisher: true 1 transportHost: amqp://<instance_name>.<namespace>.svc.cluster.local 2 1 Set enableEventPublisher to true to enable PTP fast event notifications. 2 Set transportHost to the AMQ router you configured where <instance_name> and <namespace> correspond to the AMQ Interconnect router instance name and namespace, for example, amqp://amq-interconnect.amq-interconnect.svc.cluster.local Create a PtpConfig custom resource for the PTP enabled interface, and set the required values for ptpClockThreshold , for example: apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: example-ptpconfig namespace: openshift-ptp spec: profile: - name: "profile1" interface: "enp5s0f0" ptp4lOpts: "-2 -s --summary_interval -4" 1 phc2sysOpts: "-a -r -m -n 24 -N 8 -R 16" 2 ptp4lConf: "" 3 ptpClockThreshold: 4 holdOverTimeout: 5 maxOffsetThreshold: 100 minOffsetThreshold: -100 1 Append --summary_interval -4 to use PTP fast events. 2 Required phc2sysOpts values. -m prints messages to stdout . The linuxptp-daemon DaemonSet parses the logs and generates Prometheus metrics. 3 Specify a string that contains the configuration to replace the default /etc/ptp4l.conf file. To use the default configuration, leave the field empty. 4 Optional. If the ptpClockThreshold stanza is not present, default values are used for the ptpClockThreshold fields. The stanza shows default ptpClockThreshold values. The ptpClockThreshold values configure how long after the PTP master clock is disconnected before PTP events are triggered. holdOverTimeout is the time value in seconds before the PTP clock event state changes to FREERUN when the PTP master clock is disconnected. The maxOffsetThreshold and minOffsetThreshold settings configure offset values in nanoseconds that compare against the values for CLOCK_REALTIME ( phc2sys ) or master offset ( ptp4l ). When the ptp4l or phc2sys offset value is outside this range, the PTP clock state is set to FREERUN . 
When the offset value is within this range, the PTP clock state is set to LOCKED . 11.10.5. Subscribing DU applications to PTP events REST API reference Use the PTP event notifications REST API to subscribe a distributed unit (DU) application to the PTP events that are generated on the parent node. Subscribe applications to PTP events by using the resource address /cluster/node/<node_name>/ptp , where <node_name> is the cluster node running the DU application. Deploy your cloud-event-consumer DU application container and cloud-event-proxy sidecar container in a separate DU application pod. The cloud-event-consumer DU application subscribes to the cloud-event-proxy container in the application pod. Use the following API endpoints to subscribe the cloud-event-consumer DU application to PTP events posted by the cloud-event-proxy container at http://localhost:8089/api/cloudNotifications/v1/ in the DU application pod: /api/cloudNotifications/v1/subscriptions POST : Creates a new subscription GET : Retrieves a list of subscriptions /api/cloudNotifications/v1/subscriptions/<subscription_id> GET : Returns details for the specified subscription ID api/cloudNotifications/v1/subscriptions/status/<subscription_id> PUT : Creates a new status ping request for the specified subscription ID /api/cloudNotifications/v1/health GET : Returns the health status of cloudNotifications API Note 9089 is the default port for the cloud-event-consumer container deployed in the application pod. You can configure a different port for your DU application as required. 11.10.5.1. api/cloudNotifications/v1/subscriptions 11.10.5.1.1. HTTP method GET api/cloudNotifications/v1/subscriptions 11.10.5.1.1.1. Description Returns a list of subscriptions. If subscriptions exist, a 200 OK status code is returned along with the list of subscriptions. Example API response [ { "id": "75b1ad8f-c807-4c23-acf5-56f4b7ee3826", "endpointUri": "http://localhost:9089/event", "uriLocation": "http://localhost:8089/api/cloudNotifications/v1/subscriptions/75b1ad8f-c807-4c23-acf5-56f4b7ee3826", "resource": "/cluster/node/compute-1.example.com/ptp" } ] 11.10.5.1.2. HTTP method POST api/cloudNotifications/v1/subscriptions 11.10.5.1.2.1. Description Creates a new subscription. If a subscription is successfully created, or if it already exists, a 201 Created status code is returned. Table 11.1. Query parameters Parameter Type subscription data Example payload { "uriLocation": "http://localhost:8089/api/cloudNotifications/v1/subscriptions", "resource": "/cluster/node/compute-1.example.com/ptp" } 11.10.5.2. api/cloudNotifications/v1/subscriptions/<subscription_id> 11.10.5.2.1. HTTP method GET api/cloudNotifications/v1/subscriptions/<subscription_id> 11.10.5.2.1.1. Description Returns details for the subscription with ID <subscription_id> Table 11.2. Query parameters Parameter Type <subscription_id> string Example API response { "id":"48210fb3-45be-4ce0-aa9b-41a0e58730ab", "endpointUri": "http://localhost:9089/event", "uriLocation":"http://localhost:8089/api/cloudNotifications/v1/subscriptions/48210fb3-45be-4ce0-aa9b-41a0e58730ab", "resource":"/cluster/node/compute-1.example.com/ptp" } 11.10.5.3. api/cloudNotifications/v1/subscriptions/status/<subscription_id> 11.10.5.3.1. HTTP method PUT api/cloudNotifications/v1/subscriptions/status/<subscription_id> 11.10.5.3.1.1. Description Creates a new status ping request for subscription with ID <subscription_id> . 
If a subscription is present, the status request is successful and a 202 Accepted status code is returned. Table 11.3. Query parameters Parameter Type <subscription_id> string Example API response {"status":"ping sent"} 11.10.5.4. api/cloudNotifications/v1/health/ 11.10.5.4.1. HTTP method GET api/cloudNotifications/v1/health/ 11.10.5.4.1.1. Description Returns the health status for the cloudNotifications REST API. Example API response OK 11.10.6. Monitoring PTP fast event metrics using the CLI You can monitor fast events bus metrics directly from cloud-event-proxy containers using the oc CLI. Note PTP fast event notification metrics are also available in the OpenShift Container Platform web console. Prerequisites Install the OpenShift Container Platform CLI ( oc ). Log in as a user with cluster-admin privileges. Install and configure the PTP Operator. Procedure Get the list of active linuxptp-daemon pods. USD oc get pods -n openshift-ptp Example output NAME READY STATUS RESTARTS AGE linuxptp-daemon-2t78p 3/3 Running 0 8h linuxptp-daemon-k8n88 3/3 Running 0 8h Access the metrics for the required cloud-event-proxy container by running the following command: USD oc exec -it <linuxptp-daemon> -n openshift-ptp -c cloud-event-proxy -- curl 127.0.0.1:9091/metrics where: <linuxptp-daemon> Specifies the pod you want to query, for example, linuxptp-daemon-2t78p . Example output # HELP cne_amqp_events_published Metric to get number of events published by the transport # TYPE cne_amqp_events_published gauge cne_amqp_events_published{address="/cluster/node/compute-1.example.com/ptp/status",status="success"} 1041 # HELP cne_amqp_events_received Metric to get number of events received by the transport # TYPE cne_amqp_events_received gauge cne_amqp_events_received{address="/cluster/node/compute-1.example.com/ptp",status="success"} 1019 # HELP cne_amqp_receiver Metric to get number of receiver created # TYPE cne_amqp_receiver gauge cne_amqp_receiver{address="/cluster/node/mock",status="active"} 1 cne_amqp_receiver{address="/cluster/node/compute-1.example.com/ptp",status="active"} 1 cne_amqp_receiver{address="/cluster/node/compute-1.example.com/redfish/event",status="active"} ... 11.10.7. Monitoring PTP fast event metrics in the web console You can monitor PTP fast event metrics in the OpenShift Container Platform web console by using the pre-configured and self-updating Prometheus monitoring stack. Prerequisites Install the OpenShift Container Platform CLI oc . Log in as a user with cluster-admin privileges. Procedure Enter the following command to return the list of available PTP metrics from the cloud-event-proxy sidecar container: USD oc exec -it <linuxptp_daemon_pod> -n openshift-ptp -c cloud-event-proxy -- curl 127.0.0.1:9091/metrics where: <linuxptp_daemon_pod> Specifies the pod you want to query, for example, linuxptp-daemon-2t78p . Copy the name of the PTP metric you want to query from the list of returned metrics, for example, cne_amqp_events_received . In the OpenShift Container Platform web console, click Observe Metrics . Paste the PTP metric into the Expression field, and click Run queries . Additional resources Managing metrics
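If you only need a single metric on the command line rather than the full list or the web console view, you can filter the same metrics endpoint with grep. This is a convenience sketch that combines the commands shown above without the interactive flags; the pod name and metric name are examples: USD oc exec <linuxptp_daemon_pod> -n openshift-ptp -c cloud-event-proxy -- curl -s 127.0.0.1:9091/metrics | grep cne_amqp_events_received Only the lines for the matching metric are printed, which is useful when you poll a specific value from a script.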
[ "cat << EOF| oc create -f - apiVersion: v1 kind: Namespace metadata: name: openshift-ptp annotations: workload.openshift.io/allowed: management labels: name: openshift-ptp openshift.io/cluster-monitoring: \"true\" EOF", "cat << EOF| oc create -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: ptp-operators namespace: openshift-ptp spec: targetNamespaces: - openshift-ptp EOF", "OC_VERSION=USD(oc version -o yaml | grep openshiftVersion | grep -o '[0-9]*[.][0-9]*' | head -1)", "cat << EOF| oc create -f - apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: ptp-operator-subscription namespace: openshift-ptp spec: channel: \"USD{OC_VERSION}\" name: ptp-operator source: redhat-operators sourceNamespace: openshift-marketplace EOF", "oc get csv -n openshift-ptp -o custom-columns=Name:.metadata.name,Phase:.status.phase", "Name Phase ptp-operator.4.4.0-202006160135 Succeeded", "apiVersion: ptp.openshift.io/v1 kind: NodePtpDevice metadata: creationTimestamp: \"2019-11-15T08:57:11Z\" generation: 1 name: dev-worker-0 1 namespace: openshift-ptp 2 resourceVersion: \"487462\" selfLink: /apis/ptp.openshift.io/v1/namespaces/openshift-ptp/nodeptpdevices/dev-worker-0 uid: 08d133f7-aae2-403f-84ad-1fe624e5ab3f spec: {} status: devices: 3 - name: eno1 - name: eno2 - name: ens787f0 - name: ens787f1 - name: ens801f0 - name: ens801f1 - name: ens802f0 - name: ens802f1 - name: ens803", "oc get NodePtpDevice -n openshift-ptp -o yaml", "apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: ordinary-clock-ptp-config 1 namespace: openshift-ptp spec: profile: 2 - name: \"profile1\" 3 interface: \"ens787f1\" 4 ptp4lOpts: \"-s -2\" 5 phc2sysOpts: \"-a -r\" 6 ptp4lConf: \"\" 7 ptpSchedulingPolicy: SCHED_OTHER 8 ptpSchedulingPriority: 10 9 recommend: 10 - profile: \"profile1\" 11 priority: 10 12 match: 13 - nodeLabel: \"node-role.kubernetes.io/worker\" 14 nodeName: \"compute-0.example.com\" 15", "oc create -f ordinary-clock-ptp-config.yaml", "oc get pods -n openshift-ptp -o wide", "NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-4xkbb 1/1 Running 0 43m 10.1.196.24 compute-0.example.com linuxptp-daemon-tdspf 1/1 Running 0 43m 10.1.196.25 compute-1.example.com ptp-operator-657bbb64c8-2f8sj 1/1 Running 0 43m 10.129.0.61 control-plane-1.example.com", "oc logs linuxptp-daemon-4xkbb -n openshift-ptp -c linuxptp-daemon-container", "I1115 09:41:17.117596 4143292 daemon.go:107] in applyNodePTPProfile I1115 09:41:17.117604 4143292 daemon.go:109] updating NodePTPProfile to: I1115 09:41:17.117607 4143292 daemon.go:110] ------------------------------------ I1115 09:41:17.117612 4143292 daemon.go:102] Profile Name: profile1 I1115 09:41:17.117616 4143292 daemon.go:102] Interface: ens787f1 I1115 09:41:17.117620 4143292 daemon.go:102] Ptp4lOpts: -s -2 I1115 09:41:17.117623 4143292 daemon.go:102] Phc2sysOpts: -a -r I1115 09:41:17.117626 4143292 daemon.go:116] ------------------------------------", "apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: boundary-clock-ptp-config 1 namespace: openshift-ptp spec: profile: 2 - name: \"profile1\" 3 interface: \"\" 4 ptp4lOpts: \"-2\" 5 ptp4lConf: | 6 [ens1f0] 7 masterOnly 0 [ens1f3] 8 masterOnly 1 [global] # # Default Data Set # twoStepFlag 1 #slaveOnly 1 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 248 clockAccuracy 0xFE offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # 
logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval -4 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval 4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 10 #was 1 (default !) unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval -4 kernel_leap 1 check_fup_sync 0 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 max_frequency 900000000 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type BC network_transport UDPv4 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 9 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0xA0 phc2sysOpts: \"-a -r\" 10 ptpSchedulingPolicy: SCHED_OTHER 11 ptpSchedulingPriority: 10 12 recommend: 13 - profile: \"profile1\" 14 priority: 10 15 match: 16 - nodeLabel: \"node-role.kubernetes.io/worker\" 17 nodeName: \"compute-0.example.com\" 18", "oc create -f boundary-clock-ptp-config.yaml", "oc get pods -n openshift-ptp -o wide", "NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-4xkbb 1/1 Running 0 43m 10.1.196.24 compute-0.example.com linuxptp-daemon-tdspf 1/1 Running 0 43m 10.1.196.25 compute-1.example.com ptp-operator-657bbb64c8-2f8sj 1/1 Running 0 43m 10.129.0.61 control-plane-1.example.com", "oc logs linuxptp-daemon-4xkbb -n openshift-ptp -c linuxptp-daemon-container", "I1115 09:41:17.117596 4143292 daemon.go:107] in applyNodePTPProfile I1115 09:41:17.117604 4143292 daemon.go:109] updating NodePTPProfile to: I1115 09:41:17.117607 4143292 daemon.go:110] ------------------------------------ I1115 09:41:17.117612 4143292 daemon.go:102] Profile Name: profile1 I1115 09:41:17.117616 4143292 daemon.go:102] Interface: I1115 09:41:17.117620 4143292 daemon.go:102] Ptp4lOpts: -2 I1115 09:41:17.117623 4143292 daemon.go:102] Phc2sysOpts: -a -r I1115 09:41:17.117626 4143292 daemon.go:116] ------------------------------------", "oc edit PtpConfig -n openshift-ptp", "apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: <ptp_config_name> namespace: openshift-ptp spec: profile: - name: \"profile1\" ptpSchedulingPolicy: SCHED_FIFO 1 ptpSchedulingPriority: 10 2", "oc get pods -n openshift-ptp -o wide", "NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-gmv2n 3/3 Running 0 1d17h 10.1.196.24 compute-0.example.com linuxptp-daemon-lgm55 3/3 Running 0 1d17h 10.1.196.25 compute-1.example.com ptp-operator-3r4dcvf7f4-zndk7 1/1 Running 0 1d7h 10.129.0.61 control-plane-1.example.com", "oc -n openshift-ptp logs linuxptp-daemon-lgm55 -c linuxptp-daemon-container|grep chrt", "I1216 19:24:57.091872 1600715 daemon.go:285] /bin/chrt -f 65 /usr/sbin/ptp4l -f /var/run/ptp4l.0.config -2 --summary_interval -4 -m", "oc get pods -n 
openshift-ptp -o wide", "NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-lmvgn 3/3 Running 0 4d17h 10.1.196.24 compute-0.example.com linuxptp-daemon-qhfg7 3/3 Running 0 4d17h 10.1.196.25 compute-1.example.com ptp-operator-6b8dcbf7f4-zndk7 1/1 Running 0 5d7h 10.129.0.61 control-plane-1.example.com", "oc -n openshift-ptp get nodeptpdevices.ptp.openshift.io", "NAME AGE control-plane-0.example.com 10d control-plane-1.example.com 10d compute-0.example.com 10d compute-1.example.com 10d compute-2.example.com 10d", "oc -n openshift-ptp get nodeptpdevices.ptp.openshift.io <node_name> -o yaml", "apiVersion: ptp.openshift.io/v1 kind: NodePtpDevice metadata: creationTimestamp: \"2021-09-14T16:52:33Z\" generation: 1 name: compute-0.example.com namespace: openshift-ptp resourceVersion: \"177400\" uid: 30413db0-4d8d-46da-9bef-737bacd548fd spec: {} status: devices: - name: eno1 - name: eno2 - name: eno3 - name: eno4 - name: enp5s0f0 - name: enp5s0f1", "oc get pods -n openshift-ptp -o wide", "NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-lmvgn 3/3 Running 0 4d17h 10.1.196.24 compute-0.example.com linuxptp-daemon-qhfg7 3/3 Running 0 4d17h 10.1.196.25 compute-1.example.com ptp-operator-6b8dcbf7f4-zndk7 1/1 Running 0 5d7h 10.129.0.61 control-plane-1.example.com", "oc rsh -n openshift-ptp -c linuxptp-daemon-container <linux_daemon_container>", "pmc -u -f /var/run/ptp4l.0.config -b 0 'GET PORT_DATA_SET'", "sending: GET PORT_DATA_SET 40a6b7.fffe.166ef0-1 seq 0 RESPONSE MANAGEMENT PORT_DATA_SET portIdentity 40a6b7.fffe.166ef0-1 portState SLAVE logMinDelayReqInterval -4 peerMeanPathDelay 0 logAnnounceInterval -3 announceReceiptTimeout 3 logSyncInterval -4 delayMechanism 1 logMinPdelayReqInterval -4 versionNumber 2", "oc get pods -n amq-interconnect", "NAME READY STATUS RESTARTS AGE amq-interconnect-645db76c76-k8ghs 1/1 Running 0 23h interconnect-operator-5cb5fc7cc-4v7qm 1/1 Running 0 23h", "oc get pods -n openshift-ptp", "NAME READY STATUS RESTARTS AGE linuxptp-daemon-2t78p 3/3 Running 0 12h linuxptp-daemon-k8n88 3/3 Running 0 12h", "oc edit PtpOperatorConfig default -n openshift-ptp", "spec: daemonNodeSelector: node-role.kubernetes.io/worker: \"\" ptpEventConfig: enableEventPublisher: true 1 transportHost: amqp://<instance_name>.<namespace>.svc.cluster.local 2", "apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: example-ptpconfig namespace: openshift-ptp spec: profile: - name: \"profile1\" interface: \"enp5s0f0\" ptp4lOpts: \"-2 -s --summary_interval -4\" 1 phc2sysOpts: \"-a -r -m -n 24 -N 8 -R 16\" 2 ptp4lConf: \"\" 3 ptpClockThreshold: 4 holdOverTimeout: 5 maxOffsetThreshold: 100 minOffsetThreshold: -100", "[ { \"id\": \"75b1ad8f-c807-4c23-acf5-56f4b7ee3826\", \"endpointUri\": \"http://localhost:9089/event\", \"uriLocation\": \"http://localhost:8089/api/cloudNotifications/v1/subscriptions/75b1ad8f-c807-4c23-acf5-56f4b7ee3826\", \"resource\": \"/cluster/node/compute-1.example.com/ptp\" } ]", "{ \"uriLocation\": \"http://localhost:8089/api/cloudNotifications/v1/subscriptions\", \"resource\": \"/cluster/node/compute-1.example.com/ptp\" }", "{ \"id\":\"48210fb3-45be-4ce0-aa9b-41a0e58730ab\", \"endpointUri\": \"http://localhost:9089/event\", \"uriLocation\":\"http://localhost:8089/api/cloudNotifications/v1/subscriptions/48210fb3-45be-4ce0-aa9b-41a0e58730ab\", \"resource\":\"/cluster/node/compute-1.example.com/ptp\" }", "{\"status\":\"ping sent\"}", "OK", "oc get pods -n openshift-ptp", "NAME READY STATUS RESTARTS AGE linuxptp-daemon-2t78p 3/3 Running 0 8h linuxptp-daemon-k8n88 3/3 
Running 0 8h", "oc exec -it <linuxptp-daemon> -n openshift-ptp -c cloud-event-proxy -- curl 127.0.0.1:9091/metrics", "HELP cne_amqp_events_published Metric to get number of events published by the transport TYPE cne_amqp_events_published gauge cne_amqp_events_published{address=\"/cluster/node/compute-1.example.com/ptp/status\",status=\"success\"} 1041 HELP cne_amqp_events_received Metric to get number of events received by the transport TYPE cne_amqp_events_received gauge cne_amqp_events_received{address=\"/cluster/node/compute-1.example.com/ptp\",status=\"success\"} 1019 HELP cne_amqp_receiver Metric to get number of receiver created TYPE cne_amqp_receiver gauge cne_amqp_receiver{address=\"/cluster/node/mock\",status=\"active\"} 1 cne_amqp_receiver{address=\"/cluster/node/compute-1.example.com/ptp\",status=\"active\"} 1 cne_amqp_receiver{address=\"/cluster/node/compute-1.example.com/redfish/event\",status=\"active\"}", "oc exec -it <linuxptp_daemon_pod> -n openshift-ptp -c cloud-event-proxy -- curl 127.0.0.1:9091/metrics" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/networking/using-ptp
Chapter 3. Managing user accounts using the IdM Web UI
Chapter 3. Managing user accounts using the IdM Web UI Identity Management (IdM) provides several stages that can help you to manage various user life cycle situations: Creating a user account Create a stage user account before an employee starts their career in your company so that you are prepared in advance for the day when the employee appears in the office and you want to activate the account. You can omit this step and create the active user account directly. The procedure is similar to creating a stage user account. Activating a user account Activate the account on the employee's first working day. Disabling a user account If the user goes on parental leave for a couple of months, you need to disable the account temporarily. Enabling a user account When the user returns, you need to re-enable the account. Preserving a user account If the user leaves the company, you need to delete the account with the possibility to restore it, because people can return to the company after some time. Restoring a user account Two years later, the user is back and you need to restore the preserved account. Deleting a user account If the employee is dismissed, delete the account without a backup. 3.1. User life cycle Identity Management (IdM) supports three user account states: Stage users are not allowed to authenticate. This is an initial state. Some of the user account properties required for active users cannot be set, for example, group membership. Active users are allowed to authenticate. All required user account properties must be set in this state. Preserved users are former active users that are considered inactive and cannot authenticate to IdM. Preserved users retain most of the account properties they had as active users, but they are not part of any user groups. You can delete user entries permanently from the IdM database. Important Deleted user accounts cannot be restored. When you delete a user account, all the information associated with the account is permanently lost. A new administrator can only be created by a user with administrator rights, such as the default admin user. If you accidentally delete all administrator accounts, the Directory Manager must create a new administrator manually in the Directory Server. Warning Do not delete the admin user. As admin is a pre-defined user required by IdM, this operation causes problems with certain commands. If you want to define and use an alternative admin user, disable the pre-defined admin user with ipa user-disable admin after you have granted admin permissions to at least one other user. Warning Do not add local users to IdM. The Name Service Switch (NSS) always resolves IdM users and groups before resolving local users and groups. This means that, for example, IdM group membership does not work for local users. 3.2. Adding users in the Web UI Usually, you need to create a new user account before a new employee starts to work. Such a stage account is not accessible and you need to activate it later. Note Alternatively, you can create an active user account directly. To add an active user, follow the procedure below and add the user account in the Active users tab. Prerequisites Administrator privileges for managing IdM or User Administrator role. Procedure Log in to the IdM Web UI. Go to Users Stage Users tab. Alternatively, you can add the user account in the Users Active users tab; however, you cannot add user groups to the account. Click the + Add icon.
In the Add stage user dialog box, enter the First name and Last name of the new user. Optional: In the User login field, add a login name. If you leave it empty, the IdM server creates the login name in the following pattern: the first letter of the first name and the surname. The whole login name can have up to 32 characters. Optional: In the GID drop down menu, select groups in which the user should be included. Optional: In the Password and Verify password fields, enter your password and confirm it, ensuring they both match. Click on the Add button. At this point, you can see the user account in the Stage Users table. Note If you click on the user name, you can edit advanced settings, such as adding a phone number, address, or occupation. Warning IdM automatically assigns a unique user ID (UID) to new user accounts. You can assign a UID manually, or even modify an already existing UID. However, the server does not validate whether the new UID number is unique. Consequently, multiple user entries might have the same UID number assigned. A similar problem can occur with user private group IDs (GIDs) if you assign GIDs to user accounts manually. You can use the ipa user-find --uid=<uid> or ipa user-find --gidnumber=<gidnumber> commands on the IdM CLI to check whether you have multiple user entries with the same ID. Red Hat recommends that you do not have multiple entries with the same UIDs or GIDs. If you have objects with duplicate IDs, security identifiers (SIDs) are not generated correctly. SIDs are crucial for trusts between IdM and Active Directory and for Kerberos authentication to work correctly. Additional resources Strengthening Kerberos security with PAC information Are user/group collisions supported in Red Hat Enterprise Linux? (Red Hat Knowledgebase) Users without SIDs cannot log in to IdM after an upgrade 3.3. Activating stage users in the IdM Web UI You must follow this procedure to activate a stage user account before the user can log in to IdM and before the user can be added to an IdM group. Prerequisites Administrator privileges for managing the IdM Web UI or User Administrator role. At least one staged user account in IdM. Procedure Log in to the IdM Web UI. Go to Users Stage users tab. Click the check-box of the user account you want to activate. Click on the Activate button. In the Confirmation dialog box, click OK. If the activation is successful, the IdM Web UI displays a green confirmation that the user has been activated and the user account has been moved to Active users. The account is active and the user can authenticate to the IdM domain and IdM Web UI. The user is prompted to change their password on the first login. Note At this stage, you can add the active user account to user groups. 3.4. Disabling user accounts in the Web UI You can disable active user accounts. Disabling a user account deactivates the account. Therefore, a disabled user account cannot be used to authenticate, to use IdM services such as Kerberos, or to perform any tasks. Disabled user accounts still exist within IdM and all of the associated information remains unchanged. Unlike preserved user accounts, disabled user accounts remain in the active state and can be members of user groups. Note After disabling a user account, any existing connections remain valid until the user's Kerberos TGT and other tickets expire. After the ticket expires, the user will not be able to renew it. Prerequisites Administrator privileges for managing the IdM Web UI or User Administrator role.
Procedure Log in to the IdM Web UI. Go to Users Active users tab. Click the check-box of the user accounts you want to disable. Click on the Disable button. In the Confirmation dialog box, click on the OK button. If the disabling procedure has been successful, you can verify this in the Status column of the Active users table. 3.5. Enabling user accounts in the Web UI With IdM you can enable disabled active user accounts. Enabling a user account activates the disabled account. Prerequisites Administrator privileges for managing the IdM Web UI or User Administrator role. Procedure Log in to the IdM Web UI. Go to Users Active users tab. Click the check-box of the user accounts you want to enable. Click on the Enable button. In the Confirmation dialog box, click on the OK button. If the change has been successful, you can verify this in the Status column of the Active users table. 3.6. Preserving active users in the IdM Web UI Preserving user accounts enables you to remove accounts from the Active users tab while keeping these accounts in IdM. Preserve the user account if the employee leaves the company. If you want to disable user accounts for a couple of weeks or months (parental leave, for example), disable the account. For details, see Disabling user accounts in the Web UI . Preserved accounts are not active and users cannot use them to access your internal network; however, the account stays in the database with all its data. You can move the preserved accounts back to the active mode. Note The list of users in the preserved state can provide a history of past user accounts. Prerequisites Administrator privileges for managing the IdM (Identity Management) Web UI or User Administrator role. Procedure Log in to the IdM Web UI. Go to Users Active users tab. Click the check-box of the user accounts you want to preserve. Click on the Delete button. In the Remove users dialog box, switch the Delete mode radio button to preserve . Click on the Delete button. As a result, the user account is moved to Preserved users . If you need to restore preserved users, see Restoring users in the IdM Web UI . 3.7. Restoring users in the IdM Web UI IdM (Identity Management) enables you to restore preserved user accounts back to the active state. You can restore a preserved user to an active user or a stage user. Prerequisites Administrator privileges for managing the IdM Web UI or User Administrator role. Procedure Log in to the IdM Web UI. Go to Users Preserved users tab. Click the check-box of the user accounts you want to restore. Click on the Restore button. In the Confirmation dialog box, click on the OK button. The IdM Web UI displays a green confirmation and moves the user accounts to the Active users tab. 3.8. Deleting users in the IdM Web UI Deleting users is an irreversible operation, causing the user accounts to be permanently deleted from the IdM database, including group memberships and passwords. Any external configuration for the user, such as the system account and home directory, is not deleted, but is no longer accessible through IdM. You can delete: Active users - the IdM Web UI offers you the following options: Preserving users temporarily For details, see Preserving active users in the IdM Web UI . Deleting them permanently Stage users - you can only delete stage users permanently. Preserved users - you can delete preserved users permanently. The following procedure describes deleting active users.
Similarly, you can delete user accounts on: The Stage users tab The Preserved users tab Prerequisites Administrator privileges for managing the IdM Web UI or User Administrator role. Procedure Log in to the IdM Web UI. Go to Users Active users tab. Alternatively, you can delete the user account in the Users Stage users or Users Preserved users tabs. Click the Delete icon. In the Remove users dialog box, switch the Delete mode radio button to delete . Click on the Delete button. The user accounts have been permanently deleted from IdM.
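Although this chapter describes the Web UI workflow, you can double-check the results from the IdM command line as well. The following is a minimal sketch, assuming you have a Kerberos ticket for an administrative user; the login name user1 and the ID 1234 are placeholders, not values from this document:
kinit admin
# Confirm that a deleted account no longer exists
ipa user-show user1
# Check for duplicate UID or GID assignments, as recommended earlier in this chapter
ipa user-find --uid=1234
ipa user-find --gidnumber=1234
If the account was deleted, ipa user-show reports an error that the user was not found.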
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_idm_users_groups_hosts_and_access_control_rules/managing-user-accounts-using-the-idm-web-ui_managing-users-groups-hosts
3.4. Running converted virtual machines
3.4. Running converted virtual machines On successful completion, virt-v2v will create a new libvirt domain for the converted virtual machine with the same name as the original virtual machine. It can be started as usual using libvirt tools, for example virt-manager . Note virt-v2v cannot currently reconfigure a virtual machine's network configuration. If the converted virtual machine is not connected to the same subnet as the source, the virtual machine's network configuration may have to be updated manually.
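For example, to start the converted virtual machine from the command line instead of virt-manager, you can use virsh. The domain name guest_name below is a placeholder for the name of your converted virtual machine:
# List all libvirt domains, including inactive ones
virsh list --all
# Start the converted guest
virsh start guest_name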
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/v2v_guide/sect-rhel-running_converted_virtual_machines
Chapter 10. Red Hat build of OptaPlanner on Red Hat build of Quarkus: a school timetable quick start guide
Chapter 10. Red Hat build of OptaPlanner on Red Hat build of Quarkus: a school timetable quick start guide This guide walks you through the process of creating a Red Hat build of Quarkus application with Red Hat build of OptaPlanner's constraint solving artificial intelligence (AI). You will build a REST application that optimizes a school timetable for students and teachers. Your service will assign Lesson instances to Timeslot and Room instances automatically by using AI to adhere to the following hard and soft scheduling constraints: A room can have at most one lesson at the same time. A teacher can teach at most one lesson at the same time. A student can attend at most one lesson at the same time. A teacher prefers to teach in a single room. A teacher prefers to teach sequential lessons and dislikes gaps between lessons. Mathematically speaking, school timetabling is an NP-hard problem. That means it is difficult to scale. Simply iterating through all possible combinations with brute force would take millions of years for a non-trivial data set, even on a supercomputer. Fortunately, AI constraint solvers such as Red Hat build of OptaPlanner have advanced algorithms that deliver a near-optimal solution in a reasonable amount of time. What is considered to be a reasonable amount of time is subjective and depends on the goals of your problem. Prerequisites OpenJDK 11 or later is installed. Red Hat build of Open JDK is available from the Software Downloads page in the Red Hat Customer Portal (login required). Apache Maven 3.6 or higher is installed. Maven is available from the Apache Maven Project website. An IDE, such as IntelliJ IDEA, VS Code, Eclipse, or NetBeans, is available. 10.1. Creating an OptaPlanner Red Hat build of Quarkus Maven project using the Maven plug-in You can get up and running with a Red Hat build of OptaPlanner and Quarkus application using Apache Maven and the Quarkus Maven plug-in. Prerequisites OpenJDK 11 or later is installed. Red Hat build of Open JDK is available from the Software Downloads page in the Red Hat Customer Portal (login required). Apache Maven 3.6 or higher is installed. Maven is available from the Apache Maven Project website. Procedure In a command terminal, enter the following command to verify that Maven is using JDK 11 and that the Maven version is 3.6 or higher: If the preceding command does not return JDK 11, add the path to JDK 11 to the PATH environment variable and enter the preceding command again. To generate a Quarkus OptaPlanner quickstart project, enter the following command: mvn com.redhat.quarkus.platform:quarkus-maven-plugin:2.13.Final-redhat-00006:create \ -DprojectGroupId=com.example \ -DprojectArtifactId=optaplanner-quickstart \ -Dextensions="resteasy,resteasy-jackson,optaplanner-quarkus,optaplanner-quarkus-jackson" \ -DplatformGroupId=com.redhat.quarkus.platform -DplatformVersion=2.13.Final-redhat-00006 \ -DnoExamples This command creates the following elements in the ./optaplanner-quickstart directory: The Maven structure Example Dockerfile file in src/main/docker The application configuration file Table 10.1. Properties used in the mvn io.quarkus:quarkus-maven-plugin:2.13.Final-redhat-00006:create command Property Description projectGroupId The group ID of the project. projectArtifactId The artifact ID of the project. extensions A comma-separated list of Quarkus extensions to use with this project. For a full list of Quarkus extensions, enter mvn quarkus:list-extensions on the command line.
noExamples Creates a project with the project structure but without tests or classes. The values of the projectGroupID and the projectArtifactID properties are used to generate the project version. The default project version is 1.0.0-SNAPSHOT . To view your OptaPlanner project, change directory to the OptaPlanner Quickstarts directory: Review the pom.xml file. The content should be similar to the following example: 10.2. Model the domain objects The goal of the Red Hat build of OptaPlanner timetable project is to assign each lesson to a time slot and a room. To do this, add three classes, Timeslot , Lesson , and Room , as shown in the following diagram: Timeslot The Timeslot class represents a time interval when lessons are taught, for example, Monday 10:30 - 11:30 or Tuesday 13:30 - 14:30 . In this example, all time slots have the same duration and there are no time slots during lunch or other breaks. A time slot has no date because a high school schedule just repeats every week. There is no need for continuous planning . A timeslot is called a problem fact because no Timeslot instances change during solving. Such classes do not require any OptaPlanner-specific annotations. Room The Room class represents a location where lessons are taught, for example, Room A or Room B . In this example, all rooms are without capacity limits and they can accommodate all lessons. Room instances do not change during solving so Room is also a problem fact . Lesson During a lesson, represented by the Lesson class, a teacher teaches a subject to a group of students, for example, Math by A.Turing for 9th grade or Chemistry by M.Curie for 10th grade . If a subject is taught multiple times each week by the same teacher to the same student group, there are multiple Lesson instances that are only distinguishable by id . For example, the 9th grade has six math lessons a week. During solving, OptaPlanner changes the timeslot and room fields of the Lesson class to assign each lesson to a time slot and a room. Because OptaPlanner changes these fields, Lesson is a planning entity : Most of the fields in the diagram contain input data, except for the orange fields. A lesson's timeslot and room fields are unassigned ( null ) in the input data and assigned (not null ) in the output data. OptaPlanner changes these fields during solving. Such fields are called planning variables. In order for OptaPlanner to recognize them, both the timeslot and room fields require an @PlanningVariable annotation. Their containing class, Lesson , requires an @PlanningEntity annotation. Procedure Create the src/main/java/com/example/domain/Timeslot.java class: package com.example.domain; import java.time.DayOfWeek; import java.time.LocalTime; public class Timeslot { private DayOfWeek dayOfWeek; private LocalTime startTime; private LocalTime endTime; private Timeslot() { } public Timeslot(DayOfWeek dayOfWeek, LocalTime startTime, LocalTime endTime) { this.dayOfWeek = dayOfWeek; this.startTime = startTime; this.endTime = endTime; } @Override public String toString() { return dayOfWeek + " " + startTime.toString(); } // ******************************** // Getters and setters // ******************************** public DayOfWeek getDayOfWeek() { return dayOfWeek; } public LocalTime getStartTime() { return startTime; } public LocalTime getEndTime() { return endTime; } } Notice the toString() method keeps the output short so it is easier to read OptaPlanner's DEBUG or TRACE log, as shown later. 
Create the src/main/java/com/example/domain/Room.java class: package com.example.domain; public class Room { private String name; private Room() { } public Room(String name) { this.name = name; } @Override public String toString() { return name; } // ******************************** // Getters and setters // ******************************** public String getName() { return name; } } Create the src/main/java/com/example/domain/Lesson.java class: package com.example.domain; import org.optaplanner.core.api.domain.entity.PlanningEntity; import org.optaplanner.core.api.domain.variable.PlanningVariable; @PlanningEntity public class Lesson { private Long id; private String subject; private String teacher; private String studentGroup; @PlanningVariable(valueRangeProviderRefs = "timeslotRange") private Timeslot timeslot; @PlanningVariable(valueRangeProviderRefs = "roomRange") private Room room; private Lesson() { } public Lesson(Long id, String subject, String teacher, String studentGroup) { this.id = id; this.subject = subject; this.teacher = teacher; this.studentGroup = studentGroup; } @Override public String toString() { return subject + "(" + id + ")"; } // ******************************** // Getters and setters // ******************************** public Long getId() { return id; } public String getSubject() { return subject; } public String getTeacher() { return teacher; } public String getStudentGroup() { return studentGroup; } public Timeslot getTimeslot() { return timeslot; } public void setTimeslot(Timeslot timeslot) { this.timeslot = timeslot; } public Room getRoom() { return room; } public void setRoom(Room room) { this.room = room; } } The Lesson class has an @PlanningEntity annotation, so OptaPlanner knows that this class changes during solving because it contains one or more planning variables. The timeslot field has an @PlanningVariable annotation, so OptaPlanner knows that it can change its value. In order to find potential Timeslot instances to assign to this field, OptaPlanner uses the valueRangeProviderRefs property to connect to a value range provider that provides a List<Timeslot> to pick from. See Section 10.4, "Gather the domain objects in a planning solution" for information about value range providers. The room field also has an @PlanningVariable annotation for the same reasons. 10.3. Define the constraints and calculate the score When solving a problem, a score represents the quality of a specific solution. The higher the score the better. Red Hat build of OptaPlanner looks for the best solution, which is the solution with the highest score found in the available time. It might be the optimal solution. Because the timetable example use case has hard and soft constraints, use the HardSoftScore class to represent the score: Hard constraints must not be broken. For example: A room can have at most one lesson at the same time. Soft constraints should not be broken. For example: A teacher prefers to teach in a single room. Hard constraints are weighted against other hard constraints. Soft constraints are weighted against other soft constraints. Hard constraints always outweigh soft constraints, regardless of their respective weights. 
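To make the relationship between the two score levels concrete, the following minimal Java sketch (an illustration only, not part of the quickstart code) shows that a score with a better hard level always wins, regardless of the soft level:
import org.optaplanner.core.api.score.buildin.hardsoft.HardSoftScore;

public class ScoreComparisonExample {
    public static void main(String[] args) {
        HardSoftScore feasible = HardSoftScore.of(0, -7);    // 0hard/-7soft
        HardSoftScore infeasible = HardSoftScore.of(-2, -3); // -2hard/-3soft
        // The hard level is compared first, so the feasible score is the better one
        System.out.println(feasible.compareTo(infeasible) > 0); // true
        System.out.println(feasible.isFeasible());               // true
        System.out.println(infeasible.isFeasible());              // false
    }
}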
To calculate the score, you could implement an EasyScoreCalculator class: public class TimeTableEasyScoreCalculator implements EasyScoreCalculator<TimeTable> { @Override public HardSoftScore calculateScore(TimeTable timeTable) { List<Lesson> lessonList = timeTable.getLessonList(); int hardScore = 0; for (Lesson a : lessonList) { for (Lesson b : lessonList) { if (a.getTimeslot() != null && a.getTimeslot().equals(b.getTimeslot()) && a.getId() < b.getId()) { // A room can accommodate at most one lesson at the same time. if (a.getRoom() != null && a.getRoom().equals(b.getRoom())) { hardScore--; } // A teacher can teach at most one lesson at the same time. if (a.getTeacher().equals(b.getTeacher())) { hardScore--; } // A student can attend at most one lesson at the same time. if (a.getStudentGroup().equals(b.getStudentGroup())) { hardScore--; } } } } int softScore = 0; // Soft constraints are only implemented in the "complete" implementation return HardSoftScore.of(hardScore, softScore); } } Unfortunately, this solution does not scale well because it is non-incremental: every time a lesson is assigned to a different time slot or room, all lessons are re-evaluated to calculate the new score. A better solution is to create a src/main/java/com/example/solver/TimeTableConstraintProvider.java class to perform incremental score calculation. This class uses OptaPlanner's ConstraintStream API which is inspired by Java 8 Streams and SQL. The ConstraintProvider scales an order of magnitude better than the EasyScoreCalculator : O (n) instead of O (n2). Procedure Create the following src/main/java/com/example/solver/TimeTableConstraintProvider.java class: package com.example.solver; import com.example.domain.Lesson; import org.optaplanner.core.api.score.buildin.hardsoft.HardSoftScore; import org.optaplanner.core.api.score.stream.Constraint; import org.optaplanner.core.api.score.stream.ConstraintFactory; import org.optaplanner.core.api.score.stream.ConstraintProvider; import org.optaplanner.core.api.score.stream.Joiners; public class TimeTableConstraintProvider implements ConstraintProvider { @Override public Constraint[] defineConstraints(ConstraintFactory constraintFactory) { return new Constraint[] { // Hard constraints roomConflict(constraintFactory), teacherConflict(constraintFactory), studentGroupConflict(constraintFactory), // Soft constraints are only implemented in the "complete" implementation }; } private Constraint roomConflict(ConstraintFactory constraintFactory) { // A room can accommodate at most one lesson at the same time. // Select a lesson ... return constraintFactory.from(Lesson.class) // ... and pair it with another lesson ... .join(Lesson.class, // ... in the same timeslot ... Joiners.equal(Lesson::getTimeslot), // ... in the same room ... Joiners.equal(Lesson::getRoom), // ... and the pair is unique (different id, no reverse pairs) Joiners.lessThan(Lesson::getId)) // then penalize each pair with a hard weight. .penalize("Room conflict", HardSoftScore.ONE_HARD); } private Constraint teacherConflict(ConstraintFactory constraintFactory) { // A teacher can teach at most one lesson at the same time. return constraintFactory.from(Lesson.class) .join(Lesson.class, Joiners.equal(Lesson::getTimeslot), Joiners.equal(Lesson::getTeacher), Joiners.lessThan(Lesson::getId)) .penalize("Teacher conflict", HardSoftScore.ONE_HARD); } private Constraint studentGroupConflict(ConstraintFactory constraintFactory) { // A student can attend at most one lesson at the same time. 
return constraintFactory.from(Lesson.class) .join(Lesson.class, Joiners.equal(Lesson::getTimeslot), Joiners.equal(Lesson::getStudentGroup), Joiners.lessThan(Lesson::getId)) .penalize("Student group conflict", HardSoftScore.ONE_HARD); } } 10.4. Gather the domain objects in a planning solution A TimeTable instance wraps all Timeslot , Room , and Lesson instances of a single dataset. Furthermore, because it contains all lessons, each with a specific planning variable state, it is a planning solution and it has a score: If lessons are still unassigned, then it is an uninitialized solution, for example, a solution with the score -4init/0hard/0soft . If it breaks hard constraints, then it is an infeasible solution, for example, a solution with the score -2hard/-3soft . If it adheres to all hard constraints, then it is a feasible solution, for example, a solution with the score 0hard/-7soft . The TimeTable class has an @PlanningSolution annotation, so Red Hat build of OptaPlanner knows that this class contains all of the input and output data. Specifically, this class is the input of the problem: A timeslotList field with all time slots This is a list of problem facts, because they do not change during solving. A roomList field with all rooms This is a list of problem facts, because they do not change during solving. A lessonList field with all lessons This is a list of planning entities because they change during solving. Of each Lesson : The values of the timeslot and room fields are typically still null , so unassigned. They are planning variables. The other fields, such as subject , teacher and studentGroup , are filled in. These fields are problem properties. However, this class is also the output of the solution: A lessonList field for which each Lesson instance has non-null timeslot and room fields after solving A score field that represents the quality of the output solution, for example, 0hard/-5soft Procedure Create the src/main/java/com/example/domain/TimeTable.java class: package com.example.domain; import java.util.List; import org.optaplanner.core.api.domain.solution.PlanningEntityCollectionProperty; import org.optaplanner.core.api.domain.solution.PlanningScore; import org.optaplanner.core.api.domain.solution.PlanningSolution; import org.optaplanner.core.api.domain.solution.ProblemFactCollectionProperty; import org.optaplanner.core.api.domain.valuerange.ValueRangeProvider; import org.optaplanner.core.api.score.buildin.hardsoft.HardSoftScore; @PlanningSolution public class TimeTable { @ValueRangeProvider(id = "timeslotRange") @ProblemFactCollectionProperty private List<Timeslot> timeslotList; @ValueRangeProvider(id = "roomRange") @ProblemFactCollectionProperty private List<Room> roomList; @PlanningEntityCollectionProperty private List<Lesson> lessonList; @PlanningScore private HardSoftScore score; private TimeTable() { } public TimeTable(List<Timeslot> timeslotList, List<Room> roomList, List<Lesson> lessonList) { this.timeslotList = timeslotList; this.roomList = roomList; this.lessonList = lessonList; } // ******************************** // Getters and setters // ******************************** public List<Timeslot> getTimeslotList() { return timeslotList; } public List<Room> getRoomList() { return roomList; } public List<Lesson> getLessonList() { return lessonList; } public HardSoftScore getScore() { return score; } } The value range providers The timeslotList field is a value range provider. 
It holds the Timeslot instances which OptaPlanner can pick from to assign to the timeslot field of Lesson instances. The timeslotList field has an @ValueRangeProvider annotation to connect those two, by matching the id with the valueRangeProviderRefs of the @PlanningVariable in the Lesson . Following the same logic, the roomList field also has an @ValueRangeProvider annotation. The problem fact and planning entity properties Furthermore, OptaPlanner needs to know which Lesson instances it can change as well as how to retrieve the Timeslot and Room instances used for score calculation by your TimeTableConstraintProvider . The timeslotList and roomList fields have an @ProblemFactCollectionProperty annotation, so your TimeTableConstraintProvider can select from those instances. The lessonList has an @PlanningEntityCollectionProperty annotation, so OptaPlanner can change them during solving and your TimeTableConstraintProvider can select from those too. 10.5. Create the solver service Solving planning problems on REST threads causes HTTP timeout issues. Therefore, the Quarkus extension injects a SolverManager, which runs solvers in a separate thread pool and can solve multiple data sets in parallel. Procedure Create the src/main/java/org/acme/optaplanner/rest/TimeTableResource.java class: package org.acme.optaplanner.rest; import java.util.UUID; import java.util.concurrent.ExecutionException; import javax.inject.Inject; import javax.ws.rs.POST; import javax.ws.rs.Path; import org.acme.optaplanner.domain.TimeTable; import org.optaplanner.core.api.solver.SolverJob; import org.optaplanner.core.api.solver.SolverManager; @Path("/timeTable") public class TimeTableResource { @Inject SolverManager<TimeTable, UUID> solverManager; @POST @Path("/solve") public TimeTable solve(TimeTable problem) { UUID problemId = UUID.randomUUID(); // Submit the problem to start solving SolverJob<TimeTable, UUID> solverJob = solverManager.solve(problemId, problem); TimeTable solution; try { // Wait until the solving ends solution = solverJob.getFinalBestSolution(); } catch (InterruptedException | ExecutionException e) { throw new IllegalStateException("Solving failed.", e); } return solution; } } This initial implementation waits for the solver to finish, which can still cause an HTTP timeout. The complete implementation avoids HTTP timeouts much more elegantly. 10.6. Set the solver termination time If your planning application does not have a termination setting or a termination event, it theoretically runs forever and in reality eventually causes an HTTP timeout error. To prevent this from occurring, use the optaplanner.solver.termination.spent-limit parameter to specify the length of time after which the application terminates. In most applications, set the time to at least five minutes ( 5m ). However, in the Timetable example, limit the solving time to five seconds, which is short enough to avoid the HTTP timeout. Procedure Create the src/main/resources/application.properties file with the following content: quarkus.optaplanner.solver.termination.spent-limit=5s 10.7. Running the school timetable application After you have created the school timetable project, run it in development mode. In development mode, you can update the application sources and configurations while your application is running. Your changes will appear in the running application. Prerequisites You have created the school timetable project. 
Procedure To compile the application in development mode, enter the following command from the project directory: Test the REST service. You can use any REST client. The following example uses the Linux command curl to send a POST request: After the time period specified in termination spent time defined in your application.properties file, the service returns output similar to the following example: Notice that your application assigned all four lessons to one of the two time slots and one of the two rooms. Also notice that it conforms to all hard constraints. For example, M. Curie's two lessons are in different time slots. To review what OptaPlanner did during the solving time, review the info log on the server side. The following is sample info log output: 10.8. Testing the application A good application includes test coverage. Test the constraints and the solver in your timetable project. 10.8.1. Test the school timetable constraints To test each constraint of the timetable project in isolation, use a ConstraintVerifier in unit tests. This tests each constraint's corner cases in isolation from the other tests, which lowers maintenance when adding a new constraint with proper test coverage. This test verifies that the constraint TimeTableConstraintProvider::roomConflict , when given three lessons in the same room and two of the lessons have the same timeslot, penalizes with a match weight of 1. So if the constraint weight is 10hard it reduces the score by -10hard . Procedure Create the src/test/java/org/acme/optaplanner/solver/TimeTableConstraintProviderTest.java class: package org.acme.optaplanner.solver; import java.time.DayOfWeek; import java.time.LocalTime; import javax.inject.Inject; import io.quarkus.test.junit.QuarkusTest; import org.acme.optaplanner.domain.Lesson; import org.acme.optaplanner.domain.Room; import org.acme.optaplanner.domain.TimeTable; import org.acme.optaplanner.domain.Timeslot; import org.junit.jupiter.api.Test; import org.optaplanner.test.api.score.stream.ConstraintVerifier; @QuarkusTest class TimeTableConstraintProviderTest { private static final Room ROOM = new Room("Room1"); private static final Timeslot TIMESLOT1 = new Timeslot(DayOfWeek.MONDAY, LocalTime.of(9,0), LocalTime.NOON); private static final Timeslot TIMESLOT2 = new Timeslot(DayOfWeek.TUESDAY, LocalTime.of(9,0), LocalTime.NOON); @Inject ConstraintVerifier<TimeTableConstraintProvider, TimeTable> constraintVerifier; @Test void roomConflict() { Lesson firstLesson = new Lesson(1, "Subject1", "Teacher1", "Group1"); Lesson conflictingLesson = new Lesson(2, "Subject2", "Teacher2", "Group2"); Lesson nonConflictingLesson = new Lesson(3, "Subject3", "Teacher3", "Group3"); firstLesson.setRoom(ROOM); firstLesson.setTimeslot(TIMESLOT1); conflictingLesson.setRoom(ROOM); conflictingLesson.setTimeslot(TIMESLOT1); nonConflictingLesson.setRoom(ROOM); nonConflictingLesson.setTimeslot(TIMESLOT2); constraintVerifier.verifyThat(TimeTableConstraintProvider::roomConflict) .given(firstLesson, conflictingLesson, nonConflictingLesson) .penalizesBy(1); } } Notice how ConstraintVerifier ignores the constraint weight during testing even if those constraint weights are hardcoded in the ConstraintProvider . This is because constraint weights change regularly before going into production. This way, constraint weight tweaking does not break the unit tests. 10.8.2. Test the school timetable solver This example tests the Red Hat build of OptaPlanner school timetable project on Red Hat build of Quarkus. 
It uses a JUnit test to generate a test data set and send it to the TimeTableController to solve. Procedure Create the src/test/java/com/example/rest/TimeTableResourceTest.java class with the following content: package com.exmaple.optaplanner.rest; import java.time.DayOfWeek; import java.time.LocalTime; import java.util.ArrayList; import java.util.List; import javax.inject.Inject; import io.quarkus.test.junit.QuarkusTest; import com.exmaple.optaplanner.domain.Room; import com.exmaple.optaplanner.domain.Timeslot; import com.exmaple.optaplanner.domain.Lesson; import com.exmaple.optaplanner.domain.TimeTable; import com.exmaple.optaplanner.rest.TimeTableResource; import org.junit.jupiter.api.Test; import org.junit.jupiter.api.Timeout; import static org.junit.jupiter.api.Assertions.assertFalse; import static org.junit.jupiter.api.Assertions.assertNotNull; import static org.junit.jupiter.api.Assertions.assertTrue; @QuarkusTest public class TimeTableResourceTest { @Inject TimeTableResource timeTableResource; @Test @Timeout(600_000) public void solve() { TimeTable problem = generateProblem(); TimeTable solution = timeTableResource.solve(problem); assertFalse(solution.getLessonList().isEmpty()); for (Lesson lesson : solution.getLessonList()) { assertNotNull(lesson.getTimeslot()); assertNotNull(lesson.getRoom()); } assertTrue(solution.getScore().isFeasible()); } private TimeTable generateProblem() { List<Timeslot> timeslotList = new ArrayList<>(); timeslotList.add(new Timeslot(DayOfWeek.MONDAY, LocalTime.of(8, 30), LocalTime.of(9, 30))); timeslotList.add(new Timeslot(DayOfWeek.MONDAY, LocalTime.of(9, 30), LocalTime.of(10, 30))); timeslotList.add(new Timeslot(DayOfWeek.MONDAY, LocalTime.of(10, 30), LocalTime.of(11, 30))); timeslotList.add(new Timeslot(DayOfWeek.MONDAY, LocalTime.of(13, 30), LocalTime.of(14, 30))); timeslotList.add(new Timeslot(DayOfWeek.MONDAY, LocalTime.of(14, 30), LocalTime.of(15, 30))); List<Room> roomList = new ArrayList<>(); roomList.add(new Room("Room A")); roomList.add(new Room("Room B")); roomList.add(new Room("Room C")); List<Lesson> lessonList = new ArrayList<>(); lessonList.add(new Lesson(101L, "Math", "B. May", "9th grade")); lessonList.add(new Lesson(102L, "Physics", "M. Curie", "9th grade")); lessonList.add(new Lesson(103L, "Geography", "M. Polo", "9th grade")); lessonList.add(new Lesson(104L, "English", "I. Jones", "9th grade")); lessonList.add(new Lesson(105L, "Spanish", "P. Cruz", "9th grade")); lessonList.add(new Lesson(201L, "Math", "B. May", "10th grade")); lessonList.add(new Lesson(202L, "Chemistry", "M. Curie", "10th grade")); lessonList.add(new Lesson(203L, "History", "I. Jones", "10th grade")); lessonList.add(new Lesson(204L, "English", "P. Cruz", "10th grade")); lessonList.add(new Lesson(205L, "French", "M. Curie", "10th grade")); return new TimeTable(timeslotList, roomList, lessonList); } } This test verifies that after solving, all lessons are assigned to a time slot and a room. It also verifies that it found a feasible solution (no hard constraints broken). Add test properties to the src/main/resources/application.properties file: Normally, the solver finds a feasible solution in less than 200 milliseconds. Notice how the application.properties file overwrites the solver termination during tests to terminate as soon as a feasible solution (0hard/*soft) is found. This avoids hard coding a solver time, because the unit test might run on arbitrary hardware. This approach ensures that the test runs long enough to find a feasible solution, even on slow systems. 
But it does not run a millisecond longer than it strictly must, even on fast systems. 10.9. Logging After you complete the Red Hat build of OptaPlanner school timetable project, you can use logging information to help you fine-tune the constraints in the ConstraintProvider . Review the score calculation speed in the info log file to assess the impact of changes to your constraints. Run the application in debug mode to show every step that your application takes or use trace logging to log every step and every move. Procedure Run the school timetable application for a fixed amount of time, for example, five minutes. Review the score calculation speed in the log file as shown in the following example: Change a constraint, run the planning application again for the same amount of time, and review the score calculation speed recorded in the log file. Run the application in debug mode to log every step that the application makes: To run debug mode from the command line, use the -D system property. To permanently enable debug mode, add the following line to the application.properties file: quarkus.log.category."org.optaplanner".level=debug The following example shows output in the log file in debug mode: Use trace logging to show every step and every move for each step. 10.10. Integrating a database with your Quarkus OptaPlanner school timetable application After you create your Quarkus OptaPlanner school timetable application, you can integrate it with a database and create a web-based user interface to display the timetable. Prerequisites You have a Quarkus OptaPlanner school timetable application. Procedure Use Hibernate and Panache to store Timeslot , Room , and Lesson instances in a database. See Simplified Hibernate ORM with Panache for more information. Expose the instances through REST. For information, see Writing JSON REST Services . Update the TimeTableResource class to read and write a TimeTable instance in a single transaction: This example includes a TimeTable instance. However, you can enable multi-tenancy and handle TimeTable instances for multiple schools in parallel. The getTimeTable() method returns the latest timetable from the database. It uses the ScoreManager method, which is automatically injected, to calculate the score of that timetable and make it available to the UI. The solve() method starts a job to solve the current timetable and stores the time slot and room assignments in the database. It uses the SolverManager.solveAndListen() method to listen to intermediate best solutions and update the database accordingly. The UI uses this to show progress while the backend is still solving. Update the TimeTableResourceTest class to reflect that the solve() method returns immediately and to poll for the latest solution until the solver finishes solving: Build a web UI on top of these REST methods to provide a visual representation of the timetable. Review the quickstart source code . 10.11. Using Micrometer and Prometheus to monitor your school timetable OptaPlanner Quarkus application OptaPlanner exposes metrics through Micrometer , a metrics instrumentation library for Java applications. You can use Micrometer with Prometheus to monitor the OptaPlanner solver in the school timetable application. Prerequisites You have created the Quarkus OptaPlanner school timetable application. Prometheus is installed. For information about installing Prometheus, see the Prometheus website. 
Procedure Add the Micrometer Prometheus dependency to the school timetable pom.xml file: Start the school timetable application: Open http://localhost:8080/q/metrics in a web browser.
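The Micrometer Prometheus dependency mentioned in the procedure is not reproduced in this extract. A minimal sketch of the pom.xml entry is shown below; the coordinates are an assumption based on the Quarkus Micrometer Prometheus registry extension, so verify the exact artifact and version against the Red Hat build of Quarkus BOM you are using:
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-micrometer-registry-prometheus</artifactId>
</dependency>
With this extension on the class path, the OptaPlanner solver metrics are exposed together with the other application metrics at the /q/metrics endpoint, which Prometheus can scrape.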
[ "mvn --version", "mvn com.redhat.quarkus.platform:quarkus-maven-plugin:2.13.Final-redhat-00006:create -DprojectGroupId=com.example -DprojectArtifactId=optaplanner-quickstart -Dextensions=\"resteasy,resteasy-jackson,optaplanner-quarkus,optaplanner-quarkus-jackson\" -DplatformGroupId=com.redhat.quarkus.platform -DplatformVersion=2.13.Final-redhat-00006 -DnoExamples", "cd optaplanner-quickstart", "<dependencyManagement> <dependencies> <dependency> <groupId>io.quarkus.platform</groupId> <artifactId>quarkus-bom</artifactId> <version>2.13.Final-redhat-00006</version> <type>pom</type> <scope>import</scope> </dependency> <dependency> <groupId>io.quarkus.platform</groupId> <artifactId>quarkus-optaplanner-bom</artifactId> <version>2.13.Final-redhat-00006</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> <dependencies> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-resteasy</artifactId> </dependency> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-resteasy-jackson</artifactId> </dependency> <dependency> <groupId>org.optaplanner</groupId> <artifactId>optaplanner-quarkus</artifactId> </dependency> <dependency> <groupId>org.optaplanner</groupId> <artifactId>optaplanner-quarkus-jackson</artifactId> </dependency> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-junit5</artifactId> <scope>test</scope> </dependency> </dependencies>", "package com.example.domain; import java.time.DayOfWeek; import java.time.LocalTime; public class Timeslot { private DayOfWeek dayOfWeek; private LocalTime startTime; private LocalTime endTime; private Timeslot() { } public Timeslot(DayOfWeek dayOfWeek, LocalTime startTime, LocalTime endTime) { this.dayOfWeek = dayOfWeek; this.startTime = startTime; this.endTime = endTime; } @Override public String toString() { return dayOfWeek + \" \" + startTime.toString(); } // ******************************** // Getters and setters // ******************************** public DayOfWeek getDayOfWeek() { return dayOfWeek; } public LocalTime getStartTime() { return startTime; } public LocalTime getEndTime() { return endTime; } }", "package com.example.domain; public class Room { private String name; private Room() { } public Room(String name) { this.name = name; } @Override public String toString() { return name; } // ******************************** // Getters and setters // ******************************** public String getName() { return name; } }", "package com.example.domain; import org.optaplanner.core.api.domain.entity.PlanningEntity; import org.optaplanner.core.api.domain.variable.PlanningVariable; @PlanningEntity public class Lesson { private Long id; private String subject; private String teacher; private String studentGroup; @PlanningVariable(valueRangeProviderRefs = \"timeslotRange\") private Timeslot timeslot; @PlanningVariable(valueRangeProviderRefs = \"roomRange\") private Room room; private Lesson() { } public Lesson(Long id, String subject, String teacher, String studentGroup) { this.id = id; this.subject = subject; this.teacher = teacher; this.studentGroup = studentGroup; } @Override public String toString() { return subject + \"(\" + id + \")\"; } // ******************************** // Getters and setters // ******************************** public Long getId() { return id; } public String getSubject() { return subject; } public String getTeacher() { return teacher; } public String getStudentGroup() { return studentGroup; } public Timeslot getTimeslot() { return timeslot; } 
public void setTimeslot(Timeslot timeslot) { this.timeslot = timeslot; } public Room getRoom() { return room; } public void setRoom(Room room) { this.room = room; } }", "public class TimeTableEasyScoreCalculator implements EasyScoreCalculator<TimeTable> { @Override public HardSoftScore calculateScore(TimeTable timeTable) { List<Lesson> lessonList = timeTable.getLessonList(); int hardScore = 0; for (Lesson a : lessonList) { for (Lesson b : lessonList) { if (a.getTimeslot() != null && a.getTimeslot().equals(b.getTimeslot()) && a.getId() < b.getId()) { // A room can accommodate at most one lesson at the same time. if (a.getRoom() != null && a.getRoom().equals(b.getRoom())) { hardScore--; } // A teacher can teach at most one lesson at the same time. if (a.getTeacher().equals(b.getTeacher())) { hardScore--; } // A student can attend at most one lesson at the same time. if (a.getStudentGroup().equals(b.getStudentGroup())) { hardScore--; } } } } int softScore = 0; // Soft constraints are only implemented in the \"complete\" implementation return HardSoftScore.of(hardScore, softScore); } }", "package com.example.solver; import com.example.domain.Lesson; import org.optaplanner.core.api.score.buildin.hardsoft.HardSoftScore; import org.optaplanner.core.api.score.stream.Constraint; import org.optaplanner.core.api.score.stream.ConstraintFactory; import org.optaplanner.core.api.score.stream.ConstraintProvider; import org.optaplanner.core.api.score.stream.Joiners; public class TimeTableConstraintProvider implements ConstraintProvider { @Override public Constraint[] defineConstraints(ConstraintFactory constraintFactory) { return new Constraint[] { // Hard constraints roomConflict(constraintFactory), teacherConflict(constraintFactory), studentGroupConflict(constraintFactory), // Soft constraints are only implemented in the \"complete\" implementation }; } private Constraint roomConflict(ConstraintFactory constraintFactory) { // A room can accommodate at most one lesson at the same time. // Select a lesson return constraintFactory.from(Lesson.class) // ... and pair it with another lesson .join(Lesson.class, // ... in the same timeslot Joiners.equal(Lesson::getTimeslot), // ... in the same room Joiners.equal(Lesson::getRoom), // ... and the pair is unique (different id, no reverse pairs) Joiners.lessThan(Lesson::getId)) // then penalize each pair with a hard weight. .penalize(\"Room conflict\", HardSoftScore.ONE_HARD); } private Constraint teacherConflict(ConstraintFactory constraintFactory) { // A teacher can teach at most one lesson at the same time. return constraintFactory.from(Lesson.class) .join(Lesson.class, Joiners.equal(Lesson::getTimeslot), Joiners.equal(Lesson::getTeacher), Joiners.lessThan(Lesson::getId)) .penalize(\"Teacher conflict\", HardSoftScore.ONE_HARD); } private Constraint studentGroupConflict(ConstraintFactory constraintFactory) { // A student can attend at most one lesson at the same time. 
return constraintFactory.from(Lesson.class) .join(Lesson.class, Joiners.equal(Lesson::getTimeslot), Joiners.equal(Lesson::getStudentGroup), Joiners.lessThan(Lesson::getId)) .penalize(\"Student group conflict\", HardSoftScore.ONE_HARD); } }", "package com.example.domain; import java.util.List; import org.optaplanner.core.api.domain.solution.PlanningEntityCollectionProperty; import org.optaplanner.core.api.domain.solution.PlanningScore; import org.optaplanner.core.api.domain.solution.PlanningSolution; import org.optaplanner.core.api.domain.solution.ProblemFactCollectionProperty; import org.optaplanner.core.api.domain.valuerange.ValueRangeProvider; import org.optaplanner.core.api.score.buildin.hardsoft.HardSoftScore; @PlanningSolution public class TimeTable { @ValueRangeProvider(id = \"timeslotRange\") @ProblemFactCollectionProperty private List<Timeslot> timeslotList; @ValueRangeProvider(id = \"roomRange\") @ProblemFactCollectionProperty private List<Room> roomList; @PlanningEntityCollectionProperty private List<Lesson> lessonList; @PlanningScore private HardSoftScore score; private TimeTable() { } public TimeTable(List<Timeslot> timeslotList, List<Room> roomList, List<Lesson> lessonList) { this.timeslotList = timeslotList; this.roomList = roomList; this.lessonList = lessonList; } // ******************************** // Getters and setters // ******************************** public List<Timeslot> getTimeslotList() { return timeslotList; } public List<Room> getRoomList() { return roomList; } public List<Lesson> getLessonList() { return lessonList; } public HardSoftScore getScore() { return score; } }", "package org.acme.optaplanner.rest; import java.util.UUID; import java.util.concurrent.ExecutionException; import javax.inject.Inject; import javax.ws.rs.POST; import javax.ws.rs.Path; import org.acme.optaplanner.domain.TimeTable; import org.optaplanner.core.api.solver.SolverJob; import org.optaplanner.core.api.solver.SolverManager; @Path(\"/timeTable\") public class TimeTableResource { @Inject SolverManager<TimeTable, UUID> solverManager; @POST @Path(\"/solve\") public TimeTable solve(TimeTable problem) { UUID problemId = UUID.randomUUID(); // Submit the problem to start solving SolverJob<TimeTable, UUID> solverJob = solverManager.solve(problemId, problem); TimeTable solution; try { // Wait until the solving ends solution = solverJob.getFinalBestSolution(); } catch (InterruptedException | ExecutionException e) { throw new IllegalStateException(\"Solving failed.\", e); } return solution; } }", "quarkus.optaplanner.solver.termination.spent-limit=5s", "./mvnw compile quarkus:dev", "curl -i -X POST http://localhost:8080/timeTable/solve -H \"Content-Type:application/json\" -d '{\"timeslotList\":[{\"dayOfWeek\":\"MONDAY\",\"startTime\":\"08:30:00\",\"endTime\":\"09:30:00\"},{\"dayOfWeek\":\"MONDAY\",\"startTime\":\"09:30:00\",\"endTime\":\"10:30:00\"}],\"roomList\":[{\"name\":\"Room A\"},{\"name\":\"Room B\"}],\"lessonList\":[{\"id\":1,\"subject\":\"Math\",\"teacher\":\"A. Turing\",\"studentGroup\":\"9th grade\"},{\"id\":2,\"subject\":\"Chemistry\",\"teacher\":\"M. Curie\",\"studentGroup\":\"9th grade\"},{\"id\":3,\"subject\":\"French\",\"teacher\":\"M. Curie\",\"studentGroup\":\"10th grade\"},{\"id\":4,\"subject\":\"History\",\"teacher\":\"I. Jones\",\"studentGroup\":\"10th grade\"}]}'", "HTTP/1.1 200 Content-Type: application/json {\"timeslotList\":...,\"roomList\":...,\"lessonList\":[{\"id\":1,\"subject\":\"Math\",\"teacher\":\"A. 
Turing\",\"studentGroup\":\"9th grade\",\"timeslot\":{\"dayOfWeek\":\"MONDAY\",\"startTime\":\"08:30:00\",\"endTime\":\"09:30:00\"},\"room\":{\"name\":\"Room A\"}},{\"id\":2,\"subject\":\"Chemistry\",\"teacher\":\"M. Curie\",\"studentGroup\":\"9th grade\",\"timeslot\":{\"dayOfWeek\":\"MONDAY\",\"startTime\":\"09:30:00\",\"endTime\":\"10:30:00\"},\"room\":{\"name\":\"Room A\"}},{\"id\":3,\"subject\":\"French\",\"teacher\":\"M. Curie\",\"studentGroup\":\"10th grade\",\"timeslot\":{\"dayOfWeek\":\"MONDAY\",\"startTime\":\"08:30:00\",\"endTime\":\"09:30:00\"},\"room\":{\"name\":\"Room B\"}},{\"id\":4,\"subject\":\"History\",\"teacher\":\"I. Jones\",\"studentGroup\":\"10th grade\",\"timeslot\":{\"dayOfWeek\":\"MONDAY\",\"startTime\":\"09:30:00\",\"endTime\":\"10:30:00\"},\"room\":{\"name\":\"Room B\"}}],\"score\":\"0hard/0soft\"}", "... Solving started: time spent (33), best score (-8init/0hard/0soft), environment mode (REPRODUCIBLE), random (JDK with seed 0). ... Construction Heuristic phase (0) ended: time spent (73), best score (0hard/0soft), score calculation speed (459/sec), step total (4). ... Local Search phase (1) ended: time spent (5000), best score (0hard/0soft), score calculation speed (28949/sec), step total (28398). ... Solving ended: time spent (5000), best score (0hard/0soft), score calculation speed (28524/sec), phase total (2), environment mode (REPRODUCIBLE).", "package org.acme.optaplanner.solver; import java.time.DayOfWeek; import java.time.LocalTime; import javax.inject.Inject; import io.quarkus.test.junit.QuarkusTest; import org.acme.optaplanner.domain.Lesson; import org.acme.optaplanner.domain.Room; import org.acme.optaplanner.domain.TimeTable; import org.acme.optaplanner.domain.Timeslot; import org.junit.jupiter.api.Test; import org.optaplanner.test.api.score.stream.ConstraintVerifier; @QuarkusTest class TimeTableConstraintProviderTest { private static final Room ROOM = new Room(\"Room1\"); private static final Timeslot TIMESLOT1 = new Timeslot(DayOfWeek.MONDAY, LocalTime.of(9,0), LocalTime.NOON); private static final Timeslot TIMESLOT2 = new Timeslot(DayOfWeek.TUESDAY, LocalTime.of(9,0), LocalTime.NOON); @Inject ConstraintVerifier<TimeTableConstraintProvider, TimeTable> constraintVerifier; @Test void roomConflict() { Lesson firstLesson = new Lesson(1, \"Subject1\", \"Teacher1\", \"Group1\"); Lesson conflictingLesson = new Lesson(2, \"Subject2\", \"Teacher2\", \"Group2\"); Lesson nonConflictingLesson = new Lesson(3, \"Subject3\", \"Teacher3\", \"Group3\"); firstLesson.setRoom(ROOM); firstLesson.setTimeslot(TIMESLOT1); conflictingLesson.setRoom(ROOM); conflictingLesson.setTimeslot(TIMESLOT1); nonConflictingLesson.setRoom(ROOM); nonConflictingLesson.setTimeslot(TIMESLOT2); constraintVerifier.verifyThat(TimeTableConstraintProvider::roomConflict) .given(firstLesson, conflictingLesson, nonConflictingLesson) .penalizesBy(1); } }", "package com.exmaple.optaplanner.rest; import java.time.DayOfWeek; import java.time.LocalTime; import java.util.ArrayList; import java.util.List; import javax.inject.Inject; import io.quarkus.test.junit.QuarkusTest; import com.exmaple.optaplanner.domain.Room; import com.exmaple.optaplanner.domain.Timeslot; import com.exmaple.optaplanner.domain.Lesson; import com.exmaple.optaplanner.domain.TimeTable; import com.exmaple.optaplanner.rest.TimeTableResource; import org.junit.jupiter.api.Test; import org.junit.jupiter.api.Timeout; import static org.junit.jupiter.api.Assertions.assertFalse; import static org.junit.jupiter.api.Assertions.assertNotNull; 
import static org.junit.jupiter.api.Assertions.assertTrue; @QuarkusTest public class TimeTableResourceTest { @Inject TimeTableResource timeTableResource; @Test @Timeout(600_000) public void solve() { TimeTable problem = generateProblem(); TimeTable solution = timeTableResource.solve(problem); assertFalse(solution.getLessonList().isEmpty()); for (Lesson lesson : solution.getLessonList()) { assertNotNull(lesson.getTimeslot()); assertNotNull(lesson.getRoom()); } assertTrue(solution.getScore().isFeasible()); } private TimeTable generateProblem() { List<Timeslot> timeslotList = new ArrayList<>(); timeslotList.add(new Timeslot(DayOfWeek.MONDAY, LocalTime.of(8, 30), LocalTime.of(9, 30))); timeslotList.add(new Timeslot(DayOfWeek.MONDAY, LocalTime.of(9, 30), LocalTime.of(10, 30))); timeslotList.add(new Timeslot(DayOfWeek.MONDAY, LocalTime.of(10, 30), LocalTime.of(11, 30))); timeslotList.add(new Timeslot(DayOfWeek.MONDAY, LocalTime.of(13, 30), LocalTime.of(14, 30))); timeslotList.add(new Timeslot(DayOfWeek.MONDAY, LocalTime.of(14, 30), LocalTime.of(15, 30))); List<Room> roomList = new ArrayList<>(); roomList.add(new Room(\"Room A\")); roomList.add(new Room(\"Room B\")); roomList.add(new Room(\"Room C\")); List<Lesson> lessonList = new ArrayList<>(); lessonList.add(new Lesson(101L, \"Math\", \"B. May\", \"9th grade\")); lessonList.add(new Lesson(102L, \"Physics\", \"M. Curie\", \"9th grade\")); lessonList.add(new Lesson(103L, \"Geography\", \"M. Polo\", \"9th grade\")); lessonList.add(new Lesson(104L, \"English\", \"I. Jones\", \"9th grade\")); lessonList.add(new Lesson(105L, \"Spanish\", \"P. Cruz\", \"9th grade\")); lessonList.add(new Lesson(201L, \"Math\", \"B. May\", \"10th grade\")); lessonList.add(new Lesson(202L, \"Chemistry\", \"M. Curie\", \"10th grade\")); lessonList.add(new Lesson(203L, \"History\", \"I. Jones\", \"10th grade\")); lessonList.add(new Lesson(204L, \"English\", \"P. Cruz\", \"10th grade\")); lessonList.add(new Lesson(205L, \"French\", \"M. Curie\", \"10th grade\")); return new TimeTable(timeslotList, roomList, lessonList); } }", "The solver runs only for 5 seconds to avoid a HTTP timeout in this simple implementation. It's recommended to run for at least 5 minutes (\"5m\") otherwise. quarkus.optaplanner.solver.termination.spent-limit=5s Effectively disable this termination in favor of the best-score-limit %test.quarkus.optaplanner.solver.termination.spent-limit=1h %test.quarkus.optaplanner.solver.termination.best-score-limit=0hard/*soft", "... Solving ended: ..., score calculation speed (29455/sec),", "quarkus.log.category.\"org.optaplanner\".level=debug", "... Solving started: time spent (67), best score (-20init/0hard/0soft), environment mode (REPRODUCIBLE), random (JDK with seed 0). ... CH step (0), time spent (128), score (-18init/0hard/0soft), selected move count (15), picked move ([Math(101) {null -> Room A}, Math(101) {null -> MONDAY 08:30}]). ... 
CH step (1), time spent (145), score (-16init/0hard/0soft), selected move count (15), picked move ([Physics(102) {null -> Room A}, Physics(102) {null -> MONDAY 09:30}]).", "package org.acme.optaplanner.rest; import javax.inject.Inject; import javax.transaction.Transactional; import javax.ws.rs.GET; import javax.ws.rs.POST; import javax.ws.rs.Path; import io.quarkus.panache.common.Sort; import org.acme.optaplanner.domain.Lesson; import org.acme.optaplanner.domain.Room; import org.acme.optaplanner.domain.TimeTable; import org.acme.optaplanner.domain.Timeslot; import org.optaplanner.core.api.score.ScoreManager; import org.optaplanner.core.api.score.buildin.hardsoft.HardSoftScore; import org.optaplanner.core.api.solver.SolverManager; import org.optaplanner.core.api.solver.SolverStatus; @Path(\"/timeTable\") public class TimeTableResource { public static final Long SINGLETON_TIME_TABLE_ID = 1L; @Inject SolverManager<TimeTable, Long> solverManager; @Inject ScoreManager<TimeTable, HardSoftScore> scoreManager; // To try, open http://localhost:8080/timeTable @GET public TimeTable getTimeTable() { // Get the solver status before loading the solution // to avoid the race condition that the solver terminates between them SolverStatus solverStatus = getSolverStatus(); TimeTable solution = findById(SINGLETON_TIME_TABLE_ID); scoreManager.updateScore(solution); // Sets the score solution.setSolverStatus(solverStatus); return solution; } @POST @Path(\"/solve\") public void solve() { solverManager.solveAndListen(SINGLETON_TIME_TABLE_ID, this::findById, this::save); } public SolverStatus getSolverStatus() { return solverManager.getSolverStatus(SINGLETON_TIME_TABLE_ID); } @POST @Path(\"/stopSolving\") public void stopSolving() { solverManager.terminateEarly(SINGLETON_TIME_TABLE_ID); } @Transactional protected TimeTable findById(Long id) { if (!SINGLETON_TIME_TABLE_ID.equals(id)) { throw new IllegalStateException(\"There is no timeTable with id (\" + id + \").\"); } // Occurs in a single transaction, so each initialized lesson references the same timeslot/room instance // that is contained by the timeTable's timeslotList/roomList. 
return new TimeTable( Timeslot.listAll(Sort.by(\"dayOfWeek\").and(\"startTime\").and(\"endTime\").and(\"id\")), Room.listAll(Sort.by(\"name\").and(\"id\")), Lesson.listAll(Sort.by(\"subject\").and(\"teacher\").and(\"studentGroup\").and(\"id\"))); } @Transactional protected void save(TimeTable timeTable) { for (Lesson lesson : timeTable.getLessonList()) { // TODO this is awfully naive: optimistic locking causes issues if called by the SolverManager Lesson attachedLesson = Lesson.findById(lesson.getId()); attachedLesson.setTimeslot(lesson.getTimeslot()); attachedLesson.setRoom(lesson.getRoom()); } } }", "package org.acme.optaplanner.rest; import javax.inject.Inject; import io.quarkus.test.junit.QuarkusTest; import org.acme.optaplanner.domain.Lesson; import org.acme.optaplanner.domain.TimeTable; import org.junit.jupiter.api.Test; import org.junit.jupiter.api.Timeout; import org.optaplanner.core.api.solver.SolverStatus; import static org.junit.jupiter.api.Assertions.assertFalse; import static org.junit.jupiter.api.Assertions.assertNotNull; import static org.junit.jupiter.api.Assertions.assertTrue; @QuarkusTest public class TimeTableResourceTest { @Inject TimeTableResource timeTableResource; @Test @Timeout(600_000) public void solveDemoDataUntilFeasible() throws InterruptedException { timeTableResource.solve(); TimeTable timeTable = timeTableResource.getTimeTable(); while (timeTable.getSolverStatus() != SolverStatus.NOT_SOLVING) { // Quick polling (not a Test Thread Sleep anti-pattern) // Test is still fast on fast machines and doesn't randomly fail on slow machines. Thread.sleep(20L); timeTable = timeTableResource.getTimeTable(); } assertFalse(timeTable.getLessonList().isEmpty()); for (Lesson lesson : timeTable.getLessonList()) { assertNotNull(lesson.getTimeslot()); assertNotNull(lesson.getRoom()); } assertTrue(timeTable.getScore().isFeasible()); } }", "<dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-micrometer-registry-prometheus</artifactId> </dependency>", "mvn compile quarkus:dev" ]
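The commands above only exercise the project in Quarkus dev mode ( ./mvnw compile quarkus:dev ). As a hedged sketch that is not part of the original quickstart, the same project can also be packaged and run as a plain JVM application; the runner path below assumes the default fast-jar packaging of Quarkus 2.x, and sample-problem.json is a hypothetical file holding a problem payload like the one posted with curl earlier.

# Build the application; the JUnit 5 tests shown above run as part of the build.
./mvnw clean package

# Start the packaged service (default fast-jar layout of Quarkus 2.x).
java -jar target/quarkus-app/quarkus-run.jar

# In a second terminal, submit a problem to the running service.
# sample-problem.json is a placeholder for a JSON body like the curl example above.
curl -X POST http://localhost:8080/timeTable/solve -H "Content-Type:application/json" -d @sample-problem.json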
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_solvers_with_red_hat_build_of_optaplanner_in_red_hat_process_automation_manager/assembly-optaplanner-school-timetable-quarkus_optaplanner-quickstarts
Chapter 1. Understanding API tiers
Chapter 1. Understanding API tiers Important This guidance does not cover layered OpenShift Container Platform offerings. API tiers for bare-metal configurations also apply to virtualized configurations except for any feature that directly interacts with hardware. Those features directly related to hardware have no application operating environment (AOE) compatibility level beyond that which is provided by the hardware vendor. For example, applications that rely on Graphics Processing Units (GPU) features are subject to the AOE compatibility provided by the GPU vendor driver. API tiers in a cloud environment for cloud specific integration points have no API or AOE compatibility level beyond that which is provided by the hosting cloud vendor. For example, APIs that exercise dynamic management of compute, ingress, or storage are dependent upon the underlying API capabilities exposed by the cloud platform. Where a cloud vendor modifies a prerequisite API, Red Hat will provide commercially reasonable efforts to maintain support for the API with the capability presently offered by the cloud infrastructure vendor. Red Hat requests that application developers validate that any behavior they depend on is explicitly defined in the formal API documentation to prevent introducing dependencies on unspecified implementation-specific behavior or dependencies on bugs in a particular implementation of an API. For example, new releases of an ingress router may not be compatible with older releases if an application uses an undocumented API or relies on undefined behavior. 1.1. API tiers All commercially supported APIs, components, and features are associated under one of the following support levels: API tier 1 APIs and application operating environments (AOEs) are stable within a major release. They may be deprecated within a major release, but they will not be removed until a subsequent major release. API tier 2 APIs and AOEs are stable within a major release for a minimum of 9 months or 3 minor releases from the announcement of deprecation, whichever is longer. API tier 3 This level applies to languages, tools, applications, and optional Operators included with OpenShift Container Platform through Operator Hub. Each component will specify a lifetime during which the API and AOE will be supported. Newer versions of language runtime specific components will attempt to be as API and AOE compatible from minor version to minor version as possible. Minor version to minor version compatibility is not guaranteed, however. Components and developer tools that receive continuous updates through the Operator Hub, referred to as Operators and operands, should be considered API tier 3. Developers should use caution and understand how these components may change with each minor release. Users are encouraged to consult the compatibility guidelines documented by the component. API tier 4 No compatibility is provided. API and AOE can change at any point. These capabilities should not be used by applications needing long-term support. It is common practice for Operators to use custom resource definitions (CRDs) internally to accomplish a task. These objects are not meant for use by actors external to the Operator and are intended to be hidden. 
If any CRD is not meant for use by actors external to the Operator, the operators.operatorframework.io/internal-objects annotation in the Operators ClusterServiceVersion (CSV) should be specified to signal that the corresponding resource is internal use only and the CRD may be explicitly labeled as tier 4. 1.2. Mapping API tiers to API groups For each API tier defined by Red Hat, we provide a mapping table for specific API groups where the upstream communities are committed to maintain forward compatibility. Any API group that does not specify an explicit compatibility level and is not specifically discussed below is assigned API tier 3 by default except for v1alpha1 APIs which are assigned tier 4 by default. 1.2.1. Support for Kubernetes API groups API groups that end with the suffix *.k8s.io or have the form version.<name> with no suffix are governed by the Kubernetes deprecation policy and follow a general mapping between API version exposed and corresponding support tier unless otherwise specified. API version example API tier v1 Tier 1 v1beta1 Tier 2 v1alpha1 Tier 4 1.2.2. Support for OpenShift API groups API groups that end with the suffix *.openshift.io are governed by the OpenShift Container Platform deprecation policy and follow a general mapping between API version exposed and corresponding compatibility level unless otherwise specified. API version example API tier apps.openshift.io/v1 Tier 1 authorization.openshift.io/v1 Tier 1, some tier 1 deprecated build.openshift.io/v1 Tier 1, some tier 1 deprecated config.openshift.io/v1 Tier 1 image.openshift.io/v1 Tier 1 network.openshift.io/v1 Tier 1 network.operator.openshift.io/v1 Tier 1 oauth.openshift.io/v1 Tier 1 imagecontentsourcepolicy.operator.openshift.io/v1alpha1 Tier 1 project.openshift.io/v1 Tier 1 quota.openshift.io/v1 Tier 1 route.openshift.io/v1 Tier 1 quota.openshift.io/v1 Tier 1 security.openshift.io/v1 Tier 1 except for RangeAllocation (tier 4) and *Reviews (tier 2) template.openshift.io/v1 Tier 1 console.openshift.io/v1 Tier 2 1.2.3. Support for Monitoring API groups API groups that end with the suffix monitoring.coreos.com have the following mapping: API version example API tier v1 Tier 1 v1alpha1 Tier 1 v1beta1 Tier 1 1.2.4. Support for Operator Lifecycle Manager API groups Operator Lifecycle Manager (OLM) provides APIs that include API groups with the suffix operators.coreos.com . These APIs have the following mapping: API version example API tier v2 Tier 1 v1 Tier 1 v1alpha1 Tier 1 1.3. API deprecation policy OpenShift Container Platform is composed of many components sourced from many upstream communities. It is anticipated that the set of components, the associated API interfaces, and correlated features will evolve over time and might require formal deprecation in order to remove the capability. 1.3.1. Deprecating parts of the API OpenShift Container Platform is a distributed system where multiple components interact with a shared state managed by the cluster control plane through a set of structured APIs. Per Kubernetes conventions, each API presented by OpenShift Container Platform is associated with a group identifier and each API group is independently versioned. Each API group is managed in a distinct upstream community including Kubernetes, Metal3, Multus, Operator Framework, Open Cluster Management, OpenShift itself, and more. 
While each upstream community might define their own unique deprecation policy for a given API group and version, Red Hat normalizes the community specific policy to one of the compatibility levels defined prior based on our integration in and awareness of each upstream community to simplify end-user consumption and support. The deprecation policy and schedule for APIs vary by compatibility level. The deprecation policy covers all elements of the API including: REST resources, also known as API objects Fields of REST resources Annotations on REST resources, excluding version-specific qualifiers Enumerated or constant values Other than the most recent API version in each group, older API versions must be supported after their announced deprecation for a duration of no less than: API tier Duration Tier 1 Stable within a major release. They may be deprecated within a major release, but they will not be removed until a subsequent major release. Tier 2 9 months or 3 releases from the announcement of deprecation, whichever is longer. Tier 3 See the component-specific schedule. Tier 4 None. No compatibility is guaranteed. The following rules apply to all tier 1 APIs: API elements can only be removed by incrementing the version of the group. API objects must be able to round-trip between API versions without information loss, with the exception of whole REST resources that do not exist in some versions. In cases where equivalent fields do not exist between versions, data will be preserved in the form of annotations during conversion. API versions in a given group can not deprecate until a new API version at least as stable is released, except in cases where the entire API object is being removed. 1.3.2. Deprecating CLI elements Client-facing CLI commands are not versioned in the same way as the API, but are user-facing component systems. The two major ways a user interacts with a CLI are through a command or flag, which is referred to in this context as CLI elements. All CLI elements default to API tier 1 unless otherwise noted or the CLI depends on a lower tier API. Element API tier Generally available (GA) Flags and commands Tier 1 Technology Preview Flags and commands Tier 3 Developer Preview Flags and commands Tier 4 1.3.3. Deprecating an entire component The duration and schedule for deprecating an entire component maps directly to the duration associated with the highest API tier of an API exposed by that component. For example, a component that surfaced APIs with tier 1 and 2 could not be removed until the tier 1 deprecation schedule was met. API tier Duration Tier 1 Stable within a major release. They may be deprecated within a major release, but they will not be removed until a subsequent major release. Tier 2 9 months or 3 releases from the announcement of deprecation, whichever is longer. Tier 3 See the component-specific schedule. Tier 4 None. No compatibility is guaranteed.
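As a practical footnote that is not part of the original chapter, the group-and-version mapping described above can be checked against a live cluster with the standard discovery commands of the oc client; the group name below is only an example, and the tier still has to be read off the tables in this chapter.

# List every API group/version the cluster serves; the version suffix
# (v1, v1beta1, v1alpha1, ...) is what the tier tables key off.
oc api-versions | grep openshift.io

# Inspect the resources of a single group, for example the tier 1 config.openshift.io group.
oc api-resources --api-group=config.openshift.io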
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/api_overview/understanding-api-support-tiers
Chapter 5. OpenShift Data Foundation deployed using local storage devices
Chapter 5. OpenShift Data Foundation deployed using local storage devices 5.1. Replacing operational or failed storage devices on clusters backed by local storage devices You can replace an object storage device (OSD) in OpenShift Data Foundation deployed using local storage devices on the following infrastructures: Bare metal VMware Note There might be a need to replace one or more underlying storage devices. Prerequisites Red Hat recommends that replacement devices are configured with similar infrastructure and resources to the device being replaced. Ensure that the data is resilient. In the OpenShift Web Console, click Storage Data Foundation . Click the Storage Systems tab, and then click ocs-storagecluster-storagesystem . In the Status card of Block and File dashboard, under the Overview tab, verify that Data Resiliency has a green tick mark. Procedure Remove the underlying storage device from relevant worker node. Verify that relevant OSD Pod has moved to CrashLoopBackOff state. Identify the OSD that needs to be replaced and the OpenShift Container Platform node that has the OSD scheduled on it. Example output: In this example, rook-ceph-osd-0-6d77d6c7c6-m8xj6 needs to be replaced and compute-2 is the OpenShift Container platform node on which the OSD is scheduled. Scale down the OSD deployment for the OSD to be replaced. where, osd_id_to_remove is the integer in the pod name immediately after the rook-ceph-osd prefix. In this example, the deployment name is rook-ceph-osd-0 . Example output: Verify that the rook-ceph-osd pod is terminated. Example output: Important If the rook-ceph-osd pod is in terminating state for more than a few minutes, use the force option to delete the pod. Example output: Remove the old OSD from the cluster so that you can add a new OSD. Delete any old ocs-osd-removal jobs. Example output: Navigate to the openshift-storage project. Remove the old OSD from the cluster. The FORCE_OSD_REMOVAL value must be changed to "true" in clusters that only have three OSDs, or clusters with insufficient space to restore all three replicas of the data after the OSD is removed. Warning This step results in OSD being completely removed from the cluster. Ensure that the correct value of osd_id_to_remove is provided. Verify that the OSD was removed successfully by checking the status of the ocs-osd-removal-job pod. A status of Completed confirms that the OSD removal job succeeded. Ensure that the OSD removal is completed. Example output: Important If the ocs-osd-removal-job fails and the pod is not in the expected Completed state, check the pod logs for further debugging. For example: If encryption was enabled at the time of install, remove dm-crypt managed device-mapper mapping from the OSD devices that are removed from the respective OpenShift Data Foundation nodes. Get the Persistent Volume Claim (PVC) name(s) of the replaced OSD(s) from the logs of ocs-osd-removal-job pod. Example output: For each of the previously identified nodes, do the following: Create a debug pod and chroot to the host on the storage node. <node name> Is the name of the node. Find the relevant device name based on the PVC names identified in the step. <pvc name> Is the name of the PVC. Example output: Remove the mapped device. <ocs-deviceset-name> Is the name of the relevant device based on the PVC names identified in the step. Important If the above command gets stuck due to insufficient privileges, run the following commands: Press CTRL+Z to exit the above command. 
Find the PID of the process which was stuck. Terminate the process using the kill command. <PID> Is the process ID. Verify that the device name is removed. Find the persistent volume (PV) that need to be deleted. Example output: Delete the PV. Physically add a new device to the node. Track the provisioning of PVs for the devices that match the deviceInclusionSpec . It can take a few minutes to provision the PVs. Example output: Once the PV is provisioned, a new OSD pod is automatically created for the PV. Delete the ocs-osd-removal job(s). Example output: Note When using an external key management system (KMS) with data encryption, the old OSD encryption key can be removed from the Vault server as it is now an orphan key. Verification steps Verify that there is a new OSD running. Example output: Important If the new OSD does not show as Running after a few minutes, restart the rook-ceph-operator pod to force a reconciliation. Example output: Verify that a new PVC is created. Example output: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. Identify the nodes where the new OSD pods are running. <OSD-pod-name> Is the name of the OSD pod. For example: Example output: For each of the nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the selected host(s). <node name> Is the name of the node. Check for the crypt keyword beside the ocs-deviceset name(s). Log in to OpenShift Web Console and check the OSD status on the storage dashboard. Note A full data recovery may take longer depending on the volume of data being recovered. 5.2. Replacing operational or failed storage devices on IBM Power You can replace an object storage device (OSD) in OpenShift Data Foundation deployed using local storage devices on IBM Power. Note There might be a need to replace one or more underlying storage devices. Prerequisites Red Hat recommends that replacement devices are configured with similar infrastructure and resources to the device being replaced. Ensure that the data is resilient. In the OpenShift Web Console, click Storage Data Foundation . Click the Storage Systems tab, and then click ocs-storagecluster-storagesystem . In the Status card of Block and File dashboard, under the Overview tab, verify that Data Resiliency has a green tick mark. Procedure Identify the OSD that needs to be replaced and the OpenShift Container Platform node that has the OSD scheduled on it. Example output: In this example, rook-ceph-osd-0-86bf8cdc8-4nb5t needs to be replaced and worker-0 is the RHOCP node on which the OSD is scheduled. Note The status of the pod is Running if the OSD you want to replace is healthy. Scale down the OSD deployment for the OSD to be replaced. where, osd_id_to_remove is the integer in the pod name immediately after the rook-ceph-osd prefix. In this example, the deployment name is rook-ceph-osd-0 . Example output: Verify that the rook-ceph-osd pod is terminated. Example output: Important If the rook-ceph-osd pod is in terminating state for more than a few minutes, use the force option to delete the pod. Example output: Remove the old OSD from the cluster so that you can add a new OSD. Identify the DeviceSet associated with the OSD to be replaced. Example output: In this example, the Persistent Volume Claim (PVC) name is ocs-deviceset-localblock-0-data-0-64xjl . Identify the Persistent Volume (PV) associated with the PVC. 
where, x , y , and pvc-suffix are the values in the DeviceSet identified in an earlier step. Example output: In this example, the associated PV is local-pv-8137c873 . Identify the name of the device to be replaced. where, pv-suffix is the value in the PV name identified in an earlier step. Example output: In this example, the device name is vdc . Identify the prepare-pod associated with the OSD to be replaced. where, x , y , and pvc-suffix are the values in the DeviceSet identified in an earlier step. Example output: In this example, the prepare-pod name is rook-ceph-osd-prepare-ocs-deviceset-localblock-0-data-0-64knzkc . Delete any old ocs-osd-removal jobs. Example output: Change to the openshift-storage project. Remove the old OSD from the cluster. The FORCE_OSD_REMOVAL value must be changed to "true" in clusters that only have three OSDs, or clusters with insufficient space to restore all three replicas of the data after the OSD is removed. Warning This step results in OSD being completely removed from the cluster. Ensure that the correct value of osd_id_to_remove is provided. Verify that the OSD was removed successfully by checking the status of the ocs-osd-removal-job pod. A status of Completed confirms that the OSD removal job succeeded. Ensure that the OSD removal is completed. Example output: Important If the ocs-osd-removal-job fails and the pod is not in the expected Completed state, check the pod logs for further debugging. For example: If encryption was enabled at the time of install, remove dm-crypt managed device-mapper mapping from the OSD devices that are removed from the respective OpenShift Data Foundation nodes. Get the PVC name(s) of the replaced OSD(s) from the logs of ocs-osd-removal-job pod. Example output: For each of the previously identified nodes, do the following: Create a debug pod and chroot to the host on the storage node. <node name> Is the name of the node. Find the relevant device name based on the PVC names identified in the step. <pvc name> Is the name of the PVC. Example output: Remove the mapped device. Important If the above command gets stuck due to insufficient privileges, run the following commands: Press CTRL+Z to exit the above command. Find the PID of the process which was stuck. Terminate the process using the kill command. <PID> Is the process ID. Verify that the device name is removed. Find the PV that need to be deleted. Example output: Delete the PV. <pv-name> Is the name of the PV. Replace the old device and use the new device to create a new OpenShift Container Platform PV. Log in to the OpenShift Container Platform node with the device to be replaced. In this example, the OpenShift Container Platform node is worker-0 . Example output: Record the /dev/disk that is to be replaced using the device name, vdc , identified earlier. Example output: Find the name of the LocalVolume CR, and remove or comment out the device /dev/disk that is to be replaced. Example output: Example output: Make sure to save the changes after editing the CR. Log in to the OpenShift Container Platform node with the device to be replaced and remove the old symlink . Example output: Identify the old symlink for the device name to be replaced. In this example, the device name is vdc . Example output: Remove the symlink . Verify that the symlink is removed. Example output: Replace the old device with the new device. Log back into the correct OpenShift Cotainer Platform node and identify the device name for the new drive. 
The device name must change unless you are resetting the same device. Example output: In this example, the new device name is vdd . After the new /dev/disk is available, you can add a new disk entry to the LocalVolume CR. Edit the LocalVolume CR and add the new /dev/disk . In this example, the new device is /dev/vdd . Example output: Make sure to save the changes after editing the CR. Verify that there is a new PV in Available state and of the correct size. Example output: Create a new OSD for the new device. Deploy the new OSD. You need to restart the rook-ceph-operator to force operator reconciliation. Identify the name of the rook-ceph-operator . Example output: Delete the rook-ceph-operator . Example output: In this example, the rook-ceph-operator pod name is rook-ceph-operator-85f6494db4-sg62v . Verify that the rook-ceph-operator pod is restarted. Example output: Creation of the new OSD may take several minutes after the operator restarts. Delete the ocs-osd-removal job(s). Example output: Note When using an external key management system (KMS) with data encryption, the old OSD encryption key can be removed from the Vault server as it is now an orphan key. Verfication steps Verify that there is a new OSD running. Example output: Verify that a new PVC is created. Example output: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. Identify the nodes where the new OSD pods are running. <OSD-pod-name> Is the name of the OSD pod. For example: Example output: For each of the previously identified nodes, do the following: Create a debug pod and open a chroot environment for the selected host(s). <node name> Is the name of the node. Check for the crypt keyword beside the ocs-deviceset name(s). Log in to OpenShift Web Console and check the status card in the OpenShift Data Foundation dashboard under Storage section. Note A full data recovery may take longer depending on the volume of data being recovered. 5.3. Replacing operational or failed storage devices on IBM Z or IBM LinuxONE infrastructure You can replace operational or failed storage devices on IBM Z or IBM(R) LinuxONE infrastructure with new Small Computer System Interface (SCSI) disks. IBM Z or IBM(R) LinuxONE supports SCSI FCP disk logical units (SCSI disks) as persistent storage devices from external disk storage. You can identify a SCSI disk using its FCP Device number, two target worldwide port names (WWPN1 and WWPN2), and the logical unit number (LUN). For more information, see https://www.ibm.com/support/knowledgecenter/SSB27U_6.4.0/com.ibm.zvm.v640.hcpa5/scsiover.html Prerequisites Ensure that the data is resilient. In the OpenShift Web Console, click Storage Data Foundation . Click the Storage Systems tab, and then click ocs-storagecluster-storagesystem . In the Status card of Block and File dashboard, under the Overview tab, verify that Data Resiliency has a green tick mark. Procedure List all the disks. Example output: A SCSI disk is represented as a zfcp-lun with the structure <device-id>:<wwpn>:<lun-id> in the ID section. The first disk is used for the operating system. If one storage device fails, you can replace it with a new disk. Remove the disk. Run the following command on the disk, replacing scsi-id with the SCSI disk identifier of the disk to be replaced: For example, the following command removes one disk with the device ID 0.0.8204 , the WWPN 0x500507630a0b50a4 , and the LUN 0x4002403000000000 : Append a new SCSI disk. 
Note The device ID for the new disk must be the same as the disk to be replaced. The new disk is identified with its WWPN and LUN ID. List all the FCP devices to verify the new disk is configured. Example output:
[ "oc get -n openshift-storage pods -l app=rook-ceph-osd -o wide", "rook-ceph-osd-0-6d77d6c7c6-m8xj6 0/1 CrashLoopBackOff 0 24h 10.129.0.16 compute-2 <none> <none> rook-ceph-osd-1-85d99fb95f-2svc7 1/1 Running 0 24h 10.128.2.24 compute-0 <none> <none> rook-ceph-osd-2-6c66cdb977-jp542 1/1 Running 0 24h 10.130.0.18 compute-1 <none> <none>", "osd_id_to_remove=0", "oc scale -n openshift-storage deployment rook-ceph-osd-USD{osd_id_to_remove} --replicas=0", "deployment.extensions/rook-ceph-osd-0 scaled", "oc get -n openshift-storage pods -l ceph-osd-id=USD{osd_id_to_remove}", "No resources found in openshift-storage namespace.", "oc delete -n openshift-storage pod rook-ceph-osd-0-6d77d6c7c6-m8xj6 --grace-period=0 --force", "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. pod \"rook-ceph-osd-0-6d77d6c7c6-m8xj6\" force deleted", "oc delete -n openshift-storage job ocs-osd-removal-job", "job.batch \"ocs-osd-removal-job\" deleted", "oc project openshift-storage", "oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_IDS=USD{osd_id_to_remove} FORCE_OSD_REMOVAL=false |oc create -n openshift-storage -f -", "oc get pod -l job-name=ocs-osd-removal-job -n openshift-storage", "oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1 | egrep -i 'completed removal'", "2022-05-10 06:50:04.501511 I | cephosd: completed removal of OSD 0", "oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1", "oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1 |egrep -i 'pvc|deviceset'", "2021-05-12 14:31:34.666000 I | cephosd: removing the OSD PVC \"ocs-deviceset-xxxx-xxx-xxx-xxx\"", "oc debug node/ <node name>", "chroot /host", "dmsetup ls| grep <pvc name>", "ocs-deviceset-xxx-xxx-xxx-xxx-block-dmcrypt (253:0)", "cryptsetup luksClose --debug --verbose <ocs-deviceset-name>", "ps -ef | grep crypt", "kill -9 <PID>", "dmsetup ls", "oc get pv -L kubernetes.io/hostname | grep <storageclass-name> | grep Released", "local-pv-d6bf175b 1490Gi RWO Delete Released openshift-storage/ocs-deviceset-0-data-0-6c5pw localblock 2d22h compute-1", "oc delete pv <pv_name>", "oc -n openshift-local-storage describe localvolumeset <lvs-name>", "[...] Status: Conditions: Last Transition Time: 2020-11-17T05:03:32Z Message: DiskMaker: Available, LocalProvisioner: Available Status: True Type: DaemonSetsAvailable Last Transition Time: 2020-11-17T05:03:34Z Message: Operator reconciled successfully. 
Status: True Type: Available Observed Generation: 1 Total Provisioned Device Count: 4 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Discovered 2m30s (x4 localvolumeset- node.example.com - NewDevice over 2m30s) symlink-controller found possible matching disk, waiting 1m to claim Normal FoundMatch 89s (x4 localvolumeset- node.example.com - ingDisk over 89s) symlink-controller symlinking matching disk", "oc delete -n openshift-storage job ocs-osd-removal-job", "job.batch \"ocs-osd-removal-job\" deleted", "oc get -n openshift-storage pods -l app=rook-ceph-osd", "rook-ceph-osd-0-5f7f4747d4-snshw 1/1 Running 0 4m47s rook-ceph-osd-1-85d99fb95f-2svc7 1/1 Running 0 1d20h rook-ceph-osd-2-6c66cdb977-jp542 1/1 Running 0 1d20h", "oc delete pod -n openshift-storage -l app=rook-ceph-operator", "pod \"rook-ceph-operator-6f74fb5bff-2d982\" deleted", "oc get -n openshift-storage pvc | grep <lvs-name>", "ocs-deviceset-0-0-c2mqb Bound local-pv-b481410 1490Gi RWO localblock 5m ocs-deviceset-1-0-959rp Bound local-pv-414755e0 1490Gi RWO localblock 1d20h ocs-deviceset-2-0-79j94 Bound local-pv-3e8964d3 1490Gi RWO localblock 1d20h", "oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/ <OSD-pod-name>", "oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm", "NODE compute-1", "oc debug node/ <node name>", "chroot /host", "lsblk", "oc get -n openshift-storage pods -l app=rook-ceph-osd -o wide", "rook-ceph-osd-0-86bf8cdc8-4nb5t 0/1 crashLoopBackOff 0 24h 10.129.2.26 worker-0 <none> <none> rook-ceph-osd-1-7c99657cfb-jdzvz 1/1 Running 0 24h 10.128.2.46 worker-1 <none> <none> rook-ceph-osd-2-5f9f6dfb5b-2mnw9 1/1 Running 0 24h 10.131.0.33 worker-2 <none> <none>", "osd_id_to_remove=0", "oc scale -n openshift-storage deployment rook-ceph-osd-USD{osd_id_to_remove} --replicas=0", "deployment.extensions/rook-ceph-osd-0 scaled", "oc get -n openshift-storage pods -l ceph-osd-id=USD{osd_id_to_remove}", "No resources found in openshift-storage namespace.", "oc delete -n openshift-storage pod rook-ceph-osd-0-86bf8cdc8-4nb5t --grace-period=0 --force", "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. 
pod \"rook-ceph-osd-0-86bf8cdc8-4nb5t\" force deleted", "oc get -n openshift-storage -o yaml deployment rook-ceph-osd-USD{osd_id_to_remove} | grep ceph.rook.io/pvc", "ceph.rook.io/pvc: ocs-deviceset-localblock-0-data-0-64xjl ceph.rook.io/pvc: ocs-deviceset-localblock-0-data-0-64xjl", "oc get -n openshift-storage pvc ocs-deviceset- <x> - <y> - <pvc-suffix>", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE ocs-deviceset-localblock-0-data-0-64xjl Bound local-pv-8137c873 256Gi RWO localblock 24h", "oc get pv local-pv- <pv-suffix> -o yaml | grep path", "path: /mnt/local-storage/localblock/vdc", "oc describe -n openshift-storage pvc ocs-deviceset- <x> - <y> - <pvc-suffix> | grep Used", "Used By: rook-ceph-osd-prepare-ocs-deviceset-localblock-0-data-0-64knzkc", "oc delete -n openshift-storage job ocs-osd-removal-job", "job.batch \"ocs-osd-removal-job\" deleted", "oc project openshift-storage", "oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_IDS=USD{osd_id_to_remove} FORCE_OSD_REMOVAL=false |oc create -n openshift-storage -f -", "oc get pod -l job-name=ocs-osd-removal-job -n openshift-storage", "oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1 | egrep -i 'completed removal'", "2022-05-10 06:50:04.501511 I | cephosd: completed removal of OSD 0", "oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1", "oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1 |egrep -i 'pvc|deviceset'", "2021-05-12 14:31:34.666000 I | cephosd: removing the OSD PVC \"ocs-deviceset-xxxx-xxx-xxx-xxx\"", "oc debug node/ <node name>", "chroot /host", "dmsetup ls| grep <pvc name>", "ocs-deviceset-xxx-xxx-xxx-xxx-block-dmcrypt (253:0)", "cryptsetup luksClose --debug --verbose ocs-deviceset-xxx-xxx-xxx-xxx-block-dmcrypt", "ps -ef | grep crypt", "kill -9 <PID>", "dmsetup ls", "oc get pv -L kubernetes.io/hostname | grep localblock | grep Released", "local-pv-d6bf175b 1490Gi RWO Delete Released openshift-storage/ocs-deviceset-0-data-0-6c5pw localblock 2d22h compute-1", "oc delete pv <pv-name>", "oc debug node/worker-0", "Starting pod/worker-0-debug To use host binaries, run `chroot /host` Pod IP: 192.168.88.21 If you don't see a command prompt, try pressing enter. chroot /host", "ls -alh /mnt/local-storage/localblock", "total 0 drwxr-xr-x. 2 root root 17 Nov 18 15:23 . drwxr-xr-x. 3 root root 24 Nov 18 15:23 .. lrwxrwxrwx. 1 root root 8 Nov 18 15:23 vdc -> /dev/vdc", "oc get -n openshift-local-storage localvolume", "NAME AGE localblock 25h", "oc edit -n openshift-local-storage localvolume localblock", "[...] storageClassDevices: - devicePaths: # - /dev/vdc storageClassName: localblock volumeMode: Block [...]", "oc debug node/worker-0", "Starting pod/worker-0-debug To use host binaries, run `chroot /host` Pod IP: 192.168.88.21 If you don't see a command prompt, try pressing enter. chroot /host", "ls -alh /mnt/local-storage/localblock", "total 0 drwxr-xr-x. 2 root root 17 Nov 18 15:23 . drwxr-xr-x. 3 root root 24 Nov 18 15:23 .. lrwxrwxrwx. 1 root root 8 Nov 18 15:23 vdc -> /dev/vdc", "rm /mnt/local-storage/localblock/vdc", "ls -alh /mnt/local-storage/localblock", "total 0 drwxr-xr-x. 2 root root 6 Nov 18 17:11 . drwxr-xr-x. 
3 root root 24 Nov 18 15:23 ..", "lsblk", "NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT vda 252:0 0 40G 0 disk |-vda1 252:1 0 4M 0 part |-vda2 252:2 0 384M 0 part /boot `-vda4 252:4 0 39.6G 0 part `-coreos-luks-root-nocrypt 253:0 0 39.6G 0 dm /sysroot vdb 252:16 0 512B 1 disk vdd 252:32 0 256G 0 disk", "oc edit -n openshift-local-storage localvolume localblock", "[...] storageClassDevices: - devicePaths: # - /dev/vdc - /dev/vdd storageClassName: localblock volumeMode: Block [...]", "oc get pv | grep 256Gi", "local-pv-1e31f771 256Gi RWO Delete Bound openshift-storage/ocs-deviceset-localblock-2-data-0-6xhkf localblock 24h local-pv-ec7f2b80 256Gi RWO Delete Bound openshift-storage/ocs-deviceset-localblock-1-data-0-hr2fx localblock 24h local-pv-8137c873 256Gi RWO Delete Available localblock 32m", "oc get -n openshift-storage pod -l app=rook-ceph-operator", "NAME READY STATUS RESTARTS AGE rook-ceph-operator-85f6494db4-sg62v 1/1 Running 0 1d20h", "oc delete -n openshift-storage pod rook-ceph-operator-85f6494db4-sg62v", "pod \"rook-ceph-operator-85f6494db4-sg62v\" deleted", "oc get -n openshift-storage pod -l app=rook-ceph-operator", "NAME READY STATUS RESTARTS AGE rook-ceph-operator-85f6494db4-wx9xx 1/1 Running 0 50s", "oc delete -n openshift-storage job ocs-osd-removal-job", "job.batch \"ocs-osd-removal-job\" deleted", "oc get -n openshift-storage pods -l app=rook-ceph-osd", "rook-ceph-osd-0-76d8fb97f9-mn8qz 1/1 Running 0 23m rook-ceph-osd-1-7c99657cfb-jdzvz 1/1 Running 1 25h rook-ceph-osd-2-5f9f6dfb5b-2mnw9 1/1 Running 0 25h", "oc get -n openshift-storage pvc | grep localblock", "ocs-deviceset-localblock-0-data-0-q4q6b Bound local-pv-8137c873 256Gi RWO localblock 10m ocs-deviceset-localblock-1-data-0-hr2fx Bound local-pv-ec7f2b80 256Gi RWO localblock 1d20h ocs-deviceset-localblock-2-data-0-6xhkf Bound local-pv-1e31f771 256Gi RWO localblock 1d20h", "oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/ <OSD-pod-name>", "oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm", "NODE compute-1", "oc debug node/ <node name>", "chroot /host", "lsblk", "lszdev", "TYPE ID zfcp-host 0.0.8204 yes yes zfcp-lun 0.0.8204:0x102107630b1b5060:0x4001402900000000 yes no sda sg0 zfcp-lun 0.0.8204:0x500407630c0b50a4:0x3002b03000000000 yes yes sdb sg1 qeth 0.0.bdd0:0.0.bdd1:0.0.bdd2 yes no encbdd0 generic-ccw 0.0.0009 yes no", "chzdev -d scsi-id", "chzdev -d 0.0.8204:0x500407630c0b50a4:0x3002b03000000000", "chzdev -e 0.0.8204:0x500507630b1b50a4:0x4001302a00000000", "lszdev zfcp-lun", "TYPE ID ON PERS NAMES zfcp-lun 0.0.8204:0x102107630b1b5060:0x4001402900000000 yes no sda sg0 zfcp-lun 0.0.8204:0x500507630b1b50a4:0x4001302a00000000 yes yes sdb sg1" ]
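For convenience, the generic OSD removal steps that recur in the procedures above can be chained into a single sequence. This is a hedged sketch that only reuses commands already listed; osd_id_to_remove and the FORCE_OSD_REMOVAL flag are placeholders that must be set for the cluster at hand, and the device-specific cleanup (PV deletion, dm-crypt removal, LocalVolume edits) still follows the full procedure.

# Integer after the rook-ceph-osd- prefix of the failed OSD pod.
osd_id_to_remove=0

# Scale down the failed OSD deployment and clear any previous removal job.
oc scale -n openshift-storage deployment rook-ceph-osd-${osd_id_to_remove} --replicas=0
oc delete -n openshift-storage job ocs-osd-removal-job

# Remove the OSD from the cluster; set FORCE_OSD_REMOVAL=true only when the conditions described above apply.
oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_IDS=${osd_id_to_remove} FORCE_OSD_REMOVAL=false | oc create -n openshift-storage -f -

# Confirm that the removal job completed before continuing with device cleanup.
oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1 | egrep -i 'completed removal'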
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/replacing_devices/openshift_data_foundation_deployed_using_local_storage_devices
8.179. rhel-guest-image
8.179. rhel-guest-image 8.179.1. RHBA-2013:1735 - rhel-guest-image bug fix and enhancement update Updated rhel-guest-image packages that fix several bugs and add one enhancement are now available for Red Hat Enterprise Linux 6. The rhel-guest-image packages provide a Red Hat Enterprise Linux 6.5 KVM Guest Image for cloud instances. This image is provided as a minimally configured system image which is available for use as-is or for configuration and customization as required by end users.

Bug Fixes

BZ#912475, BZ#952280 Prior to this update, the /etc/ssh/sshd_config file was not created properly due to a missing End of Line (EOL) return. Consequently, the sshd daemon failed to start and reported an error message. The writing of the sequence to /etc/ssh/sshd_config has been corrected, and sshd no longer fails to start in this scenario.

BZ#912801 Previously, the persistent udev rule was not removed from the guest image when the virtual machine (VM) was shut down. Consequently, when the VM was booted again and a different network card media access control (MAC) address was assigned to it, the udev rule caused the Network Interface Controller (NIC) to be set as eth1 instead of eth0. This incorrect configuration caused the instance to boot with no network support. This update adds the /etc/udev/rules.d/75-persistent-net-generator.rules file, and the VM configuration works as expected in the described scenario.

BZ#969487 The Red Hat Enterprise Linux guest image did not make use of the ttyS0 serial port by default, so tools that monitor the serial console log did not capture any information. This update adds a console log feature to the boot loader, and the Red Hat Enterprise Linux guest image now prints boot messages on the ttyS0 serial port and the standard console.

BZ#983611 Metadata access is handled by the network node, and the cloud-init service uses the metadata at startup time. Previously, the cloud-init service did not have the "NOZEROCONF=yes" stanza configured. Consequently, access to the 169.254.0.0/16 subnet range was not routed to the network node, so the cloud-init service failed to work. This update adds "NOZEROCONF=yes" to the /etc/sysconfig/network file in cloud images. Users should avoid turning on the zeroconf route in these images. To disable the zeroconf route at system boot time, edit the /etc/sysconfig/network file as root and add "NOZEROCONF=yes" on a new line at the end of the file.

BZ#1006883 Previously, the /etc/fstab file contained unnecessary entries. Consequently, mounting of file systems did not work properly when updating the kernel and grub installation. With this update, the unnecessary entries have been removed and mounting of file systems now works as expected in this scenario.

BZ#1011013 Previously, a dhclient configuration option was missing in the ifcfg-eth0 configuration file. As a consequence, network connectivity was lost if Dynamic Host Configuration Protocol (DHCP) negotiation failed. This update adds the "PERSISTENT_DHCLIENT=1" configuration option to the ifcfg-eth0 configuration file. Now, if the Network Interface Controller (NIC) is unable to negotiate a DHCP address, it retries.

Enhancement

BZ#974554 The guest image now includes a "virtual-guest" profile for the tuned daemon that is activated by default. This helps improve performance of the guest in most situations.

Users of rhel-guest-image are advised to upgrade to these updated packages, which fix these bugs and add this enhancement.
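For guest images that predate this update, the two configuration changes described under BZ#983611 and BZ#1011013 can be applied manually. The following is a hedged sketch of those manual equivalents, run as root inside the guest; it assumes the interface is eth0, as in the stock image, and that neither setting is already present in the files.

# Disable the zeroconf route so that cloud-init can reach the metadata service (BZ#983611).
echo "NOZEROCONF=yes" >> /etc/sysconfig/network

# Make dhclient keep retrying when DHCP negotiation fails (BZ#1011013).
echo "PERSISTENT_DHCLIENT=1" >> /etc/sysconfig/network-scripts/ifcfg-eth0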
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/rhel-guest-image
Chapter 9. Authorization APIs
Chapter 9. Authorization APIs 9.1. Authorization APIs 9.1.1. LocalSubjectAccessReview [authorization.k8s.io/v1] Description LocalSubjectAccessReview checks whether or not a user or group can perform an action in a given namespace. Having a namespace scoped resource makes it much easier to grant namespace scoped policy that includes permissions checking. Type object 9.1.2. SelfSubjectAccessReview [authorization.k8s.io/v1] Description SelfSubjectAccessReview checks whether or the current user can perform an action. Not filling in a spec.namespace means "in all namespaces". Self is a special case, because users should always be able to check whether they can perform an action Type object 9.1.3. SelfSubjectRulesReview [authorization.k8s.io/v1] Description SelfSubjectRulesReview enumerates the set of actions the current user can perform within a namespace. The returned list of actions may be incomplete depending on the server's authorization mode, and any errors experienced during the evaluation. SelfSubjectRulesReview should be used by UIs to show/hide actions, or to quickly let an end user reason about their permissions. It should NOT Be used by external systems to drive authorization decisions as this raises confused deputy, cache lifetime/revocation, and correctness concerns. SubjectAccessReview, and LocalAccessReview are the correct way to defer authorization decisions to the API server. Type object 9.1.4. SubjectAccessReview [authorization.k8s.io/v1] Description SubjectAccessReview checks whether or not a user or group can perform an action. Type object 9.2. LocalSubjectAccessReview [authorization.k8s.io/v1] Description LocalSubjectAccessReview checks whether or not a user or group can perform an action in a given namespace. Having a namespace scoped resource makes it much easier to grant namespace scoped policy that includes permissions checking. Type object Required spec 9.2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object SubjectAccessReviewSpec is a description of the access request. Exactly one of ResourceAuthorizationAttributes and NonResourceAuthorizationAttributes must be set status object SubjectAccessReviewStatus 9.2.1.1. .spec Description SubjectAccessReviewSpec is a description of the access request. Exactly one of ResourceAuthorizationAttributes and NonResourceAuthorizationAttributes must be set Type object Property Type Description extra object Extra corresponds to the user.Info.GetExtra() method from the authenticator. Since that is input to the authorizer it needs a reflection here. extra{} array (string) groups array (string) Groups is the groups you're testing for. 
nonResourceAttributes object NonResourceAttributes includes the authorization attributes available for non-resource requests to the Authorizer interface resourceAttributes object ResourceAttributes includes the authorization attributes available for resource requests to the Authorizer interface uid string UID information about the requesting user. user string User is the user you're testing for. If you specify "User" but not "Groups", then is it interpreted as "What if User were not a member of any groups 9.2.1.2. .spec.extra Description Extra corresponds to the user.Info.GetExtra() method from the authenticator. Since that is input to the authorizer it needs a reflection here. Type object 9.2.1.3. .spec.nonResourceAttributes Description NonResourceAttributes includes the authorization attributes available for non-resource requests to the Authorizer interface Type object Property Type Description path string Path is the URL path of the request verb string Verb is the standard HTTP verb 9.2.1.4. .spec.resourceAttributes Description ResourceAttributes includes the authorization attributes available for resource requests to the Authorizer interface Type object Property Type Description group string Group is the API Group of the Resource. "*" means all. name string Name is the name of the resource being requested for a "get" or deleted for a "delete". "" (empty) means all. namespace string Namespace is the namespace of the action being requested. Currently, there is no distinction between no namespace and all namespaces "" (empty) is defaulted for LocalSubjectAccessReviews "" (empty) is empty for cluster-scoped resources "" (empty) means "all" for namespace scoped resources from a SubjectAccessReview or SelfSubjectAccessReview resource string Resource is one of the existing resource types. "*" means all. subresource string Subresource is one of the existing resource types. "" means none. verb string Verb is a kubernetes resource API verb, like: get, list, watch, create, update, delete, proxy. "*" means all. version string Version is the API Version of the Resource. "*" means all. 9.2.1.5. .status Description SubjectAccessReviewStatus Type object Required allowed Property Type Description allowed boolean Allowed is required. True if the action would be allowed, false otherwise. denied boolean Denied is optional. True if the action would be denied, otherwise false. If both allowed is false and denied is false, then the authorizer has no opinion on whether to authorize the action. Denied may not be true if Allowed is true. evaluationError string EvaluationError is an indication that some error occurred during the authorization check. It is entirely possible to get an error and be able to continue determine authorization status in spite of it. For instance, RBAC can be missing a role, but enough roles are still present and bound to reason about the request. reason string Reason is optional. It indicates why a request was allowed or denied. 9.2.2. API endpoints The following API endpoints are available: /apis/authorization.k8s.io/v1/namespaces/{namespace}/localsubjectaccessreviews POST : create a LocalSubjectAccessReview 9.2.2.1. /apis/authorization.k8s.io/v1/namespaces/{namespace}/localsubjectaccessreviews Table 9.1. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 9.2. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. pretty string If 'true', then the output is pretty printed. HTTP method POST Description create a LocalSubjectAccessReview Table 9.3. Body parameters Parameter Type Description body LocalSubjectAccessReview schema Table 9.4. HTTP responses HTTP code Reponse body 200 - OK LocalSubjectAccessReview schema 201 - Created LocalSubjectAccessReview schema 202 - Accepted LocalSubjectAccessReview schema 401 - Unauthorized Empty 9.3. SelfSubjectAccessReview [authorization.k8s.io/v1] Description SelfSubjectAccessReview checks whether or the current user can perform an action. Not filling in a spec.namespace means "in all namespaces". Self is a special case, because users should always be able to check whether they can perform an action Type object Required spec 9.3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object SelfSubjectAccessReviewSpec is a description of the access request. Exactly one of ResourceAuthorizationAttributes and NonResourceAuthorizationAttributes must be set status object SubjectAccessReviewStatus 9.3.1.1. .spec Description SelfSubjectAccessReviewSpec is a description of the access request. 
Exactly one of ResourceAuthorizationAttributes and NonResourceAuthorizationAttributes must be set Type object Property Type Description nonResourceAttributes object NonResourceAttributes includes the authorization attributes available for non-resource requests to the Authorizer interface resourceAttributes object ResourceAttributes includes the authorization attributes available for resource requests to the Authorizer interface 9.3.1.2. .spec.nonResourceAttributes Description NonResourceAttributes includes the authorization attributes available for non-resource requests to the Authorizer interface Type object Property Type Description path string Path is the URL path of the request verb string Verb is the standard HTTP verb 9.3.1.3. .spec.resourceAttributes Description ResourceAttributes includes the authorization attributes available for resource requests to the Authorizer interface Type object Property Type Description group string Group is the API Group of the Resource. "*" means all. name string Name is the name of the resource being requested for a "get" or deleted for a "delete". "" (empty) means all. namespace string Namespace is the namespace of the action being requested. Currently, there is no distinction between no namespace and all namespaces "" (empty) is defaulted for LocalSubjectAccessReviews "" (empty) is empty for cluster-scoped resources "" (empty) means "all" for namespace scoped resources from a SubjectAccessReview or SelfSubjectAccessReview resource string Resource is one of the existing resource types. "*" means all. subresource string Subresource is one of the existing resource types. "" means none. verb string Verb is a kubernetes resource API verb, like: get, list, watch, create, update, delete, proxy. "*" means all. version string Version is the API Version of the Resource. "*" means all. 9.3.1.4. .status Description SubjectAccessReviewStatus Type object Required allowed Property Type Description allowed boolean Allowed is required. True if the action would be allowed, false otherwise. denied boolean Denied is optional. True if the action would be denied, otherwise false. If both allowed is false and denied is false, then the authorizer has no opinion on whether to authorize the action. Denied may not be true if Allowed is true. evaluationError string EvaluationError is an indication that some error occurred during the authorization check. It is entirely possible to get an error and be able to continue determine authorization status in spite of it. For instance, RBAC can be missing a role, but enough roles are still present and bound to reason about the request. reason string Reason is optional. It indicates why a request was allowed or denied. 9.3.2. API endpoints The following API endpoints are available: /apis/authorization.k8s.io/v1/selfsubjectaccessreviews POST : create a SelfSubjectAccessReview 9.3.2.1. /apis/authorization.k8s.io/v1/selfsubjectaccessreviews Table 9.5. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . 
fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. pretty string If 'true', then the output is pretty printed. HTTP method POST Description create a SelfSubjectAccessReview Table 9.6. Body parameters Parameter Type Description body SelfSubjectAccessReview schema Table 9.7. HTTP responses HTTP code Reponse body 200 - OK SelfSubjectAccessReview schema 201 - Created SelfSubjectAccessReview schema 202 - Accepted SelfSubjectAccessReview schema 401 - Unauthorized Empty 9.4. SelfSubjectRulesReview [authorization.k8s.io/v1] Description SelfSubjectRulesReview enumerates the set of actions the current user can perform within a namespace. The returned list of actions may be incomplete depending on the server's authorization mode, and any errors experienced during the evaluation. SelfSubjectRulesReview should be used by UIs to show/hide actions, or to quickly let an end user reason about their permissions. It should NOT Be used by external systems to drive authorization decisions as this raises confused deputy, cache lifetime/revocation, and correctness concerns. SubjectAccessReview, and LocalAccessReview are the correct way to defer authorization decisions to the API server. Type object Required spec 9.4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object SelfSubjectRulesReviewSpec defines the specification for SelfSubjectRulesReview. status object SubjectRulesReviewStatus contains the result of a rules check. This check can be incomplete depending on the set of authorizers the server is configured with and any errors experienced during evaluation. Because authorization rules are additive, if a rule appears in a list it's safe to assume the subject has that permission, even if that list is incomplete. 9.4.1.1. .spec Description SelfSubjectRulesReviewSpec defines the specification for SelfSubjectRulesReview. 
Type object Property Type Description namespace string Namespace to evaluate rules for. Required. 9.4.1.2. .status Description SubjectRulesReviewStatus contains the result of a rules check. This check can be incomplete depending on the set of authorizers the server is configured with and any errors experienced during evaluation. Because authorization rules are additive, if a rule appears in a list it's safe to assume the subject has that permission, even if that list is incomplete. Type object Required resourceRules nonResourceRules incomplete Property Type Description evaluationError string EvaluationError can appear in combination with Rules. It indicates an error occurred during rule evaluation, such as an authorizer that doesn't support rule evaluation, and that ResourceRules and/or NonResourceRules may be incomplete. incomplete boolean Incomplete is true when the rules returned by this call are incomplete. This is most commonly encountered when an authorizer, such as an external authorizer, doesn't support rules evaluation. nonResourceRules array NonResourceRules is the list of actions the subject is allowed to perform on non-resources. The list ordering isn't significant, may contain duplicates, and possibly be incomplete. nonResourceRules[] object NonResourceRule holds information that describes a rule for the non-resource resourceRules array ResourceRules is the list of actions the subject is allowed to perform on resources. The list ordering isn't significant, may contain duplicates, and possibly be incomplete. resourceRules[] object ResourceRule is the list of actions the subject is allowed to perform on resources. The list ordering isn't significant, may contain duplicates, and possibly be incomplete. 9.4.1.3. .status.nonResourceRules Description NonResourceRules is the list of actions the subject is allowed to perform on non-resources. The list ordering isn't significant, may contain duplicates, and possibly be incomplete. Type array 9.4.1.4. .status.nonResourceRules[] Description NonResourceRule holds information that describes a rule for the non-resource Type object Required verbs Property Type Description nonResourceURLs array (string) NonResourceURLs is a set of partial urls that a user should have access to. "*"s are allowed, but only as the full, final step in the path. "*" means all. verbs array (string) Verb is a list of kubernetes non-resource API verbs, like: get, post, put, delete, patch, head, options. "*" means all. 9.4.1.5. .status.resourceRules Description ResourceRules is the list of actions the subject is allowed to perform on resources. The list ordering isn't significant, may contain duplicates, and possibly be incomplete. Type array 9.4.1.6. .status.resourceRules[] Description ResourceRule is the list of actions the subject is allowed to perform on resources. The list ordering isn't significant, may contain duplicates, and possibly be incomplete. Type object Required verbs Property Type Description apiGroups array (string) APIGroups is the name of the APIGroup that contains the resources. If multiple API groups are specified, any action requested against one of the enumerated resources in any API group will be allowed. "*" means all. resourceNames array (string) ResourceNames is an optional white list of names that the rule applies to. An empty set means that everything is allowed. "*" means all. resources array (string) Resources is a list of resources this rule applies to. "*" means all in the specified apiGroups. 
"*/foo" represents the subresource 'foo' for all resources in the specified apiGroups. verbs array (string) Verb is a list of kubernetes resource API verbs, like: get, list, watch, create, update, delete, proxy. "*" means all. 9.4.2. API endpoints The following API endpoints are available: /apis/authorization.k8s.io/v1/selfsubjectrulesreviews POST : create a SelfSubjectRulesReview 9.4.2.1. /apis/authorization.k8s.io/v1/selfsubjectrulesreviews Table 9.8. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. pretty string If 'true', then the output is pretty printed. HTTP method POST Description create a SelfSubjectRulesReview Table 9.9. Body parameters Parameter Type Description body SelfSubjectRulesReview schema Table 9.10. HTTP responses HTTP code Response body 200 - OK SelfSubjectRulesReview schema 201 - Created SelfSubjectRulesReview schema 202 - Accepted SelfSubjectRulesReview schema 401 - Unauthorized Empty 9.5. SubjectAccessReview [authorization.k8s.io/v1] Description SubjectAccessReview checks whether or not a user or group can perform an action. Type object Required spec 9.5.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object SubjectAccessReviewSpec is a description of the access request. 
Exactly one of ResourceAuthorizationAttributes and NonResourceAuthorizationAttributes must be set status object SubjectAccessReviewStatus 9.5.1.1. .spec Description SubjectAccessReviewSpec is a description of the access request. Exactly one of ResourceAuthorizationAttributes and NonResourceAuthorizationAttributes must be set Type object Property Type Description extra object Extra corresponds to the user.Info.GetExtra() method from the authenticator. Since that is input to the authorizer it needs a reflection here. extra{} array (string) groups array (string) Groups is the groups you're testing for. nonResourceAttributes object NonResourceAttributes includes the authorization attributes available for non-resource requests to the Authorizer interface resourceAttributes object ResourceAttributes includes the authorization attributes available for resource requests to the Authorizer interface uid string UID information about the requesting user. user string User is the user you're testing for. If you specify "User" but not "Groups", then is it interpreted as "What if User were not a member of any groups 9.5.1.2. .spec.extra Description Extra corresponds to the user.Info.GetExtra() method from the authenticator. Since that is input to the authorizer it needs a reflection here. Type object 9.5.1.3. .spec.nonResourceAttributes Description NonResourceAttributes includes the authorization attributes available for non-resource requests to the Authorizer interface Type object Property Type Description path string Path is the URL path of the request verb string Verb is the standard HTTP verb 9.5.1.4. .spec.resourceAttributes Description ResourceAttributes includes the authorization attributes available for resource requests to the Authorizer interface Type object Property Type Description group string Group is the API Group of the Resource. "*" means all. name string Name is the name of the resource being requested for a "get" or deleted for a "delete". "" (empty) means all. namespace string Namespace is the namespace of the action being requested. Currently, there is no distinction between no namespace and all namespaces "" (empty) is defaulted for LocalSubjectAccessReviews "" (empty) is empty for cluster-scoped resources "" (empty) means "all" for namespace scoped resources from a SubjectAccessReview or SelfSubjectAccessReview resource string Resource is one of the existing resource types. "*" means all. subresource string Subresource is one of the existing resource types. "" means none. verb string Verb is a kubernetes resource API verb, like: get, list, watch, create, update, delete, proxy. "*" means all. version string Version is the API Version of the Resource. "*" means all. 9.5.1.5. .status Description SubjectAccessReviewStatus Type object Required allowed Property Type Description allowed boolean Allowed is required. True if the action would be allowed, false otherwise. denied boolean Denied is optional. True if the action would be denied, otherwise false. If both allowed is false and denied is false, then the authorizer has no opinion on whether to authorize the action. Denied may not be true if Allowed is true. evaluationError string EvaluationError is an indication that some error occurred during the authorization check. It is entirely possible to get an error and be able to continue determine authorization status in spite of it. For instance, RBAC can be missing a role, but enough roles are still present and bound to reason about the request. reason string Reason is optional. 
It indicates why a request was allowed or denied. 9.5.2. API endpoints The following API endpoints are available: /apis/authorization.k8s.io/v1/subjectaccessreviews POST : create a SubjectAccessReview 9.5.2.1. /apis/authorization.k8s.io/v1/subjectaccessreviews Table 9.11. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. pretty string If 'true', then the output is pretty printed. HTTP method POST Description create a SubjectAccessReview Table 9.12. Body parameters Parameter Type Description body SubjectAccessReview schema Table 9.13. HTTP responses HTTP code Reponse body 200 - OK SubjectAccessReview schema 201 - Created SubjectAccessReview schema 202 - Accepted SubjectAccessReview schema 401 - Unauthorized Empty
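As a quick illustration of how these review APIs are used, the following request submits a SelfSubjectAccessReview that asks whether the current user can list pods in the default namespace. The API server address, port, and token shown here are placeholders, not values from this reference; substitute the details of your own cluster:
curl -k \
  -H "Authorization: Bearer <YOUR_TOKEN>" \
  -H "Content-Type: application/json" \
  -d '{"apiVersion":"authorization.k8s.io/v1","kind":"SelfSubjectAccessReview","spec":{"resourceAttributes":{"namespace":"default","verb":"list","resource":"pods"}}}' \
  "https://<API_SERVER>:6443/apis/authorization.k8s.io/v1/selfsubjectaccessreviews"
The server echoes the review object back with the status.allowed field populated. The oc auth can-i command performs an equivalent check from the command line.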
null
https://docs.redhat.com/en/documentation/red_hat_build_of_microshift/4.18/html/api_reference/authorization-apis-1
Chapter 2. Admin REST API
Chapter 2. Admin REST API Red Hat Single Sign-On comes with a fully functional Admin REST API with all features provided by the Admin Console. To invoke the API you need to obtain an access token with the appropriate permissions. The required permissions are described in the Server Administration Guide. You can obtain a token by enabling authentication for your application using Red Hat Single Sign-On; see the Securing Applications and Services Guide. You can also use direct access grant to obtain an access token. 2.1. Examples of using CURL 2.1.1. Authenticating with a username and password Procedure Obtain an access token for the user in the realm master with username admin and password password : curl \ -d "client_id=admin-cli" \ -d "username=admin" \ -d "password=password" \ -d "grant_type=password" \ "http://localhost:8080/auth/realms/master/protocol/openid-connect/token" Note By default this token expires in 1 minute The result will be a JSON document. Invoke the API you need by extracting the value of the access_token property. Invoke the API by including the value in the Authorization header of requests to the API. The following example shows how to get the details of the master realm: curl \ -H "Authorization: bearer eyJhbGciOiJSUz..." \ "http://localhost:8080/auth/admin/realms/master" 2.1.2. Authenticating with a service account To authenticate against the Admin REST API using a client_id and a client_secret , perform this procedure. Procedure Make sure the client is configured as follows: client_id is a confidential client that belongs to the realm master client_id has Service Accounts Enabled option enabled client_id has a custom "Audience" mapper Included Client Audience: security-admin-console Check that client_id has the role 'admin' assigned in the "Service Account Roles" tab. curl \ -d "client_id=<YOUR_CLIENT_ID>" \ -d "client_secret=<YOUR_CLIENT_SECRET>" \ -d "grant_type=client_credentials" \ "http://localhost:8080/auth/realms/master/protocol/openid-connect/token" 2.2. Additional resources Server Administration Guide Securing Applications and Services Guide API Documentation
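Once you have an access token, any other Admin REST API endpoint can be invoked in the same way. For example, the following request lists the users of the realm master ; the host, port, and shortened token are placeholder values for a default local installation:
curl \
  -H "Authorization: bearer eyJhbGciOiJSUz..." \
  "http://localhost:8080/auth/admin/realms/master/users"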
[ "curl -d \"client_id=admin-cli\" -d \"username=admin\" -d \"password=password\" -d \"grant_type=password\" \"http://localhost:8080/auth/realms/master/protocol/openid-connect/token\"", "curl -H \"Authorization: bearer eyJhbGciOiJSUz...\" \"http://localhost:8080/auth/admin/realms/master\"", "curl -d \"client_id=<YOUR_CLIENT_ID>\" -d \"client_secret=<YOUR_CLIENT_SECRET>\" -d \"grant_type=client_credentials\" \"http://localhost:8080/auth/realms/master/protocol/openid-connect/token\"" ]
https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.6/html/server_developer_guide/admin_rest_api
9.6. Storage Devices
9.6. Storage Devices You can install Red Hat Enterprise Linux on a large variety of storage devices. This screen allows you to select either basic or specialized storage devices. Figure 9.4. Storage devices Basic Storage Devices Select Basic Storage Devices to install Red Hat Enterprise Linux on the following storage devices: hard drives or solid-state drives connected directly to the local system. Specialized Storage Devices Select Specialized Storage Devices to install Red Hat Enterprise Linux on the following storage devices: Storage area networks (SANs) Direct access storage devices (DASDs) Firmware RAID devices Multipath devices Use the Specialized Storage Devices option to configure Internet Small Computer System Interface (iSCSI) and FCoE (Fiber Channel over Ethernet) connections. If you select Basic Storage Devices , anaconda automatically detects the local storage attached to the system and does not require further input from you. Proceed to Section 9.7, "Setting the Hostname" . Note Monitoring of LVM and software RAID devices by the mdeventd daemon is not performed during installation. 9.6.1. The Storage Devices Selection Screen The storage devices selection screen displays all storage devices to which anaconda has access. Figure 9.5. Select storage devices - Basic devices Figure 9.6. Select storage devices - Multipath Devices Figure 9.7. Select storage devices - Other SAN Devices Devices are grouped under the following tabs: Basic Devices Basic storage devices directly connected to the local system, such as hard disk drives and solid-state drives. Firmware RAID Storage devices attached to a firmware RAID controller. Multipath Devices Storage devices accessible through more than one path, such as through multiple SCSI controllers or Fiber Channel ports on the same system. Important The installer only detects multipath storage devices with serial numbers that are 16 or 32 characters in length. Other SAN Devices Any other devices available on a storage area network (SAN). If you do need to configure iSCSI or FCoE storage, click Add Advanced Target and refer to Section 9.6.1.1, " Advanced Storage Options " . The storage devices selection screen also contains a Search tab that allows you to filter storage devices either by their World Wide Identifier (WWID) or by the port, target, or logical unit number (LUN) at which they are accessed. Figure 9.8. The Storage Devices Search Tab The tab contains a drop-down menu to select searching by port, target, WWID, or LUN (with corresponding text boxes for these values). Searching by WWID or LUN requires additional values in the corresponding text box. Each tab presents a list of devices detected by anaconda , with information about the device to help you to identify it. A small drop-down menu marked with an icon is located to the right of the column headings. This menu allows you to select the types of data presented on each device. For example, the menu on the Multipath Devices tab allows you to specify any of WWID , Capacity , Vendor , Interconnect , and Paths to include among the details presented for each device. Reducing or expanding the amount of information presented might help you to identify particular devices. Figure 9.9. Selecting Columns Each device is presented on a separate row, with a checkbox to its left. Click the checkbox to make a device available during the installation process, or click the radio button at the left of the column headings to select or deselect all the devices listed in a particular screen. 
Later in the installation process, you can choose to install Red Hat Enterprise Linux onto any of the devices selected here, and can choose to automatically mount any of the other devices selected here as part of the installed system. Note that the devices that you select here are not automatically erased by the installation process. Selecting a device on this screen does not, in itself, place data stored on the device at risk. Note also that any devices that you do not select here to form part of the installed system can be added to the system after installation by modifying the /etc/fstab file. Important Any storage devices that you do not select on this screen are hidden from anaconda entirely. To chain load the Red Hat Enterprise Linux boot loader from a different boot loader, select all the devices presented in this screen. When you have selected the storage devices to make available during installation, click Next and proceed to Section 9.11, "Initializing the Hard Disk" . 9.6.1.1. Advanced Storage Options From this screen you can configure an iSCSI (SCSI over TCP/IP) target or FCoE (Fibre Channel over Ethernet) SAN (storage area network). Refer to Appendix B, iSCSI Disks for an introduction to iSCSI. Figure 9.10. Advanced Storage Options Select Add iSCSI target or Add FCoE SAN and click Add drive . If adding an iSCSI target, optionally check the box labeled Bind targets to network interfaces . 9.6.1.1.1. Select and configure a network interface The Advanced Storage Options screen lists the active network interfaces anaconda has found on your system. If none are found, anaconda must activate an interface through which to connect to the storage devices. Click Configure Network on the Advanced Storage Options screen to configure and activate one using NetworkManager to use during installation. Alternatively, anaconda will prompt you with the Select network interface dialog after you click Add drive . Figure 9.11. Select network interface Select an interface from the drop-down menu. Click OK . Anaconda then starts NetworkManager to allow you to configure the interface. Figure 9.12. Network Connections For details of how to use NetworkManager , refer to Section 9.7, "Setting the Hostname" . 9.6.1.1.2. Configure iSCSI parameters To add an iSCSI target, select Add iSCSI target and click Add drive . To use iSCSI storage devices for the installation, anaconda must be able to discover them as iSCSI targets and be able to create an iSCSI session to access them. Each of these steps might require a username and password for CHAP (Challenge Handshake Authentication Protocol) authentication. Additionally, you can configure an iSCSI target to authenticate the iSCSI initiator on the system to which the target is attached ( reverse CHAP ), both for discovery and for the session. Used together, CHAP and reverse CHAP are called mutual CHAP or two-way CHAP . Mutual CHAP provides the greatest level of security for iSCSI connections, particularly if the username and password are different for CHAP authentication and reverse CHAP authentication. Repeat the iSCSI discovery and iSCSI login steps as many times as necessary to add all required iSCSI storage. However, you cannot change the name of the iSCSI initiator after you attempt discovery for the first time. To change the iSCSI initiator name, you must restart the installation. Procedure 9.1. iSCSI discovery Use the iSCSI Discovery Details dialog to provide anaconda with the information that it needs to discover the iSCSI target. Figure 9.13. 
The iSCSI Discovery Details dialog Enter the IP address of the iSCSI target in the Target IP Address field. Provide a name in the iSCSI Initiator Name field for the iSCSI initiator in iSCSI qualified name (IQN) format. A valid IQN contains: the string iqn. (note the period) a date code that specifies the year and month in which your organization's Internet domain or subdomain name was registered, represented as four digits for the year, a dash, and two digits for the month, followed by a period. For example, represent September 2010 as 2010-09. your organization's Internet domain or subdomain name, presented in reverse order with the top-level domain first. For example, represent the subdomain storage.example.com as com.example.storage a colon followed by a string that uniquely identifies this particular iSCSI initiator within your domain or subdomain. For example, :diskarrays-sn-a8675309 . A complete IQN therefore resembles: iqn.2010-09.storage.example.com:diskarrays-sn-a8675309 , and anaconda pre-populates the iSCSI Initiator Name field with a name in this format to help you with the structure. For more information on IQNs, refer to 3.2.6. iSCSI Names in RFC 3720 - Internet Small Computer Systems Interface (iSCSI) available from http://tools.ietf.org/html/rfc3720#section-3.2.6 and 1. iSCSI Names and Addresses in RFC 3721 - Internet Small Computer Systems Interface (iSCSI) Naming and Discovery available from http://tools.ietf.org/html/rfc3721#section-1 . Use the drop-down menu to specify the type of authentication to use for iSCSI discovery: Figure 9.14. iSCSI discovery authentication no credentials CHAP pair CHAP pair and a reverse pair If you selected CHAP pair as the authentication type, provide the username and password for the iSCSI target in the CHAP Username and CHAP Password fields. Figure 9.15. CHAP pair If you selected CHAP pair and a reverse pair as the authentication type, provide the username and password for the iSCSI target in the CHAP Username and CHAP Password field and the username and password for the iSCSI initiator in the Reverse CHAP Username and Reverse CHAP Password fields. Figure 9.16. CHAP pair and a reverse pair Click Start Discovery . Anaconda attempts to discover an iSCSI target based on the information that you provided. If discovery succeeds, the iSCSI Discovered Nodes dialog presents you with a list of all the iSCSI nodes discovered on the target. Each node is presented with a checkbox beside it. Click the checkboxes to select the nodes to use for installation. Figure 9.17. The iSCSI Discovered Nodes dialog Click Login to initiate an iSCSI session. Procedure 9.2. Starting an iSCSI session Use the iSCSI Nodes Login dialog to provide anaconda with the information that it needs to log into the nodes on the iSCSI target and start an iSCSI session. Figure 9.18. The iSCSI Nodes Login dialog Use the drop-down menu to specify the type of authentication to use for the iSCSI session: Figure 9.19. iSCSI session authentication no credentials CHAP pair CHAP pair and a reverse pair Use the credentials from the discovery step If your environment uses the same type of authentication and same username and password for iSCSI discovery and for the iSCSI session, select Use the credentials from the discovery step to reuse these credentials. If you selected CHAP pair as the authentication type, provide the username and password for the iSCSI target in the CHAP Username and CHAP Password fields. Figure 9.20. 
CHAP pair If you selected CHAP pair and a reverse pair as the authentication type, provide the username and password for the iSCSI target in the CHAP Username and CHAP Password fields and the username and password for the iSCSI initiator in the Reverse CHAP Username and Reverse CHAP Password fields. Figure 9.21. CHAP pair and a reverse pair Click Login . Anaconda attempts to log into the nodes on the iSCSI target based on the information that you provided. The iSCSI Login Results dialog presents you with the results. Figure 9.22. The iSCSI Login Results dialog Click OK to continue. 9.6.1.1.3. Configure FCoE Parameters To configure an FCoE SAN, select Add FCoE SAN and click Add Drive . In the dialog box that appears after you click Add drive , select the network interface that is connected to your FCoE switch and click Add FCoE Disk(s) . Figure 9.23. Configure FCoE Parameters Data Center Bridging (DCB) is a set of enhancements to the Ethernet protocols designed to increase the efficiency of Ethernet connections in storage networks and clusters. Enable or disable the installer's awareness of DCB with the checkbox in this dialog. This should only be set for networking interfaces that require a host-based DCBX client. Configurations on interfaces that implement a hardware DCBX client should leave this checkbox empty. Auto VLAN indicates whether VLAN discovery should be performed. If this box is checked, then the FIP VLAN discovery protocol will run on the Ethernet interface once the link configuration has been validated. If they are not already configured, network interfaces for any discovered FCoE VLANs will be automatically created and FCoE instances will be created on the VLAN interfaces.
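If you later automate an installation with a kickstart file instead of the graphical installer, the equivalent iSCSI settings can be provided with the iscsiname and iscsi kickstart commands. The initiator name below reuses the example IQN from this section, while the target IP address, target IQN, and CHAP credentials are placeholder values to replace with your own:
iscsiname iqn.2010-09.storage.example.com:diskarrays-sn-a8675309
iscsi --ipaddr=192.168.1.10 --port=3260 --target=<target_iqn> --user=<chap_username> --password=<chap_password>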
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/Storage_Devices-x86
2.4. Cluster Considerations
2.4. Cluster Considerations When determining the number of nodes that your system will contain, note that there is a trade-off between high availability and performance. With a larger number of nodes, it becomes increasingly difficult to make workloads scale. For that reason, Red Hat does not support using GFS2 for cluster file system deployments greater than 16 nodes. Deploying a cluster file system is not a "drop in" replacement for a single node deployment. Red Hat recommends that you allow a period of around 8-12 weeks of testing on new installations to verify that the system is working at the required performance level. During this period, any performance or functional issues can be worked out, and any queries should be directed to the Red Hat support team. Red Hat recommends that customers considering deploying clusters have their configurations reviewed by Red Hat support before deployment to avoid any possible support issues later on.
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/global_file_system_2/s1-deploy-gfs2
Chapter 197. Kubernetes ConfigMap Component
Chapter 197. Kubernetes ConfigMap Component Available as of Camel version 2.17 The Kubernetes ConfigMap component is one of Kubernetes Components which provides a producer to execute kubernetes ConfigMap operations. 197.1. Component Options The Kubernetes ConfigMap component has no options. 197.2. Endpoint Options The Kubernetes ConfigMap endpoint is configured using URI syntax: with the following path and query parameters: 197.2.1. Path Parameters (1 parameters): Name Description Default Type masterUrl Required Kubernetes API server URL String 197.2.2. Query Parameters (20 parameters): Name Description Default Type apiVersion (producer) The Kubernetes API Version to use String dnsDomain (producer) The dns domain, used for ServiceCall EIP String kubernetesClient (producer) Default KubernetesClient to use if provided KubernetesClient operation (producer) Producer operation to do on Kubernetes String portName (producer) The port name, used for ServiceCall EIP String portProtocol (producer) The port protocol, used for ServiceCall EIP tcp String connectionTimeout (advanced) Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean caCertData (security) The CA Cert Data String caCertFile (security) The CA Cert File String clientCertData (security) The Client Cert Data String clientCertFile (security) The Client Cert File String clientKeyAlgo (security) The Key Algorithm used by the client String clientKeyData (security) The Client Key data String clientKeyFile (security) The Client Key file String clientKeyPassphrase (security) The Client Key Passphrase String oauthToken (security) The Auth Token String password (security) Password to connect to Kubernetes String trustCerts (security) Define if the certs we used are trusted anyway or not Boolean username (security) Username to connect to Kubernetes String 197.3. Spring Boot Auto-Configuration The component supports 2 options, which are listed below. Name Description Default Type camel.component.kubernetes-config-maps.enabled Whether to enable auto configuration of the kubernetes-config-maps component. This is enabled by default. Boolean camel.component.kubernetes-config-maps.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean
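For illustration, a producer endpoint that lists ConfigMaps might look like the following URI; the master URL, token, and operation name are example values that must match your cluster and the operation you need:
kubernetes-config-maps:https://kubernetes.default.svc?oauthToken=RAW(myToken)&operation=listConfigMaps
A route typically sends messages to such an endpoint from another endpoint such as direct:start , setting message headers (for example, the ConfigMap name) when the chosen operation requires them.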
[ "kubernetes-config-maps:masterUrl" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/kubernetes-config-maps-component
Chapter 11. Restricting domains for PAM services using SSSD
Chapter 11. Restricting domains for PAM services using SSSD Pluggable authentication modules (PAMs) are a common framework for authentication and authorization. Most system applications in Red Hat Enterprise Linux depend on underlying PAM configuration for authentication and authorization. System Security Services Daemon (SSSD) enables you to restrict which domains PAM services can access. SSSD evaluates authentication requests from PAM services based on the user that runs the particular PAM service. This means, if the PAM service user can access an SSSD domain then the PAM service also can access that domain. 11.1. About PAM Pluggable Authentication Modules (PAMs) provide a centralized authentication mechanism, which a system application can use to relay authentication to a centrally configured framework. PAM is pluggable because a PAM module exists for different types of authentication sources, such as Kerberos, SSSD, NIS, or the local file system. You can prioritize different authentication sources. This modular architecture offers administrators a great deal of flexibility in setting authentication policies for the system. PAM is a useful system for developers and administrators for several reasons: PAM provides a common authentication scheme, which can be used with a wide variety of applications. PAM provides significant flexibility and control over authentication for system administrators. PAM provides a single, fully-documented library, which allows developers to write programs without having to create their own authentication schemes. 11.2. Domain-access restriction options The following options are available to restrict access to selected domains: pam_trusted_users in /etc/sssd/sssd.conf This option accepts a list of numerical UIDs or user names representing the PAM services that SSSD trusts. The default setting is all , which means all service users are trusted and can access any domain. pam_public_domains in /etc/sssd/sssd.conf This option accepts a list of public SSSD domains. Public domains are domains accessible even for untrusted PAM service users. The option also accepts the all and none values. The default value is none , which means no domains are public and untrusted service users cannot access any domain. domains for PAM configuration files This option specifies a list of domains against which a PAM service can authenticate. If you use domains without specifying any domain, the PAM service will not be able to authenticate against any domain, for example: If the PAM configuration file uses domains , the PAM service is able to authenticate against all domains when that service is running under a trusted user. The domains option in the /etc/sssd/sssd.conf SSSD configuration file also specifies a list of domains to which SSSD attempts to authenticate. Note that the domains option in a PAM configuration file cannot extend the list of domains in sssd.conf , it can only restrict the sssd.conf list of domains by specifying a shorter list. Therefore, if a domain is specified in the PAM file but not in sssd.conf , the PAM service cannot authenticate against the domain. The default settings pam_trusted_users = all and pam_public_domains = none specify that all PAM service users are trusted and can access any domain. Using the domains option for PAM configuration files restricts the access to the domains. Specifying a domain using domains in the PAM configuration file while sssd.conf contains pam_public_domains also requires to specify the domain in pam_public_domains . 
The pam_public_domains option without including the required domain leads the PAM service to unsuccessful authentication against the domain in case this service is running under an untrusted user. Note Domain restrictions defined in a PAM configuration file apply to authentication actions only, not to user lookups. Additional resources For more details on the pam_trusted_users and pam_public_domains options, see the sssd.conf(5) man page on your system. For more details on the domains option used in PAM configuration files, see the pam_sss(8) man page on your system. 11.3. Restricting domains for a PAM service This procedure shows how to restrict a PAM service authentication against the domains. Prerequisites SSSD installed and running. Procedure Configure SSSD to access the required domain or domains. Define the domains against which SSSD can authenticate in the domains option in the /etc/sssd/sssd.conf file: Specify the domain or domains to which a PAM service can authenticate by setting the domains option in the PAM configuration file. For example: In this example, you allow the PAM service to authenticate against domain1 only. Verification Authenticate against domain1 . It must be successful. 11.4. About PAM configuration files PAM configuration files specify the authentication methods and policies for services in Red Hat Enterprise Linux. Each PAM aware application or service has a corresponding file in the /etc/pam.d/ directory. The file is named after the service it controls access to. For example, the login program has a corresponding PAM configuration file named /etc/pam.d/login . Warning Manual editing of PAM configuration files might lead to authentication and access issues. Configure PAMs using the authselect tool. PAM configuration file format Each PAM configuration file consists of directives that define settings for a specific module. PAM uses arguments to pass information to a pluggable module during authentication for some modules. For example: PAM module types A PAM module type specifies the type of authentication task that a module performs. The module can perform the following tasks: account management authentication management password management session management An individual module can provide any or all module types. For example, pam_unix.so provides all four module types. The module name, such as pam_unix.so , provides PAM with the name of the library containing the specified module type. The directory name is omitted because the application is linked to the appropriate version of libpam , which can locate the correct version of the module. Module type directives can be stacked, or placed upon one another, so that multiple modules are used together for one purpose. The order of the modules is important, along with the control flags, it determines how significant the success or failure of a particular module is to the overall goal of authenticating the user to the service. You can stack PAM modules to enforce specific conditions that must be met before a user is allowed to authenticate. PAM control flags When a PAM module performs its function, it returns a success or failure result. Control flags instruct PAM on how to handle this result. Simple flags use a keyword, more complex syntax follows [value1=action1 value2=action2 ... ] format. Note When a module's control flag uses the sufficient or requisite value, the order in which the modules are listed is important to the authentication process. 
For a detailed description of PAM control flags, including a list of options, see the pam.conf(5) man page. Additional resources pam.conf(5) man page pam(8) man page 11.5. PAM configuration example See an example of PAM configuration file with the detailed description. Annotated PAM configuration example 1 This line ensures that the root login is allowed only from a terminal which is listed in the /etc/securetty file, if this file exists. If the terminal is not listed in the file, the login as root fails with a Login incorrect message. 2 Prompts the user for a password and checks it against the information stored in /etc/passwd and, if it exists, /etc/shadow . The nullok argument allows a blank password. 3 Checks if /etc/nologin file exists. If it exists, and the user is not root, authentication fails. Note In this example, all three auth modules are checked, even if the first auth module fails. This is a good security approach that prevents potential attacker from knowing at what stage their authentication failed. 4 Verifies the user's account. For example, if shadow passwords have been enabled, the account type of the pam_unix.so module checks for expired accounts or if the user needs to change the password. 5 Prompts for a new password if the current one has expired or when the user manually requests a password change. Then it checks the strength of the newly created password to ensure it meets quality requirements and is not easily determined by a dictionary-based password cracking program. The argument retry=3 specifies that user has three attempts to create a strong password. 6 The pam_unix.so module manages password changes. The shadow argument instructs the module to create shadow passwords when updating a user's password. The nullok argument allows the user to change their password from a blank password, otherwise a null password is treated as an account lock. The use_authtok argument accepts any password that was entered previously without prompting the user again. In this way, all new passwords must pass the pam_pwquality.so check for secure passwords before being accepted. 7 The pam_unix.so module manages the session and logs the user name and service type at the beginning and end of each session. Logs are collected by the systemd-journald service and can be viewed by using the journalctl command. Logs are also stored in /var/log/secure . This module can be supplemented by stacking it with other session modules for additional functionality.
[ "auth required pam_sss.so domains=", "[sssd] domains = domain1, domain2, domain3", "auth sufficient pam_sss.so forward_pass domains=domain1 account [default=bad success=ok user_unknown=ignore] pam_sss.so password sufficient pam_sss.so use_authtok", "module_type control_flag module_name module_arguments", "auth required pam_unix.so", "#%PAM-1.0 auth required pam_securetty.so 1 auth required pam_unix.so nullok 2 auth required pam_nologin.so 3 account required pam_unix.so 4 password required pam_pwquality.so retry=3 5 password required pam_unix.so shadow nullok use_authtok 6 session required pam_unix.so 7" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_authentication_and_authorization_in_rhel/restricting-domains-for-pam-services-using-sssd_configuring-authentication-and-authorization-in-rhel
Chapter 5. Mavenizing your application
Chapter 5. Mavenizing your application MTR provides the ability to generate an Apache Maven project structure based on the application provided. This will create a directory structure with the necessary Maven Project Object Model (POM) files that specify the appropriate dependencies. Note that this feature is not intended to create a final solution for your project. It is meant to give you a starting point and identify the necessary dependencies and APIs for your application. Your project may require further customization. 5.1. Generating the Maven project structure You can generate a Maven project structure for the provided application by passing in the --mavenize flag when executing MTR. The following example runs MTR using the jee-example-app-1.0.0.ear test application: This generates the Maven project structure in the /path/to/output/mavenized directory. Note You can only use the --mavenize option when providing a compiled application for the --input argument. This feature is not available when running MTR against source code. You can also use the --mavenizeGroupId option to specify the <groupId> to be used for the POM files. If unspecified, MTR will attempt to identify an appropriate <groupId> for the application, or will default to com.mycompany.mavenized . 5.2. Reviewing the Maven project structure The /path/to/output/mavenized/<APPLICATION_NAME>/ directory contains the following items: A root POM file. This is the pom.xml file at the top-level directory. A BOM file. This is the POM file in the directory ending with -bom . One or more application POM files. Each module has its POM file in a directory named after the archive. The example jee-example-app-1.0.0.ear application is an EAR archive that contains a WAR and several JARs. There is a separate directory created for each of these artifacts. Below is the Maven project structure created for this application. /path/to/output/mavenized/jee-example-app/ jee-example-app-bom/pom.xml jee-example-app-ear/pom.xml jee-example-services2-jar/pom.xml jee-example-services-jar/pom.xml jee-example-web-war/pom.xml pom.xml Review each of the generated files and customize as appropriate for your project. To learn more about Maven POM files, see the Introduction to the POM section of the Apache Maven documentation. Root POM file The root POM file for the jee-example-app-1.0.0.ear application can be found at /path/to/output/mavenized/jee-example-app/pom.xml . This file identifies the directories for all of the project modules. The following modules are listed in the root POM for the example jee-example-app-1.0.0.ear application. <modules> <module>jee-example-app-bom</module> <module>jee-example-services2-jar</module> <module>jee-example-services-jar</module> <module>jee-example-web-war</module> <module>jee-example-app-ear</module> </modules> Note Be sure to reorder the list of modules if necessary so that they are listed in an appropriate build order for your project. The root POM is also configured to use the Red Hat JBoss Enterprise Application Platform Maven repository to download project dependencies. BOM file The Bill of Materials (BOM) file is generated in the directory ending in -bom . For the example jee-example-app-1.0.0.ear application, the BOM file can be found at /path/to/output/mavenized/jee-example-app/jee-example-app-bom/pom.xml . The purpose of this BOM is to have the versions of third-party dependencies used by the project defined in one place. 
For more information on using a BOM, see the Introduction to the dependency mechanism section of the Apache Maven documentation. The following dependencies are listed in the BOM for the example jee-example-app-1.0.0.ear application <dependencyManagement> <dependencies> <dependency> <groupId>log4j</groupId> <artifactId>log4j</artifactId> <version>1.2.6</version> </dependency> <dependency> <groupId>commons-lang</groupId> <artifactId>commons-lang</artifactId> <version>2.5</version> </dependency> </dependencies> </dependencyManagement> Application POM files Each application module that can be mavenized has a separate directory containing its POM file. The directory name contains the name of the archive and ends in a -jar , -war , or -ear suffix, depending on the archive type. Each application POM file lists that module's dependencies, including: Third-party libraries Java EE APIs Application submodules For example, the POM file for the jee-example-app-1.0.0.ear EAR, /path/to/output/mavenized/jee-example-app/jee-example-app-ear/pom.xml , lists the following dependencies. <dependencies> <dependency> <groupId>log4j</groupId> <artifactId>log4j</artifactId> <version>1.2.6</version> </dependency> <dependency> <groupId>org.jboss.seam</groupId> <artifactId>jee-example-web-war</artifactId> <version>1.0</version> <type>war</type> </dependency> <dependency> <groupId>org.jboss.seam</groupId> <artifactId>jee-example-services-jar</artifactId> <version>1.0</version> </dependency> <dependency> <groupId>org.jboss.seam</groupId> <artifactId>jee-example-services2-jar</artifactId> <version>1.0</version> </dependency> </dependencies>
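After reviewing and adjusting the generated POM files, you can check that the project structure builds. Assuming Maven is installed and the modules are listed in a valid build order, a build from the root directory of the example application would look like this:
cd /path/to/output/mavenized/jee-example-app/
mvn clean install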
[ "<MTR_HOME>/mta-cli --input /path/to/jee-example-app-1.0.0.ear --output /path/to/output --target eap:6 --packages com.acme org.apache --mavenize", "/path/to/output/mavenized/jee-example-app/ jee-example-app-bom/pom.xml jee-example-app-ear/pom.xml jee-example-services2-jar/pom.xml jee-example-services-jar/pom.xml jee-example-web-war/pom.xml pom.xml", "<modules> <module>jee-example-app-bom</module> <module>jee-example-services2-jar</module> <module>jee-example-services-jar</module> <module>jee-example-web-war</module> <module>jee-example-app-ear</module> </modules>", "<dependencyManagement> <dependencies> <dependency> <groupId>log4j</groupId> <artifactId>log4j</artifactId> <version>1.2.6</version> </dependency> <dependency> <groupId>commons-lang</groupId> <artifactId>commons-lang</artifactId> <version>2.5</version> </dependency> </dependencies> </dependencyManagement>", "<dependencies> <dependency> <groupId>log4j</groupId> <artifactId>log4j</artifactId> <version>1.2.6</version> </dependency> <dependency> <groupId>org.jboss.seam</groupId> <artifactId>jee-example-web-war</artifactId> <version>1.0</version> <type>war</type> </dependency> <dependency> <groupId>org.jboss.seam</groupId> <artifactId>jee-example-services-jar</artifactId> <version>1.0</version> </dependency> <dependency> <groupId>org.jboss.seam</groupId> <artifactId>jee-example-services2-jar</artifactId> <version>1.0</version> </dependency> </dependencies>" ]
https://docs.redhat.com/en/documentation/migration_toolkit_for_runtimes/1.2/html/cli_guide/mavenize_cli-guide
Monitoring
Monitoring OpenShift Container Platform 4.14 Configuring and using the monitoring stack in OpenShift Container Platform Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/monitoring/index
Chapter 12. Database cleaning
Chapter 12. Database cleaning Warning The content for this feature is available in this release as a Documentation Preview , and therefore is not fully verified by Red Hat. Use it only for testing, and do not use in a production environment. The Compute service includes an administrative tool, nova-manage , that you can use to perform deployment, upgrade, clean-up, and maintenance-related tasks, such as applying database schemas, performing online data migrations during an upgrade, and managing and cleaning up the database. The following database management tasks are performed by default: Archives deleted instance records by moving the deleted rows from the production tables to shadow tables. Purges deleted rows from the shadow tables after archiving is complete.
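The archive and purge tasks described above correspond to two nova-manage subcommands. The following invocations are a generic sketch for reference only; in this release the tasks run for you by default, the date value is a placeholder, and the exact flags available can vary between releases.
# Move deleted instance rows into the shadow tables, repeating until no rows remain, and purge the archived rows in the same run
nova-manage db archive_deleted_rows --before "2024-01-01" --until-complete --purge --verbose
# Alternatively, purge previously archived rows from the shadow tables as a separate step
nova-manage db purge --before "2024-01-01" --verbose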
null
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/configuring_the_compute_service_for_instance_creation/assembly_database-cleaning_database-cleaning
4.113. kdeaccessibility
4.113. kdeaccessibility 4.113.1. RHBA-2011:1173 - kdeaccessibility bug fix update Updated kdeaccessibility packages that fix one bug are now available for Red Hat Enterprise Linux 6. KDE is a graphical desktop environment for the X Window System. Kdeaccessibility contains KDE accessibility utilities, including KMouseTool (to initiate mouse clicks), KMag (to magnify parts of the screen), and KMouth & KTTS (a text-to-speech utility). Bug Fix BZ# 587897 Prior to this update, the icon for the kttsmsg program incorrectly appeared in GNOME Application's Accessories menu. This bug has been fixed in this update so that the icon is now no longer displayed, as expected. All users of kdeaccessibility are advised to upgrade to these updated packages, which fix this bug.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/kdeaccessibility
Chapter 1. About hardware accelerators
Chapter 1. About hardware accelerators Specialized hardware accelerators play a key role in the emerging generative artificial intelligence and machine learning (AI/ML) industry. Specifically, hardware accelerators are essential to the training and serving of large language and other foundational models that power this new technology. Data scientists, data engineers, ML engineers, and developers can take advantage of the specialized hardware acceleration for data-intensive transformations and model development and serving. Much of that ecosystem is open source, with a number of contributing partners and open source foundations. Red Hat OpenShift Container Platform provides support for cards and peripheral hardware that add processing units that comprise hardware accelerators: Graphical processing units (GPUs) Neural processing units (NPUs) Application-specific integrated circuits (ASICs) Data processing units (DPUs) Specialized hardware accelerators provide a rich set of benefits for AI/ML development: One platform for all A collaborative environment for developers, data engineers, data scientists, and DevOps Extended capabilities with Operators Operators allow for bringing AI/ML capabilities to OpenShift Container Platform Hybrid-cloud support On-premise support for model development, delivery, and deployment Support for AI/ML workloads Model testing, iteration, integration, promotion, and serving into production as services Red Hat provides an optimized platform to enable these specialized hardware accelerators in Red Hat Enterprise Linux (RHEL) and OpenShift Container Platform platforms at the Linux (kernel and userspace) and Kubernetes layers. To do this, Red Hat combines the proven capabilities of Red Hat OpenShift AI and Red Hat OpenShift Container Platform in a single enterprise-ready AI application platform. Hardware Operators use the operating framework of a Kubernetes cluster to enable the required accelerator resources. You can also deploy the provided device plugin manually or as a daemon set. This plugin registers the GPU in the cluster. Certain specialized hardware accelerators are designed to work within disconnected environments where a secure environment must be maintained for development and testing. 1.1. Hardware accelerators Red Hat OpenShift Container Platform enables the following hardware accelerators: NVIDIA GPU AMD Instinct(R) GPU Intel(R) Gaudi(R) Additional resources Introduction to Red Hat OpenShift AI NVIDIA GPU Operator on Red Hat OpenShift Container Platform AMD Instinct Accelerators Intel Gaudi AI Accelerators
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/hardware_accelerators/about-hardware-accelerators
Chapter 12. Live migration
Chapter 12. Live migration 12.1. Virtual machine live migration 12.1.1. About live migration Live migration is the process of moving a running virtual machine instance (VMI) to another node in the cluster without interrupting the virtual workload or access. If a VMI uses the LiveMigrate eviction strategy, it automatically migrates when the node that the VMI runs on is placed into maintenance mode. You can also manually start live migration by selecting a VMI to migrate. You can use live migration if the following conditions are met: Shared storage with ReadWriteMany (RWX) access mode. Sufficient RAM and network bandwidth. If the virtual machine uses a host model CPU, the nodes must support the virtual machine's host model CPU. By default, live migration traffic is encrypted using Transport Layer Security (TLS). 12.1.2. Additional resources Migrating a virtual machine instance to another node Live migration metrics Live migration limiting Customizing the storage profile VM migration tuning 12.2. Live migration limits and timeouts Apply live migration limits and timeouts so that migration processes do not overwhelm the cluster. Configure these settings by editing the HyperConverged custom resource (CR). 12.2.1. Configuring live migration limits and timeouts Configure live migration limits and timeouts for the cluster by updating the HyperConverged custom resource (CR), which is located in the openshift-cnv namespace. Procedure Edit the HyperConverged CR and add the necessary live migration parameters. USD oc edit hco -n openshift-cnv kubevirt-hyperconverged Example configuration file apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: liveMigrationConfig: 1 bandwidthPerMigration: 64Mi completionTimeoutPerGiB: 800 parallelMigrationsPerCluster: 5 parallelOutboundMigrationsPerNode: 2 progressTimeout: 150 1 In this example, the spec.liveMigrationConfig array contains the default values for each field. Note You can restore the default value for any spec.liveMigrationConfig field by deleting that key/value pair and saving the file. For example, delete progressTimeout: <value> to restore the default progressTimeout: 150 . 12.2.2. Cluster-wide live migration limits and timeouts Table 12.1. Migration parameters Parameter Description Default parallelMigrationsPerCluster Number of migrations running in parallel in the cluster. 5 parallelOutboundMigrationsPerNode Maximum number of outbound migrations per node. 2 bandwidthPerMigration Bandwidth limit of each migration, where the value is the quantity of bytes per second. For example, a value of 2048Mi means 2048 MiB/s. 0 [1] completionTimeoutPerGiB The migration is canceled if it has not completed in this time, in seconds per GiB of memory. For example, a virtual machine instance with 6GiB memory times out if it has not completed migration in 4800 seconds. If the Migration Method is BlockMigration , the size of the migrating disks is included in the calculation. 800 progressTimeout The migration is canceled if memory copy fails to make progress in this time, in seconds. 150 The default value of 0 is unlimited. 12.3. Migrating a virtual machine instance to another node Manually initiate a live migration of a virtual machine instance to another node using either the web console or the CLI. Note If a virtual machine uses a host model CPU, you can perform live migration of that virtual machine only between nodes that support its host CPU model. 12.3.1. 
Initiating live migration of a virtual machine instance in the web console Migrate a running virtual machine instance to a different node in the cluster. Note The Migrate action is visible to all users but only admin users can initiate a virtual machine migration. Procedure In the OpenShift Container Platform console, click Virtualization VirtualMachines from the side menu. You can initiate the migration from this page, which makes it easier to perform actions on multiple virtual machines on the same page, or from the VirtualMachine details page where you can view comprehensive details of the selected virtual machine: Click the Options menu next to the virtual machine and select Migrate . Click the virtual machine name to open the VirtualMachine details page and click Actions Migrate . Click Migrate to migrate the virtual machine to another node. 12.3.1.1. Monitoring live migration by using the web console You can monitor the progress of all live migrations on the Overview Migrations tab in the web console. You can view the migration metrics of a virtual machine on the VirtualMachine details Metrics tab in the web console. 12.3.2. Initiating live migration of a virtual machine instance in the CLI Initiate a live migration of a running virtual machine instance by creating a VirtualMachineInstanceMigration object in the cluster and referencing the name of the virtual machine instance. Procedure Create a VirtualMachineInstanceMigration configuration file for the virtual machine instance to migrate. For example, vmi-migrate.yaml : apiVersion: kubevirt.io/v1 kind: VirtualMachineInstanceMigration metadata: name: migration-job spec: vmiName: vmi-fedora Create the object in the cluster by running the following command: USD oc create -f vmi-migrate.yaml The VirtualMachineInstanceMigration object triggers a live migration of the virtual machine instance. This object exists in the cluster for as long as the virtual machine instance is running, unless manually deleted. 12.3.2.1. Monitoring live migration of a virtual machine instance in the CLI The status of the virtual machine migration is stored in the Status component of the VirtualMachineInstance configuration. Procedure Use the oc describe command on the migrating virtual machine instance: USD oc describe vmi vmi-fedora Example output # ... Status: Conditions: Last Probe Time: <nil> Last Transition Time: <nil> Status: True Type: LiveMigratable Migration Method: LiveMigration Migration State: Completed: true End Timestamp: 2018-12-24T06:19:42Z Migration UID: d78c8962-0743-11e9-a540-fa163e0c69f1 Source Node: node2.example.com Start Timestamp: 2018-12-24T06:19:35Z Target Node: node1.example.com Target Node Address: 10.9.0.18:43891 Target Node Domain Detected: true 12.3.3. Additional resources Cancelling the live migration of a virtual machine instance 12.4. Migrating a virtual machine over a dedicated additional network You can configure a dedicated Multus network for live migration. A dedicated network minimizes the effects of network saturation on tenant workloads during live migration. 12.4.1. Configuring a dedicated secondary network for virtual machine live migration To configure a dedicated secondary network for live migration, you must first create a bridge network attachment definition for the openshift-cnv namespace by using the CLI. Then, add the name of the NetworkAttachmentDefinition object to the HyperConverged custom resource (CR). Prerequisites You installed the OpenShift CLI ( oc ).
You logged in to the cluster as a user with the cluster-admin role. The Multus Container Network Interface (CNI) plugin is installed on the cluster. Every node on the cluster has at least two Network Interface Cards (NICs), and the NICs to be used for live migration are connected to the same VLAN. The virtual machine (VM) is running with the LiveMigrate eviction strategy. Procedure Create a NetworkAttachmentDefinition manifest. Example configuration file apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: my-secondary-network 1 namespace: openshift-cnv 2 spec: config: '{ "cniVersion": "0.3.1", "name": "migration-bridge", "type": "macvlan", "master": "eth1", 3 "mode": "bridge", "ipam": { "type": "whereabouts", 4 "range": "10.200.5.0/24" 5 } }' 1 The name of the NetworkAttachmentDefinition object. 2 The namespace where the NetworkAttachmentDefinition object resides. This must be openshift-cnv . 3 The name of the NIC to be used for live migration. 4 The name of the CNI plugin that provides the network for this network attachment definition. 5 The IP address range for the secondary network. This range must not have any overlap with the IP addresses of the main network. Open the HyperConverged CR in your default editor by running the following command: oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Add the name of the NetworkAttachmentDefinition object to the spec.liveMigrationConfig stanza of the HyperConverged CR. For example: Example configuration file apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: liveMigrationConfig: completionTimeoutPerGiB: 800 network: my-secondary-network 1 parallelMigrationsPerCluster: 5 parallelOutboundMigrationsPerNode: 2 progressTimeout: 150 # ... 1 The name of the Multus NetworkAttachmentDefinition object to be used for live migrations. Save your changes and exit the editor. The virt-handler pods restart and connect to the secondary network. Verification When the node that the virtual machine runs on is placed into maintenance mode, the VM automatically migrates to another node in the cluster. You can verify that the migration occurred over the secondary network and not the default pod network by checking the target IP address in the virtual machine instance (VMI) metadata. oc get vmi <vmi_name> -o jsonpath='{.status.migrationState.targetNodeAddress}' 12.4.2. Selecting a dedicated network by using the web console You can select a dedicated network for live migration by using the OpenShift Container Platform web console. Prerequisites You configured a Multus network for live migration. Procedure Navigate to Virtualization > Overview in the OpenShift Container Platform web console. Click the Settings tab and then click Live migration . Select the network from the Live migration network list. 12.4.3. Additional resources Live migration limits and timeouts 12.5. Cancelling the live migration of a virtual machine instance Cancel the live migration so that the virtual machine instance remains on the original node. You can cancel a live migration from either the web console or the CLI. 12.5.1. Cancelling live migration of a virtual machine instance in the web console You can cancel the live migration of a virtual machine instance in the web console. Procedure In the OpenShift Container Platform console, click Virtualization VirtualMachines from the side menu. Click the Options menu beside a virtual machine and select Cancel Migration . 12.5.2. 
Cancelling live migration of a virtual machine instance in the CLI Cancel the live migration of a virtual machine instance by deleting the VirtualMachineInstanceMigration object associated with the migration. Procedure Delete the VirtualMachineInstanceMigration object that triggered the live migration, migration-job in this example: USD oc delete vmim migration-job 12.6. Configuring virtual machine eviction strategy The LiveMigrate eviction strategy ensures that a virtual machine instance is not interrupted if the node is placed into maintenance or drained. Virtual machine instances with this eviction strategy will be live migrated to another node. 12.6.1. Configuring custom virtual machines with the LiveMigration eviction strategy You only need to configure the LiveMigration eviction strategy on custom virtual machines. Common templates have this eviction strategy configured by default. Procedure Add the evictionStrategy: LiveMigrate option to the spec.template.spec section in the virtual machine configuration file. This example uses oc edit to update the relevant snippet of the VirtualMachine configuration file: USD oc edit vm <custom-vm> -n <my-namespace> apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: custom-vm spec: template: spec: evictionStrategy: LiveMigrate # ... Restart the virtual machine for the update to take effect: USD virtctl restart <custom-vm> -n <my-namespace> 12.7. Configuring live migration policies You can define different migration configurations for specified groups of virtual machine instances (VMIs) by using a live migration policy. Important Live migration policy is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . To configure a live migration policy by using the web console, see the MigrationPolicies page documentation . 12.7.1. Configuring a live migration policy from the command line Use the MigrationPolicy custom resource definition (CRD) to define migration policies for one or more groups of selected virtual machine instances (VMIs). You can specify groups of VMIs by using any combination of the following: Virtual machine instance labels such as size , os , gpu , and other VMI labels. Namespace labels such as priority , bandwidth , hpc-workload , and other namespace labels. For the policy to apply to a specific group of VMIs, all labels on the group of VMIs must match the labels in the policy. Note If multiple live migration policies apply to a VMI, the policy with the highest number of matching labels takes precedence. If multiple policies meet this criterion, the policies are sorted by lexicographic order of the matching labels keys, and the first one in that order takes precedence. Procedure Create a MigrationPolicy CRD for your specified group of VMIs.
The following example YAML configures a group with the labels hpc-workloads:true , xyz-workloads-type: "" , workload-type: db , and operating-system: "" : apiVersion: migrations.kubevirt.io/v1beta1 kind: MigrationPolicy metadata: name: my-awesome-policy spec: # Migration Configuration allowAutoConverge: true bandwidthPerMigration: 217Ki completionTimeoutPerGiB: 23 allowPostCopy: false # Matching to VMIs selectors: namespaceSelector: 1 hpc-workloads: "True" xyz-workloads-type: "" virtualMachineInstanceSelector: 2 workload-type: "db" operating-system: "" 1 Use namespaceSelector to define a group of VMIs by using namespace labels. 2 Use virtualMachineInstanceSelector to define a group of VMIs by using VMI labels.
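As a quick illustration of how the selectors above are matched, the following commands sketch how you might apply the policy and label a namespace so that it satisfies the namespaceSelector. The namespace name is hypothetical, and the VMI labels matched by the virtualMachineInstanceSelector are normally inherited from the labels defined in the virtual machine template metadata.
# Apply the policy shown above (assuming it is saved as my-awesome-policy.yaml)
oc create -f my-awesome-policy.yaml
# Confirm that the policy exists
oc get migrationpolicies
# Label a namespace so that it matches the policy's namespaceSelector
oc label namespace my-hpc-namespace hpc-workloads=True xyz-workloads-type=
# List VMIs in that namespace that carry the matching workload label
oc get vmi -n my-hpc-namespace -l workload-type=db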
[ "oc edit hco -n openshift-cnv kubevirt-hyperconverged", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: liveMigrationConfig: 1 bandwidthPerMigration: 64Mi completionTimeoutPerGiB: 800 parallelMigrationsPerCluster: 5 parallelOutboundMigrationsPerNode: 2 progressTimeout: 150", "apiVersion: kubevirt.io/v1 kind: VirtualMachineInstanceMigration metadata: name: migration-job spec: vmiName: vmi-fedora", "oc create -f vmi-migrate.yaml", "oc describe vmi vmi-fedora", "Status: Conditions: Last Probe Time: <nil> Last Transition Time: <nil> Status: True Type: LiveMigratable Migration Method: LiveMigration Migration State: Completed: true End Timestamp: 2018-12-24T06:19:42Z Migration UID: d78c8962-0743-11e9-a540-fa163e0c69f1 Source Node: node2.example.com Start Timestamp: 2018-12-24T06:19:35Z Target Node: node1.example.com Target Node Address: 10.9.0.18:43891 Target Node Domain Detected: true", "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: my-secondary-network 1 namespace: openshift-cnv 2 spec: config: '{ \"cniVersion\": \"0.3.1\", \"name\": \"migration-bridge\", \"type\": \"macvlan\", \"master\": \"eth1\", 3 \"mode\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", 4 \"range\": \"10.200.5.0/24\" 5 } }'", "edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: liveMigrationConfig: completionTimeoutPerGiB: 800 network: my-secondary-network 1 parallelMigrationsPerCluster: 5 parallelOutboundMigrationsPerNode: 2 progressTimeout: 150", "get vmi <vmi_name> -o jsonpath='{.status.migrationState.targetNodeAddress}'", "oc delete vmim migration-job", "oc edit vm <custom-vm> -n <my-namespace>", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: custom-vm spec: template: spec: evictionStrategy: LiveMigrate", "virtctl restart <custom-vm> -n <my-namespace>", "apiVersion: migrations.kubevirt.io/v1beta1 kind: MigrationPolicy metadata: name: my-awesome-policy spec: # Migration Configuration allowAutoConverge: true bandwidthPerMigration: 217Ki completionTimeoutPerGiB: 23 allowPostCopy: false # Matching to VMIs selectors: namespaceSelector: 1 hpc-workloads: \"True\" xyz-workloads-type: \"\" virtualMachineInstanceSelector: 2 workload-type: \"db\" operating-system: \"\"" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/virtualization/live-migration
27.2. Disabling Console Program Access
27.2. Disabling Console Program Access To disable access by users to console programs, run the following command as root: In environments where the console is otherwise secured (BIOS and boot loader passwords are set, Ctrl + Alt + Delete is disabled, the power and reset switches are disabled, and so forth), you may not want to allow any user at the console to run poweroff , halt , and reboot , which are accessible from the console by default. To remove these abilities, run the following commands as root:
[ "rm -f /etc/security/console.apps/*", "rm -f /etc/security/console.apps/poweroff rm -f /etc/security/console.apps/halt rm -f /etc/security/console.apps/reboot" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/Console_Access-Disabling_Console_Program_Access
B.3. The guest virtual machine cannot be started: internal error guest CPU is not compatible with host CPU
B.3. The guest virtual machine cannot be started: internal error guest CPU is not compatible with host CPU Symptom Running on an Intel Core i7 processor (which virt-manager refers to as Nehalem , or the older Core 2 Duo, referred to as Penryn ), a KVM guest (or domain) is created using virt-manager . After installation, the guest's processor is changed to match the host's CPU. The guest is then unable to start and reports this error: Additionally, clicking Copy host CPU configuration in virt-manager shows Pentium III instead of Nehalem or Penryn . Investigation The /usr/share/libvirt/cpu_map.xml file lists the flags that define each CPU model. The Nehalem and Penryn definitions contain this: <feature name='nx'/> As a result, the NX (or No eXecute ) flag needs to be presented to identify the CPU as Nehalem or Penryn . However, in /proc/cpuinfo , this flag is missing. Solution Nearly all new BIOSes allow enabling or disabling of the No eXecute bit. However, if disabled, some CPUs do not report this flag and thus libvirt detects a different CPU. Enabling this functionality instructs libvirt to report the correct CPU. Refer to your hardware documentation for further instructions on this subject.
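After you enable the No eXecute option in the BIOS and reboot the host, you can confirm that the flag is now exposed before retrying the guest. The following checks are a generic sketch and not part of the original procedure:
# Prints "nx" if the flag is present in the CPU flags
grep -wo nx /proc/cpuinfo | sort -u
# Shows which CPU model libvirt now detects for the host
virsh capabilities | grep -i '<model>'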
[ "2012-02-06 17:49:15.985+0000: 20757: error : qemuBuildCpuArgStr:3565 : internal error guest CPU is not compatible with host CPU", "<feature name='nx'/>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_host_configuration_and_guest_installation_guide/App_Domain_Processor
Scaling deployments with Compute cells
Scaling deployments with Compute cells Red Hat OpenStack Platform 17.1 A guide to creating and configuring a multi-cell overcloud OpenStack Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/scaling_deployments_with_compute_cells/index
Chapter 9. Upgrading the undercloud operating system
Chapter 9. Upgrading the undercloud operating system You must upgrade the undercloud operating system from Red Hat Enterprise Linux 8.4 to Red Hat Enterprise Linux 9.2. The system upgrade performs the following tasks: Ensures that network interface naming remains consistent after the system upgrade Uses Leapp to upgrade RHEL in-place Reboots the undercloud 9.1. Setting the SSH root permission parameter on the undercloud The Leapp upgrade checks whether the PermitRootLogin parameter exists in the /etc/ssh/sshd_config file. You must explicitly set this parameter to either yes or no . For security purposes, set this parameter to no to disable SSH access to the root user on the undercloud. Procedure Log in to the undercloud as the stack user. Check the /etc/ssh/sshd_config file for the PermitRootLogin parameter: If the parameter is not in the /etc/ssh/sshd_config file, edit the file and set the PermitRootLogin parameter: Save the file. 9.2. Validating your SSH key size Starting with Red Hat Enterprise Linux (RHEL) 9.1, a minimum SSH key size of 2048 bits is required. If your current SSH key on Red Hat OpenStack Platform (RHOSP) director is less than 2048 bits, you can lose access to the overcloud. You must verify that your SSH key meets the required bit size. Procedure Validate your SSH key size: Example output: If your SSH key is less than 2048 bits, you must rotate out the SSH key before continuing. For more information, see Updating SSH keys in your OpenStack environment in Hardening Red Hat OpenStack Platform . 9.3. Performing the undercloud system upgrade Upgrade your undercloud operating system to Red Hat Enterprise Linux (RHEL) 9.2. As part of this upgrade, you create a file named system_upgrade.yaml , which you use to enable the appropriate repositories and required Red Hat OpenStack Platform options and content to install Leapp. You use this file to also upgrade your control plane nodes and Compute nodes. For information about the duration and impact of this upgrade procedure, see Upgrade duration and impact . Procedure Log in to the undercloud as the stack user. Create a file named system_upgrade.yaml in your templates directory and include the following content: Note If your deployment includes Red Hat Ceph Storage nodes, you must add the CephLeappRepoInitCommand parameter and specify the source OS version of your Red Hat Ceph Storage nodes. For example: Add the LeappInitCommand parameter to your system_upgrade.yaml file to specify additional requirements applicable to your environment, for example, if you need to define role-based overrides: Important Removing the ruby-irb package is mandatory to avoid a conflict between the RHEL 8 ruby-irb directory and the RHEL 9 symlink. For more information, see the Red Hat Knowledgebase solution leapp upgrade RHEL8 to RHEL9 fails with error "rubygem-irb-1.3.5-160.el9_0.noarch conflicts with file from package ruby-irb-2.5.9-110.module+el8.6.0+15956+aa803fc1.noarch" . Note If your environment previously ran RHOSP 13.0 or earlier, during the system upgrade, a known issue causes GRUB to contain RHEL 7 entries instead of RHEL 8 entries. For more information, including a workaround, see the Red Hat Knowledgebase solution Openstack 16 to 17 FFU - During LEAPP upgrade UEFI systems do not boot due to invalid /boot/grub2/grub.cfg . 
If you are upgrading Red Hat Ceph Storage nodes that originated in RHOSP 16.1 and earlier, add the CephStorageLeappInitCommand parameter to remove fencing agents: If you use kernel-based NIC names, add the following parameter to the system_upgrade.yaml file to ensure that the NIC names persist throughout the upgrade process: Run the Leapp upgrade: Note If you need to run the Leapp upgrade again, you must first reset the repositories to RHEL 8. Reboot the undercloud:
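After the undercloud reboots, you can optionally confirm that the operating system upgrade succeeded before moving on. These commands are a sanity-check sketch and not part of the documented procedure:
# The release file should now report Red Hat Enterprise Linux 9.2
cat /etc/redhat-release
# The running kernel should be an el9 kernel
uname -r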
[ "sudo grep PermitRootLogin /etc/ssh/sshd_config", "PermitRootLogin no", "ssh-keygen -l -f /home/stack/overcloud-deploy/overcloud/ssh_private_key", "1024 SHA256:Xqz0Xz0/aJua6B3qRD7VsLr6n/V3zhmnGSkcFR6FlJw [email protected] (RSA)", "parameter_defaults: UpgradeLeappDevelSkip: \"LEAPP_UNSUPPORTED=1 LEAPP_DEVEL_SKIP_CHECK_OS_RELEASE=1 LEAPP_NO_NETWORK_RENAMING=1 LEAPP_DEVEL_TARGET_RELEASE=9.2\" UpgradeLeappDebug: false UpgradeLeappEnabled: true LeappActorsToRemove: ['checkifcfg','persistentnetnamesdisable','checkinstalledkernels','biosdevname'] LeappRepoInitCommand: | subscription-manager repos --disable=* subscription-manager repos --enable rhel-8-for-x86_64-baseos-tus-rpms --enable rhel-8-for-x86_64-appstream-tus-rpms --enable openstack-17.1-for-rhel-8-x86_64-rpms subscription-manager release --set=8.4 UpgradeLeappCommandOptions: \"--enablerepo=rhel-9-for-x86_64-baseos-eus-rpms --enablerepo=rhel-9-for-x86_64-appstream-eus-rpms --enablerepo=rhel-9-for-x86_64-highavailability-eus-rpms --enablerepo=openstack-17.1-for-rhel-9-x86_64-rpms --enablerepo=fast-datapath-for-rhel-9-x86_64-rpms\"", "CephLeappRepoInitCommand: subscription-manager release --set=8.6", "LeappInitCommand: | subscription-manager repos --disable=* subscription-manager release --unset subscription-manager repos --enable=rhel-9-for-x86_64-baseos-eus-rpms --enable=rhel-9-for-x86_64-appstream-eus-rpms --enable=rhel-9-for-x86_64-highavailability-eus-rpms --enable=openstack-17.1-for-rhel-9-x86_64-rpms --enable=fast-datapath-for-rhel-9-x86_64-rpms leapp answer --add --section check_vdo.confirm=True dnf -y remove irb", "LeappInitCommand: | dnf -y remove irb CephStorageLeappInitCommand: | dnf -y remove irb fence-agents-common", "parameter_defaults: NICsPrefixesToUdev: ['en']", "openstack undercloud upgrade --yes --system-upgrade /home/stack/system_upgrade.yaml", "sudo reboot" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/framework_for_upgrades_16.2_to_17.1/upgrading-the-undercloud-operating-system
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation If you have a suggestion to improve this documentation, or find an error, you can contact technical support at https://access.redhat.com to open a request. Important Disclaimer : Links contained in this document to external websites are provided for convenience only. Red Hat has not reviewed the links and is not responsible for the content or its availability. The inclusion of any link to an external website does not imply endorsement by Red Hat of the website or their entities, products or services. You agree that Red Hat is not responsible or liable for any loss or expenses that may result due to your use of (or reliance on) the external site or content.
null
https://docs.redhat.com/en/documentation/ansible_on_clouds/2.x/html/red_hat_ansible_automation_platform_on_microsoft_azure_guide/providing-feedback
Validation and troubleshooting
Validation and troubleshooting OpenShift Container Platform 4.17 Validating and troubleshooting an OpenShift Container Platform installation Red Hat OpenShift Documentation Team
[ "cat <install_dir>/.openshift_install.log", "time=\"2020-12-03T09:50:47Z\" level=info msg=\"Install complete!\" time=\"2020-12-03T09:50:47Z\" level=info msg=\"To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'\" time=\"2020-12-03T09:50:47Z\" level=info msg=\"Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com\" time=\"2020-12-03T09:50:47Z\" level=info msg=\"Login to the console with user: \\\"kubeadmin\\\", and password: \\\"password\\\"\" time=\"2020-12-03T09:50:47Z\" level=debug msg=\"Time elapsed per stage:\" time=\"2020-12-03T09:50:47Z\" level=debug msg=\" Infrastructure: 6m45s\" time=\"2020-12-03T09:50:47Z\" level=debug msg=\"Bootstrap Complete: 11m30s\" time=\"2020-12-03T09:50:47Z\" level=debug msg=\" Bootstrap Destroy: 1m5s\" time=\"2020-12-03T09:50:47Z\" level=debug msg=\" Cluster Operators: 17m31s\" time=\"2020-12-03T09:50:47Z\" level=info msg=\"Time elapsed: 37m26s\"", "oc adm node-logs <node_name> -u crio", "Mar 17 02:52:50 ip-10-0-138-140.ec2.internal crio[1366]: time=\"2021-08-05 10:33:21.594930907Z\" level=info msg=\"Pulling image: quay.io/openshift-release-dev/ocp-release:4.10.0-ppc64le\" id=abcd713b-d0e1-4844-ac1c-474c5b60c07c name=/runtime.v1alpha2.ImageService/PullImage Mar 17 02:52:50 ip-10-0-138-140.ec2.internal crio[1484]: time=\"2021-03-17 02:52:50.194341109Z\" level=info msg=\"Trying to access \\\"li0317gcp1.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release@sha256:1926eae7cacb9c00f142ec98b00628970e974284b6ddaf9a6a086cb9af7a6c31\\\"\" Mar 17 02:52:50 ip-10-0-138-140.ec2.internal crio[1484]: time=\"2021-03-17 02:52:50.226788351Z\" level=info msg=\"Trying to access \\\"li0317gcp1.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release@sha256:1926eae7cacb9c00f142ec98b00628970e974284b6ddaf9a6a086cb9af7a6c31\\\"\"", "Trying to access \\\"li0317gcp1.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release@sha256:1926eae7cacb9c00f142ec98b00628970e974284b6ddaf9a6a086cb9af7a6c31\\\" Trying to access \\\"li0317gcp2.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release@sha256:1926eae7cacb9c00f142ec98b00628970e974284b6ddaf9a6a086cb9af7a6c31\\\"", "oc get clusterversion", "NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.6.4 True False 6m25s Cluster version is 4.6.4", "oc get clusteroperators.config.openshift.io", "oc describe clusterversion", "oc get clusterversion -o jsonpath='{.items[0].spec}{\"\\n\"}'", "{\"channel\":\"stable-4.6\",\"clusterID\":\"245539c1-72a3-41aa-9cec-72ed8cf25c5c\"}", "oc adm upgrade", "Cluster version is 4.6.4 Updates: VERSION IMAGE 4.6.6 quay.io/openshift-release-dev/ocp-release@sha256:c7e8f18e8116356701bd23ae3a23fb9892dd5ea66c8300662ef30563d7104f39", "oc get cloudcredentials cluster -o=jsonpath={.spec.credentialsMode}", "Manual", "oc get secrets -n kube-system <secret_name>", "Error from server (NotFound): secrets \"aws-creds\" not found", "oc get authentication cluster -o jsonpath --template='{ .spec.serviceAccountIssuer }'", "oc get secrets -n openshift-image-registry installer-cloud-credentials -o jsonpath='{.data}'", "oc get pods -n openshift-cloud-credential-operator", "NAME READY STATUS RESTARTS AGE cloud-credential-operator-59cf744f78-r8pbq 2/2 Running 2 71m pod-identity-webhook-548f977b4c-859lz 1/1 Running 1 70m", "oc get nodes", "NAME STATUS ROLES AGE VERSION compute-1.example.com Ready worker 33m v1.30.3 control-plane-1.example.com Ready master 41m v1.30.3 
control-plane-2.example.com Ready master 45m v1.30.3 compute-2.example.com Ready worker 38m v1.30.3 compute-3.example.com Ready worker 33m v1.30.3 control-plane-3.example.com Ready master 41m v1.30.3", "oc adm top nodes", "NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% compute-1.example.com 128m 8% 1132Mi 16% control-plane-1.example.com 801m 22% 3471Mi 23% control-plane-2.example.com 1718m 49% 6085Mi 40% compute-2.example.com 935m 62% 5178Mi 75% compute-3.example.com 111m 7% 1131Mi 16% control-plane-3.example.com 942m 26% 4100Mi 27%", "./openshift-install gather bootstrap --dir <installation_directory> 1", "./openshift-install gather bootstrap --dir <installation_directory> \\ 1 --bootstrap <bootstrap_address> \\ 2 --master <master_1_address> \\ 3 --master <master_2_address> \\ 4 --master <master_3_address> 5", "INFO Pulling debug logs from the bootstrap machine INFO Bootstrap gather logs captured here \"<installation_directory>/log-bundle-<timestamp>.tar.gz\"", "journalctl -b -f -u bootkube.service", "for pod in USD(sudo podman ps -a -q); do sudo podman logs USDpod; done", "tail -f /var/lib/containers/storage/overlay-containers/*/userdata/ctr.log", "journalctl -b -f -u kubelet.service -u crio.service", "sudo tail -f /var/log/containers/*", "oc adm node-logs --role=master -u kubelet", "oc adm node-logs --role=master --path=openshift-apiserver", "cat ~/<installation_directory>/.openshift_install.log 1", "./openshift-install create cluster --dir <installation_directory> --log-level debug 1", "./openshift-install destroy cluster --dir <installation_directory> 1", "rm -rf <installation_directory>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html-single/validation_and_troubleshooting/index
Chapter 74. Kubernetes Job
Chapter 74. Kubernetes Job Since Camel 2.23 Both producer and consumer are supported The Kubernetes Job component is one of the Kubernetes Components which provides a producer to execute kubernetes Job operations and a consumer to consume events related to Job objects. 74.1. Dependencies When using kubernetes-job with Red Hat build of Apache Camel for Spring Boot, use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kubernetes-starter</artifactId> </dependency> 74.2. Configuring Options Camel components are configured on two separate levels: component level endpoint level 74.2.1. Configuring Component Options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre-configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 74.2.2. Configuring Endpoint Options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders , which allows you to avoid hardcoding urls, port numbers, sensitive information, and other settings. In other words, placeholders allow you to externalize the configuration from your code, and give more flexibility and reuse. The following two sections list all the options, firstly for the component followed by the endpoint. 74.3. Component Options The Kubernetes Job component supports 4 options, which are listed below. Name Description Default Type kubernetesClient (common) Autowired To use an existing kubernetes client. KubernetesClient bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.
false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean 74.4. Endpoint Options The Kubernetes Job endpoint is configured using URI syntax: with the following path and query parameters: 74.4.1. Path Parameters (1 parameters) Name Description Default Type masterUrl (common) Required Kubernetes Master url. String 74.4.2. Query Parameters (33 parameters) Name Description Default Type apiVersion (common) The Kubernetes API Version to use. String dnsDomain (common) The dns domain, used for ServiceCall EIP. String kubernetesClient (common) Default KubernetesClient to use if provided. KubernetesClient namespace (common) The namespace. String portName (common) The port name, used for ServiceCall EIP. String portProtocol (common) The port protocol, used for ServiceCall EIP. tcp String crdGroup (consumer) The Consumer CRD Resource Group we would like to watch. String crdName (consumer) The Consumer CRD Resource name we would like to watch. String crdPlural (consumer) The Consumer CRD Resource Plural we would like to watch. String crdScope (consumer) The Consumer CRD Resource Scope we would like to watch. String crdVersion (consumer) The Consumer CRD Resource Version we would like to watch. String labelKey (consumer) The Consumer Label key when watching at some resources. String labelValue (consumer) The Consumer Label value when watching at some resources. String poolSize (consumer) The Consumer pool size. 1 int resourceName (consumer) The Consumer Resource Name we would like to watch. String bridgeErrorHandler (consumer (advanced)) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut ExchangePattern operation (producer) Producer operation to do on Kubernetes. String lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. 
false boolean connectionTimeout (advanced) Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer caCertData (security) The CA Cert Data. String caCertFile (security) The CA Cert File. String clientCertData (security) The Client Cert Data. String clientCertFile (security) The Client Cert File. String clientKeyAlgo (security) The Key Algorithm used by the client. String clientKeyData (security) The Client Key data. String clientKeyFile (security) The Client Key file. String clientKeyPassphrase (security) The Client Key Passphrase. String oauthToken (security) The Auth Token. String password (security) Password to connect to Kubernetes. String trustCerts (security) Define if the certs we used are trusted anyway or not. Boolean username (security) Username to connect to Kubernetes. String 74.5. Message Headers The Kubernetes Job component supports 5 message header(s), which are listed below: Name Description Default Type CamelKubernetesOperation (producer) Constant: KUBERNETES_OPERATION The Producer operation. String CamelKubernetesNamespaceName (producer) Constant: KUBERNETES_NAMESPACE_NAME The namespace name. String CamelKubernetesJobName (producer) Constant: KUBERNETES_JOB_NAME The Job name. String CamelKubernetesJobSpec (producer) Constant: KUBERNETES_JOB_SPEC The spec for a Job. JobSpec CamelKubernetesJobLabels (producer) Constant: KUBERNETES_JOB_LABELS The Job labels. Map 74.6. Supported producer operation listJob listJobByLabels getJob createJob updateJob deleteJob 74.7. Kubernetes Job Producer Examples listJob: this operation lists the jobs on a kubernetes cluster. from("direct:list"). toF("kubernetes-job:///?kubernetesClient=#kubernetesClient&operation=listJob"). to("mock:result"); This operation returns a List of Job from your cluster. listJobByLabels: this operation lists the jobs by labels on a kubernetes cluster. from("direct:listByLabels").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { Map<String, String> labels = new HashMap<>(); labels.put("key1", "value1"); labels.put("key2", "value2"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_JOB_LABELS, labels); } }). toF("kubernetes-job:///?kubernetesClient=#kubernetesClient&operation=listJobByLabels"). to("mock:result"); This operation returns a List of Jobs from your cluster, using a label selector (with key1 and key2, with value value1 and value2). createJob: This operation creates a job on a Kubernetes Cluster.
Example (see Create Job example for more information) import java.util.ArrayList; import java.util.Date; import java.util.HashMap; import java.util.List; import java.util.Map; import javax.inject.Inject; import org.apache.camel.Endpoint; import org.apache.camel.builder.RouteBuilder; import org.apache.camel.cdi.Uri; import org.apache.camel.component.kubernetes.KubernetesConstants; import org.apache.camel.component.kubernetes.KubernetesOperations; import io.fabric8.kubernetes.api.model.Container; import io.fabric8.kubernetes.api.model.ObjectMeta; import io.fabric8.kubernetes.api.model.PodSpec; import io.fabric8.kubernetes.api.model.PodTemplateSpec; import io.fabric8.kubernetes.api.model.batch.JobSpec; public class KubernetesCreateJob extends RouteBuilder { @Inject @Uri("timer:foo?delay=1000&repeatCount=1") private Endpoint inputEndpoint; @Inject @Uri("log:output") private Endpoint resultEndpoint; @Override public void configure() { // you can configure the route rule with Java DSL here from(inputEndpoint) .routeId("kubernetes-jobcreate-client") .process(exchange -> { exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_JOB_NAME, "camel-job"); //DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*') exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_NAMESPACE_NAME, "default"); Map<String, String> joblabels = new HashMap<String, String>(); joblabels.put("jobLabelKey1", "value1"); joblabels.put("jobLabelKey2", "value2"); joblabels.put("app", "jobFromCamelApp"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_JOB_LABELS, joblabels); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_JOB_SPEC, generateJobSpec()); }) .toF("kubernetes-job:///{{kubernetes-master-url}}?oauthToken={{kubernetes-oauth-token:}}&operation=" + KubernetesOperations.CREATE_JOB_OPERATION) .log("Job created:") .process(exchange -> { System.out.println(exchange.getIn().getBody()); }) .to(resultEndpoint); } private JobSpec generateJobSpec() { JobSpec js = new JobSpec(); PodTemplateSpec pts = new PodTemplateSpec(); PodSpec ps = new PodSpec(); ps.setRestartPolicy("Never"); ps.setContainers(generateContainers()); pts.setSpec(ps); ObjectMeta metadata = new ObjectMeta(); Map<String, String> annotations = new HashMap<String, String>(); annotations.put("jobMetadataAnnotation1", "random value"); metadata.setAnnotations(annotations); Map<String, String> podlabels = new HashMap<String, String>(); podlabels.put("podLabelKey1", "value1"); podlabels.put("podLabelKey2", "value2"); podlabels.put("app", "podFromCamelApp"); metadata.setLabels(podlabels); pts.setMetadata(metadata); js.setTemplate(pts); return js; } private List<Container> generateContainers() { Container container = new Container(); container.setName("pi"); container.setImage("perl"); List<String> command = new ArrayList<String>(); command.add("echo"); command.add("Job created from Apache Camel code at " + (new Date())); container.setCommand(command); List<Container> containers = new ArrayList<Container>(); containers.add(container); return containers; } } 74.8. Spring Boot Auto-Configuration The component supports 102 options, which are listed below. Name Description Default Type camel.cluster.kubernetes.attributes Custom service attributes. Map camel.cluster.kubernetes.cluster-labels Set the labels used to identify the pods composing the cluster. 
Map camel.cluster.kubernetes.config-map-name Set the name of the ConfigMap used to do optimistic locking (defaults to 'leaders'). String camel.cluster.kubernetes.connection-timeout-millis Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer camel.cluster.kubernetes.enabled Sets if the Kubernetes cluster service should be enabled or not, default is false. false Boolean camel.cluster.kubernetes.id Cluster Service ID. String camel.cluster.kubernetes.jitter-factor A jitter factor to apply in order to prevent all pods to call Kubernetes APIs in the same instant. Double camel.cluster.kubernetes.kubernetes-namespace Set the name of the Kubernetes namespace containing the pods and the configmap (autodetected by default). String camel.cluster.kubernetes.lease-duration-millis The default duration of the lease for the current leader. Long camel.cluster.kubernetes.master-url Set the URL of the Kubernetes master (read from Kubernetes client properties by default). String camel.cluster.kubernetes.order Service lookup order/priority. Integer camel.cluster.kubernetes.pod-name Set the name of the current pod (autodetected from container host name by default). String camel.cluster.kubernetes.renew-deadline-millis The deadline after which the leader must stop its services because it may have lost the leadership. Long camel.cluster.kubernetes.retry-period-millis The time between two subsequent attempts to check and acquire the leadership. It is randomized using the jitter factor. Long camel.component.kubernetes-config-maps.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-config-maps.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-config-maps.enabled Whether to enable auto configuration of the kubernetes-config-maps component. This is enabled by default. Boolean camel.component.kubernetes-config-maps.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-config-maps.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-custom-resources.autowired-enabled Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-custom-resources.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-custom-resources.enabled Whether to enable auto configuration of the kubernetes-custom-resources component. This is enabled by default. Boolean camel.component.kubernetes-custom-resources.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-custom-resources.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-deployments.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-deployments.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-deployments.enabled Whether to enable auto configuration of the kubernetes-deployments component. This is enabled by default. Boolean camel.component.kubernetes-deployments.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-deployments.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-events.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-events.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-events.enabled Whether to enable auto configuration of the kubernetes-events component. This is enabled by default. Boolean camel.component.kubernetes-events.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-events.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-hpa.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-hpa.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-hpa.enabled Whether to enable auto configuration of the kubernetes-hpa component. This is enabled by default. Boolean camel.component.kubernetes-hpa.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-hpa.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. 
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-job.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-job.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-job.enabled Whether to enable auto configuration of the kubernetes-job component. This is enabled by default. Boolean camel.component.kubernetes-job.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-job.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-namespaces.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-namespaces.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-namespaces.enabled Whether to enable auto configuration of the kubernetes-namespaces component. This is enabled by default. Boolean camel.component.kubernetes-namespaces.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-namespaces.lazy-start-producer Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-nodes.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-nodes.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-nodes.enabled Whether to enable auto configuration of the kubernetes-nodes component. This is enabled by default. Boolean camel.component.kubernetes-nodes.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-nodes.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-persistent-volumes-claims.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-persistent-volumes-claims.enabled Whether to enable auto configuration of the kubernetes-persistent-volumes-claims component. This is enabled by default. Boolean camel.component.kubernetes-persistent-volumes-claims.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-persistent-volumes-claims.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. 
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-persistent-volumes.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-persistent-volumes.enabled Whether to enable auto configuration of the kubernetes-persistent-volumes component. This is enabled by default. Boolean camel.component.kubernetes-persistent-volumes.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-persistent-volumes.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-pods.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-pods.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-pods.enabled Whether to enable auto configuration of the kubernetes-pods component. This is enabled by default. Boolean camel.component.kubernetes-pods.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-pods.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-replication-controllers.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-replication-controllers.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-replication-controllers.enabled Whether to enable auto configuration of the kubernetes-replication-controllers component. This is enabled by default. Boolean camel.component.kubernetes-replication-controllers.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-replication-controllers.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-resources-quota.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-resources-quota.enabled Whether to enable auto configuration of the kubernetes-resources-quota component. This is enabled by default. Boolean camel.component.kubernetes-resources-quota.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-resources-quota.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. 
false Boolean camel.component.kubernetes-secrets.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-secrets.enabled Whether to enable auto configuration of the kubernetes-secrets component. This is enabled by default. Boolean camel.component.kubernetes-secrets.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-secrets.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-service-accounts.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-service-accounts.enabled Whether to enable auto configuration of the kubernetes-service-accounts component. This is enabled by default. Boolean camel.component.kubernetes-service-accounts.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-service-accounts.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-services.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-services.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-services.enabled Whether to enable auto configuration of the kubernetes-services component. This is enabled by default. Boolean camel.component.kubernetes-services.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-services.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-build-configs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-build-configs.enabled Whether to enable auto configuration of the openshift-build-configs component. This is enabled by default. Boolean camel.component.openshift-build-configs.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-build-configs.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-builds.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-builds.enabled Whether to enable auto configuration of the openshift-builds component. This is enabled by default. Boolean camel.component.openshift-builds.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-builds.lazy-start-producer Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-deploymentconfigs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-deploymentconfigs.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.openshift-deploymentconfigs.enabled Whether to enable auto configuration of the openshift-deploymentconfigs component. This is enabled by default. Boolean camel.component.openshift-deploymentconfigs.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-deploymentconfigs.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kubernetes-starter</artifactId> </dependency>", "kubernetes-job:masterUrl", "from(\"direct:list\"). toF(\"kubernetes-job:///?kubernetesClient=#kubernetesClient&operation=listJob\"). to(\"mock:result\");", "from(\"direct:listByLabels\").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { Map<String, String> labels = new HashMap<>(); labels.put(\"key1\", \"value1\"); labels.put(\"key2\", \"value2\"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_JOB_LABELS, labels); } }); toF(\"kubernetes-job:///?kubernetesClient=#kubernetesClient&operation=listJobByLabels\"). to(\"mock:result\");", "import java.util.ArrayList; import java.util.Date; import java.util.HashMap; import java.util.List; import java.util.Map; import javax.inject.Inject; import org.apache.camel.Endpoint; import org.apache.camel.builder.RouteBuilder; import org.apache.camel.cdi.Uri; import org.apache.camel.component.kubernetes.KubernetesConstants; import org.apache.camel.component.kubernetes.KubernetesOperations; import io.fabric8.kubernetes.api.model.Container; import io.fabric8.kubernetes.api.model.ObjectMeta; import io.fabric8.kubernetes.api.model.PodSpec; import io.fabric8.kubernetes.api.model.PodTemplateSpec; import io.fabric8.kubernetes.api.model.batch.JobSpec; public class KubernetesCreateJob extends RouteBuilder { @Inject @Uri(\"timer:foo?delay=1000&repeatCount=1\") private Endpoint inputEndpoint; @Inject @Uri(\"log:output\") private Endpoint resultEndpoint; @Override public void configure() { // you can configure the route rule with Java DSL here from(inputEndpoint) .routeId(\"kubernetes-jobcreate-client\") .process(exchange -> { exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_JOB_NAME, \"camel-job\"); //DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 
'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*') exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_NAMESPACE_NAME, \"default\"); Map<String, String> joblabels = new HashMap<String, String>(); joblabels.put(\"jobLabelKey1\", \"value1\"); joblabels.put(\"jobLabelKey2\", \"value2\"); joblabels.put(\"app\", \"jobFromCamelApp\"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_JOB_LABELS, joblabels); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_JOB_SPEC, generateJobSpec()); }) .toF(\"kubernetes-job:///{{kubernetes-master-url}}?oauthToken={{kubernetes-oauth-token:}}&operation=\" + KubernetesOperations.CREATE_JOB_OPERATION) .log(\"Job created:\") .process(exchange -> { System.out.println(exchange.getIn().getBody()); }) .to(resultEndpoint); } private JobSpec generateJobSpec() { JobSpec js = new JobSpec(); PodTemplateSpec pts = new PodTemplateSpec(); PodSpec ps = new PodSpec(); ps.setRestartPolicy(\"Never\"); ps.setContainers(generateContainers()); pts.setSpec(ps); ObjectMeta metadata = new ObjectMeta(); Map<String, String> annotations = new HashMap<String, String>(); annotations.put(\"jobMetadataAnnotation1\", \"random value\"); metadata.setAnnotations(annotations); Map<String, String> podlabels = new HashMap<String, String>(); podlabels.put(\"podLabelKey1\", \"value1\"); podlabels.put(\"podLabelKey2\", \"value2\"); podlabels.put(\"app\", \"podFromCamelApp\"); metadata.setLabels(podlabels); pts.setMetadata(metadata); js.setTemplate(pts); return js; } private List<Container> generateContainers() { Container container = new Container(); container.setName(\"pi\"); container.setImage(\"perl\"); List<String> command = new ArrayList<String>(); command.add(\"echo\"); command.add(\"Job created from Apache Camel code at \" + (new Date())); container.setCommand(command); List<Container> containers = new ArrayList<Container>(); containers.add(container); return containers; } }" ]
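The options in the reference above are ordinary Spring Boot auto-configuration keys, so they are normally set in application.properties (or application.yml) rather than in route code. The excerpt below is only an illustrative sketch with example values, not recommended settings; it assumes the camel-kubernetes-starter dependency shown in the commands above is on the classpath.

```properties
# Illustrative values only - adjust for your own cluster and workload.
camel.cluster.kubernetes.enabled = true
camel.cluster.kubernetes.config-map-name = leaders
camel.cluster.kubernetes.lease-duration-millis = 15000

# Component-level tuning for the kubernetes-job component used in the routes above.
camel.component.kubernetes-job.lazy-start-producer = true
camel.component.kubernetes-job.bridge-error-handler = false
```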
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-kubernetes-job-component-starter
Chapter 13. Fixed issues in Red Hat Process Automation Manager 7.13.3
Chapter 13. Fixed issues in Red Hat Process Automation Manager 7.13.3 Red Hat Process Automation Manager 7.13.3 provides increased stability and fixed issues listed in this section. 13.1. Business Central Dashbuilder does not support the type MILLISECOND [ RHPAM-4659 ] An exception occurs while using expressions with the USD character in BRL condition in GDST [ RHDM-1938 ] 13.2. Process engine IntermediateThrowingSignal node from subprocess and the subprocess is not getting marked as executed. [ RHPAM-4653 ] With jbpm-kie-services and Servicesorm.xml the incorrect version of orm is used [ RHPAM-4649 ] Error code: 404 on History button for Process Variable of type: org.jbpm.document.DocumentCollection [ RHPAM-4648 ] Unable to abort process instances that encounter the issue reported in RHPAM-4296 [ RHPAM-4625 ] Some events are missed in event emitters (elastic search) [ RHPAM-4584 ] Update Quarkus version in PIM to support Keystore and truststore passwords to be stored on vault [ RHPAM-4423 ] 13.3. Red Hat build of Kogito BPMN files containing (Java) ServiceTask created using VSCode BPMN Editor causes parser errors in maven build [ RHPAM-4604 ] 13.4. DMN Designer [GSS](7.13.3) DMN extend rule to catch non-normalized named elements [ RHDM-1957 ] 13.5. Red Hat OpenShift Container Platform Red Hat Process Automation Manager Kogito Operator 7.x installation is failing with OOMKilled and CrashLoopBackOff [ RHPAM-4629 ] 13.6. Decision engine NullPointerException in mvel MathProcessor with equality check when null property is on right side [ RHPAM-4642 ] executable-model fails with BigDecimal arithmetic when it's a scope of a method call [ RHDM-1966 ] The str operator with bind variable fails after mvel jitting [ RHDM-1965 ] The executable model build fails when setting negative BigDecimal literal value [ RHDM-1959 ]
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/release_notes_for_red_hat_process_automation_manager_7.13/rn-7.13.3-fixed-issues-ref_release-notes
Chapter 11. Configuring NVMe over fabrics using NVMe/FC
Chapter 11. Configuring NVMe over fabrics using NVMe/FC The Non-volatile Memory Express™ (NVMe™) over Fibre Channel (NVMe™/FC) transport is fully supported in host mode when used with certain Broadcom Emulex and Marvell QLogic Fibre Channel adapters. 11.1. Configuring the NVMe host for Broadcom adapters You can configure a Non-volatile Memory Express™ (NVMe™) host for Broadcom adapters by using the NVMe management command-line interface ( nvme-cli ) utility. Procedure Install the nvme-cli utility: This creates the hostnqn file in the /etc/nvme/ directory. The hostnqn file identifies the NVMe host. Find the World Wide Node Name (WWNN) and World Wide Port Name (WWPN) identifiers of the local and remote ports: Using these host-traddr and traddr values, find the subsystem NVMe Qualified Name (NQN): Replace nn-0x204600a098cbcac6:pn-0x204700a098cbcac6 with the traddr . Replace nn-0x20000090fae0b5f5:pn-0x10000090fae0b5f5 with the host-traddr . Connect to the NVMe controller using the nvme-cli : Note If you see the keep-alive timer (5 seconds) expired! error when the connection time exceeds the default keep-alive timeout value, increase it using the -k option. For example, you can use -k 7 . Here, Replace nn-0x204600a098cbcac6:pn-0x204700a098cbcac6 with the traddr . Replace nn-0x20000090fae0b5f5:pn-0x10000090fae0b5f5 with the host-traddr . Replace nqn.1992-08.com.netapp:sn.e18bfca87d5e11e98c0800a098cbcac6:subsystem.st14_nvme_ss_1_1 with the subnqn . Replace 5 with the keep-alive timeout value in seconds. Verification List the NVMe devices that are currently connected: Additional resources nvme(1) man page on your system 11.2. Configuring the NVMe host for QLogic adapters You can configure a Non-volatile Memory Express™ (NVMe™) host for QLogic adapters by using the NVMe management command-line interface ( nvme-cli ) utility. Procedure Install the nvme-cli utility: This creates the hostnqn file in the /etc/nvme/ directory. The hostnqn file identifies the NVMe host. Reload the qla2xxx module: Find the World Wide Node Name (WWNN) and World Wide Port Name (WWPN) identifiers of the local and remote ports: Using these host-traddr and traddr values, find the subsystem NVMe Qualified Name (NQN): Replace nn-0x203b00a098cbcac6:pn-0x203d00a098cbcac6 with the traddr . Replace nn-0x20000024ff19bb62:pn-0x21000024ff19bb62 with the host-traddr . Connect to the NVMe controller using the nvme-cli tool: Note If you see the keep-alive timer (5 seconds) expired! error when the connection time exceeds the default keep-alive timeout value, increase it using the -k option. For example, you can use -k 7 . Here, Replace nn-0x203b00a098cbcac6:pn-0x203d00a098cbcac6 with the traddr . Replace nn-0x20000024ff19bb62:pn-0x21000024ff19bb62 with the host-traddr . Replace nqn.1992-08.com.netapp:sn.c9ecc9187b1111e98c0800a098cbcac6:subsystem.vs_nvme_multipath_1_subsystem_468 with the subnqn . Replace 5 with the keep-alive timeout value in seconds. Verification List the NVMe devices that are currently connected: Additional resources nvme(1) man page on your system 11.3. Next steps Enabling multipathing on NVMe devices
[ "dnf install nvme-cli", "cat /sys/class/scsi_host/host*/nvme_info NVME Host Enabled XRI Dist lpfc0 Total 6144 IO 5894 ELS 250 NVME LPORT lpfc0 WWPN x10000090fae0b5f5 WWNN x20000090fae0b5f5 DID x010f00 ONLINE NVME RPORT WWPN x204700a098cbcac6 WWNN x204600a098cbcac6 DID x01050e TARGET DISCSRVC ONLINE NVME Statistics LS: Xmt 000000000e Cmpl 000000000e Abort 00000000 LS XMIT: Err 00000000 CMPL: xb 00000000 Err 00000000 Total FCP Cmpl 00000000000008ea Issue 00000000000008ec OutIO 0000000000000002 abort 00000000 noxri 00000000 nondlp 00000000 qdepth 00000000 wqerr 00000000 err 00000000 FCP CMPL: xb 00000000 Err 00000000", "nvme discover --transport fc \\ --traddr nn-0x204600a098cbcac6:pn-0x204700a098cbcac6 \\ --host-traddr nn-0x20000090fae0b5f5:pn-0x10000090fae0b5f5 Discovery Log Number of Records 2, Generation counter 49530 =====Discovery Log Entry 0====== trtype: fc adrfam: fibre-channel subtype: nvme subsystem treq: not specified portid: 0 trsvcid: none subnqn: nqn.1992-08.com.netapp:sn.e18bfca87d5e11e98c0800a098cbcac6:subsystem.st14_nvme_ss_1_1 traddr: nn-0x204600a098cbcac6:pn-0x204700a098cbcac6", "nvme connect --transport fc \\ --traddr nn-0x204600a098cbcac6:pn-0x204700a098cbcac6 \\ --host-traddr nn-0x20000090fae0b5f5:pn-0x10000090fae0b5f5 \\ -n nqn.1992-08.com.netapp:sn.e18bfca87d5e11e98c0800a098cbcac6:subsystem.st14_nvme_ss_1_1 \\ -k 5", "nvme list Node SN Model Namespace Usage Format FW Rev ---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- -------- /dev/nvme0n1 80BgLFM7xMJbAAAAAAAC NetApp ONTAP Controller 1 107.37 GB / 107.37 GB 4 KiB + 0 B FFFFFFFF lsblk |grep nvme nvme0n1 259:0 0 100G 0 disk", "dnf install nvme-cli", "modprobe -r qla2xxx modprobe qla2xxx", "dmesg |grep traddr [ 6.139862] qla2xxx [0000:04:00.0]-ffff:0: register_localport: host-traddr=nn-0x20000024ff19bb62:pn-0x21000024ff19bb62 on portID:10700 [ 6.241762] qla2xxx [0000:04:00.0]-2102:0: qla_nvme_register_remote: traddr=nn-0x203b00a098cbcac6:pn-0x203d00a098cbcac6 PortID:01050d", "nvme discover --transport fc \\ --traddr nn-0x203b00a098cbcac6:pn-0x203d00a098cbcac6 \\ --host-traddr nn-0x20000024ff19bb62:pn-0x21000024ff19bb62 Discovery Log Number of Records 2, Generation counter 49530 =====Discovery Log Entry 0====== trtype: fc adrfam: fibre-channel subtype: nvme subsystem treq: not specified portid: 0 trsvcid: none subnqn: nqn.1992-08.com.netapp:sn.c9ecc9187b1111e98c0800a098cbcac6:subsystem.vs_nvme_multipath_1_subsystem_468 traddr: nn-0x203b00a098cbcac6:pn-0x203d00a098cbcac6", "nvme connect --transport fc \\ --traddr nn-0x203b00a098cbcac6:pn-0x203d00a098cbcac6 \\ --host-traddr nn-0x20000024ff19bb62:pn-0x21000024ff19bb62 \\ -n nqn.1992-08.com.netapp:sn.c9ecc9187b1111e98c0800a098cbcac6:subsystem.vs_nvme_multipath_1_subsystem_468 \\ -k 5", "nvme list Node SN Model Namespace Usage Format FW Rev ---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- -------- /dev/nvme0n1 80BgLFM7xMJbAAAAAAAC NetApp ONTAP Controller 1 107.37 GB / 107.37 GB 4 KiB + 0 B FFFFFFFF lsblk |grep nvme nvme0n1 259:0 0 100G 0 disk" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_storage_devices/configuring-nvme-over-fabrics-using-nvme-fc_managing-storage-devices
Red Hat OpenShift Data Foundation architecture
Red Hat OpenShift Data Foundation architecture Red Hat OpenShift Data Foundation 4.9 Overview of OpenShift Data Foundation architecture and the roles that the components and services perform. Red Hat Storage Documentation Team Abstract This document provides an overview of the OpenShift Data Foundation architecture.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/red_hat_openshift_data_foundation_architecture/index
Chapter 13. Tagging virtual devices
Chapter 13. Tagging virtual devices In Red Hat OpenStack Platform (RHOSP), if you attach multiple network interfaces or block devices to an instance, you can use device tagging to communicate the intended role of each device to the instance operating system. Tags are assigned to devices at instance boot time, and are available to the instance operating system through the metadata API and the configuration drive, if enabled. You can also tag virtual devices to a running instance. For more information, see the following procedures: Attaching a network to an instance Attaching a volume to an instance Procedure Create your instance with a virtual block device tag and a virtual network device tag: Replace <myNicTag> with the name of the tag for the virtual NIC device. You can add as many tagged virtual devices as you require. Replace <myVolumeTag> with the name of the tag for the virtual storage device. You can add as many tagged virtual devices as you require. Verify that the virtual device tags have been added to the instance metadata by using one of the following methods: Retrieve the device tag metadata from the metadata API by using GET /openstack/latest/meta_data.json . If the configuration drive is enabled and mounted under /configdrive on the instance operating system, view the /configdrive/openstack/latest/meta_data.json file. Example meta_data.json file:
[ "openstack server create --flavor m1.tiny --image cirros --network <network_UUID> --nic net-id=<network_UUID>,tag=<myNicTag> --block-device id=<volume_ID>,bus=virtio,tag=<myVolumeTag> myTaggedDevicesInstance", "{ \"devices\": [ { \"type\": \"nic\", \"bus\": \"pci\", \"address\": \"0030:00:02.0\", \"mac\": \"aa:00:00:00:01\", \"tags\": [\"myNicTag\"] }, { \"type\": \"disk\", \"bus\": \"pci\", \"address\": \"0030:00:07.0\", \"serial\": \"disk-vol-227\", \"tags\": [\"myVolumeTag\"] } ] }" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/creating_and_managing_instances/tag-virt-devices_instances
Chapter 3. Installing a cluster quickly on Alibaba Cloud
Chapter 3. Installing a cluster quickly on Alibaba Cloud In OpenShift Container Platform version 4.12, you can install a cluster on Alibaba Cloud that uses the default configuration options. Important Alibaba Cloud on OpenShift Container Platform is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 3.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You registered your domain . If you use a firewall, you configured it to allow the sites that your cluster requires access to. You have created the required Alibaba Cloud resources . If the cloud Resource Access Management (RAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain Resource Access Management (RAM) credentials . 3.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.12, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 3.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. 
Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure that your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 3.4. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program.
For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 3.5. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Alibaba Cloud. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select alibabacloud as the platform to target. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. Provide a descriptive name for your cluster. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Installing the cluster into Alibaba Cloud requires that the Cloud Credential Operator (CCO) operate in manual mode. Modify the install-config.yaml file to set the credentialsMode parameter to Manual : Example install-config.yaml configuration file with credentialsMode set to Manual apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled ... 1 Add this line to set the credentialsMode to Manual . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 3.6. Generating the required installation manifests You must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. Procedure Generate the manifests by running the following command from the directory that contains the installation program: USD openshift-install create manifests --dir <installation_directory> where: <installation_directory> Specifies the directory in which the installation program creates files. 3.7. 
Creating credentials for OpenShift Container Platform components with the ccoctl tool You can use the OpenShift Container Platform Cloud Credential Operator (CCO) utility to automate the creation of Alibaba Cloud RAM users and policies for each in-cluster component. Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. Created a RAM user with sufficient permission to create the OpenShift Container Platform cluster. Added the AccessKeyID ( access_key_id ) and AccessKeySecret ( access_key_secret ) of that RAM user into the ~/.alibabacloud/credentials file on your local computer. Procedure Set the USDRELEASE_IMAGE variable by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --credentials-requests \ --cloud=alibabacloud \ --to=<path_to_directory_with_list_of_credentials_requests>/credrequests \ 1 USDRELEASE_IMAGE 1 credrequests is the directory where the list of CredentialsRequest objects is stored. This command creates the directory if it does not exist. Note This command can take a few moments to run. If your cluster uses cluster capabilities to disable one or more optional components, delete the CredentialsRequest custom resources for any disabled components. Example credrequests directory contents for OpenShift Container Platform 4.12 on Alibaba Cloud 0000_30_machine-api-operator_00_credentials-request.yaml 1 0000_50_cluster-image-registry-operator_01-registry-credentials-request-alibaba.yaml 2 0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml 3 0000_50_cluster-storage-operator_03_credentials_request_alibaba.yaml 4 1 The Machine API Operator CR is required. 2 The Image Registry Operator CR is required. 3 The Ingress Operator CR is required. 4 The Storage Operator CR is an optional component and might be disabled in your cluster. Use the ccoctl tool to process all CredentialsRequest objects in the credrequests directory: Run the following command to use the tool: USD ccoctl alibabacloud create-ram-users \ --name <name> \ --region=<alibaba_region> \ --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests \ --output-dir=<path_to_ccoctl_output_dir> where: <name> is the name used to tag any cloud resources that are created for tracking. <alibaba_region> is the Alibaba Cloud region in which cloud resources will be created. <path_to_directory_with_list_of_credentials_requests>/credrequests is the directory containing the files for the component CredentialsRequest objects. <path_to_ccoctl_output_dir> is the directory where the generated component credentials secrets will be placed. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. 
Example output 2022/02/11 16:18:26 Created RAM User: user1-alicloud-openshift-machine-api-alibabacloud-credentials 2022/02/11 16:18:27 Ready for creating new ram policy user1-alicloud-openshift-machine-api-alibabacloud-credentials-policy-policy 2022/02/11 16:18:27 RAM policy user1-alicloud-openshift-machine-api-alibabacloud-credentials-policy-policy has created 2022/02/11 16:18:28 Policy user1-alicloud-openshift-machine-api-alibabacloud-credentials-policy-policy has attached on user user1-alicloud-openshift-machine-api-alibabacloud-credentials 2022/02/11 16:18:29 Created access keys for RAM User: user1-alicloud-openshift-machine-api-alibabacloud-credentials 2022/02/11 16:18:29 Saved credentials configuration to: user1-alicloud/manifests/openshift-machine-api-alibabacloud-credentials-credentials.yaml ... Note A RAM user can have up to two AccessKeys at the same time. If you run ccoctl alibabacloud create-ram-users more than twice, the generated manifests secret becomes stale and you must reapply the newly generated secrets. Verify that the OpenShift Container Platform secrets are created: USD ls <path_to_ccoctl_output_dir>/manifests Example output: openshift-cluster-csi-drivers-alibaba-disk-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-alibabacloud-credentials-credentials.yaml You can verify that the RAM users and policies are created by querying Alibaba Cloud. For more information, refer to Alibaba Cloud documentation on listing RAM users and policies. Copy the generated credential files to the target manifests directory: USD cp ./<path_to_ccoctl_output_dir>/manifests/*credentials.yaml ./<path_to_installation_dir>/manifests/ where: <path_to_ccoctl_output_dir> Specifies the directory created by the ccoctl alibabacloud create-ram-users command. <path_to_installation_dir> Specifies the directory in which the installation program creates files. 3.8. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Verify the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log .
Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 3.9. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.12. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.12 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.12 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. 
Click Download Now to the OpenShift v4.12 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.12 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 3.10. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 3.11. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: USD cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. List the OpenShift Container Platform web console route: USD oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. 3.12. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.12, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. See About remote health monitoring for more information about the Telemetry service 3.13. steps Validating an installation . Customize your cluster . If necessary, you can opt out of remote health reporting .
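To summarize the credential and deployment workflow in this chapter, one possible end-to-end sequence is sketched below. This is an illustration only: the cluster name, the Alibaba Cloud region, and the credrequests, ccoctl-output, and install-dir directories are placeholder values chosen for the sketch, not required names.
# Extract the CredentialsRequest objects for Alibaba Cloud from the release image
RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
oc adm release extract \
    --credentials-requests \
    --cloud=alibabacloud \
    --to=./credrequests \
    "${RELEASE_IMAGE}"

# Create the RAM users and component credential secrets with ccoctl
# (cluster name and region are placeholders)
ccoctl alibabacloud create-ram-users \
    --name mycluster \
    --region=cn-hangzhou \
    --credentials-requests-dir=./credrequests \
    --output-dir=./ccoctl-output

# Copy the generated secrets into the installation manifests directory,
# then create the cluster
cp ./ccoctl-output/manifests/*credentials.yaml ./install-dir/manifests/
./openshift-install create cluster --dir ./install-dir --log-level=info
Each command in the sketch corresponds to a step described earlier in this chapter; adjust the directories to match your own layout.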
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create install-config --dir <installation_directory> 1", "apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled", "openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --credentials-requests --cloud=alibabacloud --to=<path_to_directory_with_list_of_credentials_requests>/credrequests \\ 1 USDRELEASE_IMAGE", "0000_30_machine-api-operator_00_credentials-request.yaml 1 0000_50_cluster-image-registry-operator_01-registry-credentials-request-alibaba.yaml 2 0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml 3 0000_50_cluster-storage-operator_03_credentials_request_alibaba.yaml 4", "ccoctl alibabacloud create-ram-users --name <name> --region=<alibaba_region> --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests --output-dir=<path_to_ccoctl_output_dir>", "2022/02/11 16:18:26 Created RAM User: user1-alicloud-openshift-machine-api-alibabacloud-credentials 2022/02/11 16:18:27 Ready for creating new ram policy user1-alicloud-openshift-machine-api-alibabacloud-credentials-policy-policy 2022/02/11 16:18:27 RAM policy user1-alicloud-openshift-machine-api-alibabacloud-credentials-policy-policy has created 2022/02/11 16:18:28 Policy user1-alicloud-openshift-machine-api-alibabacloud-credentials-policy-policy has attached on user user1-alicloud-openshift-machine-api-alibabacloud-credentials 2022/02/11 16:18:29 Created access keys for RAM User: user1-alicloud-openshift-machine-api-alibabacloud-credentials 2022/02/11 16:18:29 Saved credentials configuration to: user1-alicloud/manifests/openshift-machine-api-alibabacloud-credentials-credentials.yaml", "ls <path_to_ccoctl_output_dir>/manifests", "openshift-cluster-csi-drivers-alibaba-disk-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-alibabacloud-credentials-credentials.yaml", "cp ./<path_to_ccoctl_output_dir>/manifests/*credentials.yaml ./<path_to_installation>dir>/manifests/", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "cat <installation_directory>/auth/kubeadmin-password", "oc get routes -n openshift-console | grep 'console-openshift'", "console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installing_on_alibaba/installing-alibaba-default
Chapter 27. dns
Chapter 27. dns This chapter describes the commands under the dns command. 27.1. dns quota list List quotas Usage: Table 27.1. Command arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None --project-id PROJECT_ID Project id default: current project Table 27.2. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 27.3. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 27.4. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 27.5. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 27.2. dns quota reset Reset quotas Usage: Table 27.6. Command arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None --project-id PROJECT_ID Project id 27.3. dns quota set Set quotas Usage: Table 27.7. Command arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None --project-id PROJECT_ID Project id --api-export-size <api-export-size> New value for the api-export-size quota --recordset-records <recordset-records> New value for the recordset-records quota --zone-records <zone-records> New value for the zone-records quota --zone-recordsets <zone-recordsets> New value for the zone-recordsets quota --zones <zones> New value for the zones quota Table 27.8. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 27.9. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 27.10. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 27.11. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 27.4. dns service list List service statuses Usage: Table 27.12. Command arguments Value Summary -h, --help Show this help message and exit --hostname HOSTNAME Hostname --service_name SERVICE_NAME Service name --status STATUS Status --all-projects Show results from all projects. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. 
default: None Table 27.13. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 27.14. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 27.15. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 27.16. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 27.5. dns service show Show service status details Usage: Table 27.17. Positional arguments Value Summary id Service status id Table 27.18. Command arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 27.19. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 27.20. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 27.21. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 27.22. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
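The tables above describe the options only; the following illustrative session shows how the dns quota and dns service commands are typically combined. The project ID and the status value are placeholders used for the example.
# List the current DNS quotas for a project (placeholder project ID)
openstack dns quota list --project-id 3d8a4e16a44216ae7bb2f30810ab3e1f

# Raise the zones and recordset-records quotas for that project
openstack dns quota set --project-id 3d8a4e16a44216ae7bb2f30810ab3e1f \
    --zones 20 --recordset-records 50

# Return the project to the default quotas
openstack dns quota reset --project-id 3d8a4e16a44216ae7bb2f30810ab3e1f

# Review the status of the DNS services, formatted as YAML
openstack dns service list --status UP -f yaml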
[ "openstack dns quota list [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--all-projects] [--sudo-project-id SUDO_PROJECT_ID] [--project-id PROJECT_ID]", "openstack dns quota reset [-h] [--all-projects] [--sudo-project-id SUDO_PROJECT_ID] [--project-id PROJECT_ID]", "openstack dns quota set [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--all-projects] [--sudo-project-id SUDO_PROJECT_ID] [--project-id PROJECT_ID] [--api-export-size <api-export-size>] [--recordset-records <recordset-records>] [--zone-records <zone-records>] [--zone-recordsets <zone-recordsets>] [--zones <zones>]", "openstack dns service list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--hostname HOSTNAME] [--service_name SERVICE_NAME] [--status STATUS] [--all-projects] [--sudo-project-id SUDO_PROJECT_ID]", "openstack dns service show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--all-projects] [--sudo-project-id SUDO_PROJECT_ID] id" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/command_line_interface_reference/dns
Chapter 23. Installing and configuring a PostgreSQL database server by using RHEL system roles
Chapter 23. Installing and configuring a PostgreSQL database server by using RHEL system roles You can use the postgresql RHEL system role to automate the installation and management of the PostgreSQL database server. By default, this role also optimizes PostgreSQL by automatically configuring performance-related settings in the PostgreSQL service configuration files. 23.1. Configuring PostgreSQL with an existing TLS certificate by using the postgresql RHEL system role If your application requires a PostgreSQL database server, you can configure this service with TLS encryption to enable secure communication between the application and the database. By using the postgresql RHEL system role, you can automate this process and remotely install and configure PostgreSQL with TLS encryption. In the playbook, you can use an existing private key and a TLS certificate that was issued by a certificate authority (CA). Note The postgresql role cannot open ports in the firewalld service. To allow remote access to the PostgreSQL server, add a task that uses the firewall RHEL system role to your playbook. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Both the private key of the managed node and the certificate are stored on the control node in the following files: Private key: ~/ <FQDN_of_the_managed_node> .key Certificate: ~/ <FQDN_of_the_managed_node> .crt Procedure Store your sensitive variables in an encrypted file: Create the vault: After the ansible-vault create command opens an editor, enter the sensitive data in the <key> : <value> format: pwd: <password> Save the changes, and close the editor. Ansible encrypts the data in the vault. Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Installing and configuring PostgreSQL hosts: managed-node-01.example.com vars_files: - vault.yml tasks: - name: Create directory for TLS certificate and key ansible.builtin.file: path: /etc/postgresql/ state: directory mode: 755 - name: Copy CA certificate ansible.builtin.copy: src: "~/{{ inventory_hostname }}.crt" dest: "/etc/postgresql/server.crt" - name: Copy private key ansible.builtin.copy: src: "~/{{ inventory_hostname }}.key" dest: "/etc/postgresql/server.key" mode: 0600 - name: PostgreSQL with an existing private key and certificate ansible.builtin.include_role: name: rhel-system-roles.postgresql vars: postgresql_version: "16" postgresql_password: "{{ pwd }}" postgresql_ssl_enable: true postgresql_cert_name: "/etc/postgresql/server" postgresql_server_conf: listen_addresses: "'*'" password_encryption: scram-sha-256 postgresql_pg_hba_conf: - type: local database: all user: all auth_method: scram-sha-256 - type: hostssl database: all user: all address: '127.0.0.1/32' auth_method: scram-sha-256 - type: hostssl database: all user: all address: '::1/128' auth_method: scram-sha-256 - type: hostssl database: all user: all address: '192.0.2.0/24' auth_method: scram-sha-256 - name: Open the PostgresQL port in firewalld ansible.builtin.include_role: name: rhel-system-roles.firewall vars: firewall: - service: postgresql state: enabled The settings specified in the example playbook include the following: postgresql_version: <version> Sets the version of PostgreSQL to install. 
The version you can set depends on the PostgreSQL versions that are available in Red Hat Enterprise Linux running on the managed node. You cannot upgrade or downgrade PostgreSQL by changing the postgresql_version variable and running the playbook again. postgresql_password: <password> Sets the password of the postgres database superuser. You cannot change the password by changing the postgresql_password variable and running the playbook again. postgresql_cert_name: <private_key_and_certificate_file> Defines the path and base name of both the certificate and private key on the managed node without .crt and key suffixes. During the PostgreSQL configuration, the role creates symbolic links in the /var/lib/pgsql/data/ directory that refer to these files. The certificate and private key must exist locally on the managed node. You can use tasks with the ansible.builtin.copy module to transfer the files from the control node to the managed node, as shown in the playbook. postgresql_server_conf: <list_of_settings> Defines postgresql.conf settings the role should set. The role adds these settings to the /etc/postgresql/system-roles.conf file and includes this file at the end of /var/lib/pgsql/data/postgresql.conf . Consequently, settings from the postgresql_server_conf variable override settings in /var/lib/pgsql/data/postgresql.conf . Re-running the playbook with different settings in postgresql_server_conf overwrites the /etc/postgresql/system-roles.conf file with the new settings. postgresql_pg_hba_conf: <list_of_authentication_entries> Configures client authentication entries in the /var/lib/pgsql/data/pg_hba.conf file. For details, see see the PostgreSQL documentation. The example allows the following connections to PostgreSQL: Unencrypted connections by using local UNIX domain sockets. TLS-encrypted connections to the IPv4 and IPv6 localhost addresses. TLS-encrypted connections from the 192.0.2.0/24 subnet. Note that access from remote addresses is only possible if you also configure the listen_addresses setting in the postgresql_server_conf variable appropriately. Re-running the playbook with different settings in postgresql_pg_hba_conf overwrites the /var/lib/pgsql/data/pg_hba.conf file with the new settings. For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.postgresql/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification Use the postgres super user to connect to a PostgreSQL server and execute the \conninfo meta command: If the output displays a TLS protocol version and cipher details, the connection works and TLS encryption is enabled. Additional resources /usr/share/ansible/roles/rhel-system-roles.postgresql/README.md file /usr/share/doc/rhel-system-roles/postgresql/ directory Ansible vault 23.2. Configuring PostgreSQL with a TLS certificate issued from IdM by using the postgresql RHEL system role If your application requires a PostgreSQL database server, you can configure the PostgreSQL service with TLS encryption to enable secure communication between the application and the database. If the PostgreSQL host is a member of a Red Hat Enterprise Linux Identity Management (IdM) domain, the certmonger service can manage the certificate request and future renewals. By using the postgresql RHEL system role, you can automate this process. 
You can remotely install and configure PostgreSQL with TLS encryption, and the postgresql role uses the certificate RHEL system role to configure certmonger and request a certificate from IdM. Note The postgresql role cannot open ports in the firewalld service. To allow remote access to the PostgreSQL server, add a task to your playbook that uses the firewall RHEL system role. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. You enrolled the managed node in an IdM domain. Procedure Store your sensitive variables in an encrypted file: Create the vault: After the ansible-vault create command opens an editor, enter the sensitive data in the <key> : <value> format: pwd: <password> Save the changes, and close the editor. Ansible encrypts the data in the vault. Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Installing and configuring PostgreSQL hosts: managed-node-01.example.com vars_files: - vault.yml tasks: - name: PostgreSQL with certificates issued by IdM ansible.builtin.include_role: name: rhel-system-roles.postgresql vars: postgresql_version: "16" postgresql_password: "{{ pwd }}" postgresql_ssl_enable: true postgresql_certificates: - name: postgresql_cert dns: "{{ inventory_hostname }}" ca: ipa principal: "postgresql/{{ inventory_hostname }}@EXAMPLE.COM" postgresql_server_conf: listen_addresses: "'*'" password_encryption: scram-sha-256 postgresql_pg_hba_conf: - type: local database: all user: all auth_method: scram-sha-256 - type: hostssl database: all user: all address: '127.0.0.1/32' auth_method: scram-sha-256 - type: hostssl database: all user: all address: '::1/128' auth_method: scram-sha-256 - type: hostssl database: all user: all address: '192.0.2.0/24' auth_method: scram-sha-256 - name: Open the PostgresQL port in firewalld ansible.builtin.include_role: name: rhel-system-roles.firewall vars: firewall: - service: postgresql state: enabled The settings specified in the example playbook include the following: postgresql_version: <version> Sets the version of PostgreSQL to install. The version you can set depends on the PostgreSQL versions that are available in Red Hat Enterprise Linux running on the managed node. You cannot upgrade or downgrade PostgreSQL by changing the postgresql_version variable and running the playbook again. postgresql_password: <password> Sets the password of the postgres database superuser. You cannot change the password by changing the postgresql_password variable and running the playbook again. postgresql_certificates: <certificate_role_settings> A list of YAML dictionaries with settings for the certificate role. postgresql_server_conf: <list_of_settings> Defines postgresql.conf settings you want the role to set. The role adds these settings to the /etc/postgresql/system-roles.conf file and includes this file at the end of /var/lib/pgsql/data/postgresql.conf . Consequently, settings from the postgresql_server_conf variable override settings in /var/lib/pgsql/data/postgresql.conf . Re-running the playbook with different settings in postgresql_server_conf overwrites the /etc/postgresql/system-roles.conf file with the new settings. postgresql_pg_hba_conf: <list_of_authentication_entries> Configures client authentication entries in the /var/lib/pgsql/data/pg_hba.conf file. For details, see see the PostgreSQL documentation. 
The example allows the following connections to PostgreSQL: Unencrypted connections by using local UNIX domain sockets. TLS-encrypted connections to the IPv4 and IPv6 localhost addresses. TLS-encrypted connections from the 192.0.2.0/24 subnet. Note that access from remote addresses is only possible if you also configure the listen_addresses setting in the postgresql_server_conf variable appropriately. Re-running the playbook with different settings in postgresql_pg_hba_conf overwrites the /var/lib/pgsql/data/pg_hba.conf file with the new settings. For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.postgresql/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification Use the postgres super user to connect to a PostgreSQL server and execute the \conninfo meta command: If the output displays a TLS protocol version and cipher details, the connection works and TLS encryption is enabled. Additional resources /usr/share/ansible/roles/rhel-system-roles.postgresql/README.md file /usr/share/doc/rhel-system-roles/postgresql/ directory Ansible vault
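As a compact recap of the workflow above, the control-node commands are sketched below. The file names follow the examples in this chapter, 192.0.2.1 stands in for your managed node, and the sslmode=require parameter and the postgres database path in the connection URI are additions made only to force an encrypted connection in this illustration.
# Create the vault that stores the postgres superuser password (add: pwd: <password>)
ansible-vault create vault.yml

# Check the playbook syntax, then apply the playbook
ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml
ansible-playbook --ask-vault-pass ~/playbook.yml

# Confirm that the server accepts TLS-encrypted connections
psql "postgresql://postgres@192.0.2.1:5432/postgres?sslmode=require" -c '\conninfo'
If the \conninfo output reports a TLS protocol version and cipher, the role configured TLS correctly.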
[ "ansible-vault create vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>", "pwd: <password>", "--- - name: Installing and configuring PostgreSQL hosts: managed-node-01.example.com vars_files: - vault.yml tasks: - name: Create directory for TLS certificate and key ansible.builtin.file: path: /etc/postgresql/ state: directory mode: 755 - name: Copy CA certificate ansible.builtin.copy: src: \"~/{{ inventory_hostname }}.crt\" dest: \"/etc/postgresql/server.crt\" - name: Copy private key ansible.builtin.copy: src: \"~/{{ inventory_hostname }}.key\" dest: \"/etc/postgresql/server.key\" mode: 0600 - name: PostgreSQL with an existing private key and certificate ansible.builtin.include_role: name: rhel-system-roles.postgresql vars: postgresql_version: \"16\" postgresql_password: \"{{ pwd }}\" postgresql_ssl_enable: true postgresql_cert_name: \"/etc/postgresql/server\" postgresql_server_conf: listen_addresses: \"'*'\" password_encryption: scram-sha-256 postgresql_pg_hba_conf: - type: local database: all user: all auth_method: scram-sha-256 - type: hostssl database: all user: all address: '127.0.0.1/32' auth_method: scram-sha-256 - type: hostssl database: all user: all address: '::1/128' auth_method: scram-sha-256 - type: hostssl database: all user: all address: '192.0.2.0/24' auth_method: scram-sha-256 - name: Open the PostgresQL port in firewalld ansible.builtin.include_role: name: rhel-system-roles.firewall vars: firewall: - service: postgresql state: enabled", "ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml", "ansible-playbook --ask-vault-pass ~/playbook.yml", "psql \"postgresql://[email protected]:5432\" -c '\\conninfo' Password for user postgres: You are connected to database \"postgres\" as user \"postgres\" on host \"192.0.2.1\" at port \"5432\". SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, compression: off)", "ansible-vault create vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>", "pwd: <password>", "--- - name: Installing and configuring PostgreSQL hosts: managed-node-01.example.com vars_files: - vault.yml tasks: - name: PostgreSQL with certificates issued by IdM ansible.builtin.include_role: name: rhel-system-roles.postgresql vars: postgresql_version: \"16\" postgresql_password: \"{{ pwd }}\" postgresql_ssl_enable: true postgresql_certificates: - name: postgresql_cert dns: \"{{ inventory_hostname }}\" ca: ipa principal: \"postgresql/{{ inventory_hostname }}@EXAMPLE.COM\" postgresql_server_conf: listen_addresses: \"'*'\" password_encryption: scram-sha-256 postgresql_pg_hba_conf: - type: local database: all user: all auth_method: scram-sha-256 - type: hostssl database: all user: all address: '127.0.0.1/32' auth_method: scram-sha-256 - type: hostssl database: all user: all address: '::1/128' auth_method: scram-sha-256 - type: hostssl database: all user: all address: '192.0.2.0/24' auth_method: scram-sha-256 - name: Open the PostgresQL port in firewalld ansible.builtin.include_role: name: rhel-system-roles.firewall vars: firewall: - service: postgresql state: enabled", "ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml", "ansible-playbook --ask-vault-pass ~/playbook.yml", "psql \"postgresql://[email protected]:5432\" -c '\\conninfo' Password for user postgres: You are connected to database \"postgres\" as user \"postgres\" on host \"192.0.2.1\" at port \"5432\". SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, compression: off)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/automating_system_administration_by_using_rhel_system_roles/installing-and-configuring-postgresql-by-using-the-postgresql-rhel-system-role_automating-system-administration-by-using-rhel-system-roles
8.71. glusterfs
8.71. glusterfs 8.71.1. RHBA-2014:1576 - glusterfs bug fix and enhancement update Updated glusterfs packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. Red Hat Storage is software-only, scale-out storage that provides flexible and affordable unstructured data storage for an enterprise. GlusterFS, a key building block of Red Hat Storage, is based on a stackable user-space design and can deliver exceptional performance for diverse workloads. GlusterFS aggregates various storage servers over network interconnections into one large, parallel network file system. Note The glusterfs packages have been upgraded to upstream version 3.6.0, which provides a number of bug fixes and enhancements over the previous version. (BZ# 1095604 ) This update also fixes the following bugs: Bug Fixes BZ# 1044797 When trying to mount a remote gluster share using the "backupvolfile-server" option, the process failed and returned an "Invalid argument" error message. A new mount point option, backup-volfile-servers, has been added to provide backward compatibility and allows remote gluster shares to be mounted successfully. BZ# 1119205 Previously, when updating the gluster utility from Red Hat Enterprise Linux 6.5 to version 6.6, the glusterfs-libs POSTIN scriptlet terminated unexpectedly. The underlying source code has been patched, and POSTIN no longer fails. Users of glusterfs are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
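As a rough illustration of the backup-volfile-servers option referenced in BZ#1044797, a native-client mount that lists fallback volfile servers might look like the following. The hostnames, volume name, and mount point are placeholders, and the exact option syntax should be confirmed against the Red Hat Storage documentation for your release.
# Mount a gluster volume, naming additional volfile servers to fall back on
# (server names, volume, and mount point are illustrative placeholders)
mount -t glusterfs \
    -o backup-volfile-servers=server2.example.com:server3.example.com \
    server1.example.com:/myvolume /mnt/glusterfs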
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/glusterfs
Chapter 2. Deploying HCI hardware
Chapter 2. Deploying HCI hardware This section contains procedures and information about the preparation and configuration of hyperconverged nodes. Prerequisites You have read Deploying an overcloud and Red Hat Ceph Storage in Deploying Red Hat Ceph and OpenStack together with director . 2.1. Cleaning Ceph Storage node disks Ceph Storage OSDs and journal partitions require factory clean disks. All data and metadata must be erased by the Bare Metal Provisioning service (ironic) from these disks before installing the Ceph OSD services. You can configure director to delete all disk data and metadata by default by using the Bare Metal Provisioning service. When director is configured to perform this task, the Bare Metal Provisioning service performs an additional step to boot the nodes each time a node is set to available . Warning The Bare Metal Provisioning service uses the wipefs --force --all command. This command deletes all data and metadata on the disk but it does not perform a secure erase. A secure erase takes much longer. Procedure Open /home/stack/undercloud.conf and add the following parameter: Save /home/stack/undercloud.conf . Update the undercloud configuration. 2.2. Registering nodes Register the nodes to enable communication with director. Procedure Create a node inventory JSON file in /home/stack . Enter hardware and power management details for each node. For example: Save the new file. Initialize the stack user: Import the JSON inventory file into director and register nodes Replace <inventory_file> with the name of the file created in the first step. Assign the kernel and ramdisk images to each node: 2.3. Verifying available Red Hat Ceph Storage packages Verify all required packages are available to avoid overcloud deployment failures. 2.3.1. Verifying cephadm package installation Verify the cephadm package is installed on at least one overcloud node. The cephadm package is used to bootstrap the first node of the Ceph Storage cluster. The cephadm package is included in the overcloud-hardened-uefi-full.qcow2 image. The tripleo_cephadm role uses the Ansible package module to ensure it is present in the image. 2.4. Deploying the software image for an HCI environment Nodes configured for an HCI environment must use the overcloud-hardened-uefi-full.qcow2 software image. Using this software image requires a Red Hat OpenStack Platform (RHOSP) subscription. Procedure Open your /home/stack/templates/overcloud-baremetal-deploy.yaml file. Add or update the image property for nodes that require the overcloud-hardened-uefi-full image. You can set the image to be used on specific nodes, or for all nodes that use a specific role: Specific nodes All nodes configured for a specific role In the roles_data.yaml role definition file, set the rhsm_enforce parameter to False . Run the provisioning command: Pass the overcloud-baremetal-deployed.yaml environment file to the openstack overcloud deploy command. 2.5. Designating nodes for HCI To designate nodes for HCI, you must create a new role file to configure the ComputeHCI role, and configure the bare metal nodes with a resource class for ComputeHCI . Procedure Log in to the undercloud as the stack user. 
Source the stackrc credentials file: Generate a new roles data file named roles_data.yaml that includes the Controller and ComputeHCI roles: Open roles_data.yaml and ensure that it has the following parameters and sections: Section/Parameter Value Role comment Role: ComputeHCI Role name name: ComputeHCI description HCI role HostnameFormatDefault %stackname%-novaceph-%index% deprecated_nic_config_name ceph.yaml Register the ComputeHCI nodes for the overcloud by adding them to your node definition template, node.json or node.yaml . Inspect the node hardware: Tag each bare metal node that you want to designate for HCI with a custom HCI resource class: Replace <node> with the ID of the bare metal node. Add the ComputeHCI role to your /home/stack/templates/overcloud-baremetal-deploy.yaml file, and define any predictive node placements, resource classes, or other attributes that you want to assign to your nodes: Open the baremetal.yaml file and ensure that it contains the network configuration necessary for HCI. The following is an example configuration: Note Network configuration in the ComputeHCI role contains the storage_mgmt network. CephOSD nodes use this network to make redundant copies of data. The network configuration for the Compute role does not contain this network. See Configuring the Bare Metal Provisioning service for more information. Run the provisioning command: Monitor the provisioning progress in a separate terminal. Note The watch command renews every 2 seconds by default. The -n option sets the renewal timer to a different value. To stop the watch process, enter Ctrl-c . Verification: When provisioning is successful, the node state changes from available to active . Additional resources For more information about registering nodes, see Registering nodes for the overcloud in the Installing and managing Red Hat OpenStack Platform with director guide. For more information about inspecting node hardware, see Creating an inventory of the bare-metal node hardware in the Installing and managing Red Hat OpenStack Platform with director guide. For more information on network configuration in the ComputeHCI role and the storage_mgmt network, see Configuring the Bare Metal Provisioning service . 2.6. Defining the root disk for multi-disk Ceph clusters Ceph Storage nodes typically use multiple disks. Director must identify the root disk in multiple disk configurations. The overcloud image is written to the root disk during the provisioning process. Hardware properties are used to identify the root disk. For more information about properties you can use to identify the root disk, see Properties that identify the root disk . Procedure Verify the disk information from the hardware introspection of each node: Replace <node_uuid> with the UUID of the node. Replace <output_file_name> with the name of the file that contains the output of the node introspection. For example, the data for one node might show three disks: Set the root disk for the node by using a unique hardware property: (undercloud)USD openstack baremetal node set --property root_device='{<property_value>}' <node-uuid> Replace <property_value> with the unique hardware property value from the introspection data to use to set the root disk. Replace <node_uuid> with the UUID of the node. Note A unique hardware property is any property from the hardware introspection step that uniquely identifies the disk. 
For example, the following command uses the disk serial number to set the root disk: (undercloud)USD openstack baremetal node set --property root_device='{"serial": "61866da04f380d001ea4e13c12e36ad6"}' 1a4e30da-b6dc-499d-ba87-0bd8a3819bc0 Configure the BIOS of each node to first boot from the network and then the root disk. Director identifies the specific disk to use as the root disk. When you run the openstack overcloud node provision command, director provisions and writes the overcloud image to the root disk. 2.6.1. Properties that identify the root disk There are several properties that you can define to help director identify the root disk: model (String): Device identifier. vendor (String): Device vendor. serial (String): Disk serial number. hctl (String): Host:Channel:Target:Lun for SCSI. size (Integer): Size of the device in GB. wwn (String): Unique storage identifier. wwn_with_extension (String): Unique storage identifier with the vendor extension appended. wwn_vendor_extension (String): Unique vendor storage identifier. rotational (Boolean): True for a rotational device (HDD), otherwise false (SSD). name (String): The name of the device, for example: /dev/sdb1. Important Use the name property for devices with persistent names. Do not use the name property to set the root disk for devices that do not have persistent names because the value can change when the node boots.
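To tie the steps in this section together, the following sketch repeats the documented commands for a single node. The introspection output file name is a placeholder chosen for the sketch; the node UUID and disk serial number are the example values shown above.
# Save the introspection data for one node and review its disks
openstack baremetal introspection data save 1a4e30da-b6dc-499d-ba87-0bd8a3819bc0 \
    --file node00-introspection.json

# Pick a unique property (here the serial number) and set it as the root device
openstack baremetal node set \
    --property root_device='{"serial": "61866da04f380d001ea4e13c12e36ad6"}' \
    1a4e30da-b6dc-499d-ba87-0bd8a3819bc0

# Provision the nodes; the overcloud image is written to the chosen root disk
openstack overcloud node provision --stack overcloud --network_config \
    --output /home/stack/templates/overcloud-baremetal-deployed.yaml \
    /home/stack/templates/overcloud-baremetal-deploy.yaml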
[ "clean_nodes=true", "openstack undercloud install", "{ \"nodes\":[ { \"mac\":[ \"b1:b1:b1:b1:b1:b1\" ], \"cpu\":\"4\", \"memory\":\"6144\", \"disk\":\"40\", \"arch\":\"x86_64\", \"pm_type\":\"ipmi\", \"pm_user\":\"admin\", \"pm_password\":\"p@55w0rd!\", \"pm_addr\":\"192.0.2.205\" }, { \"mac\":[ \"b2:b2:b2:b2:b2:b2\" ], \"cpu\":\"4\", \"memory\":\"6144\", \"disk\":\"40\", \"arch\":\"x86_64\", \"pm_type\":\"ipmi\", \"pm_user\":\"admin\", \"pm_password\":\"p@55w0rd!\", \"pm_addr\":\"192.0.2.206\" }, { \"mac\":[ \"b3:b3:b3:b3:b3:b3\" ], \"cpu\":\"4\", \"memory\":\"6144\", \"disk\":\"40\", \"arch\":\"x86_64\", \"pm_type\":\"ipmi\", \"pm_user\":\"admin\", \"pm_password\":\"p@55w0rd!\", \"pm_addr\":\"192.0.2.207\" }, { \"mac\":[ \"c1:c1:c1:c1:c1:c1\" ], \"cpu\":\"4\", \"memory\":\"6144\", \"disk\":\"40\", \"arch\":\"x86_64\", \"pm_type\":\"ipmi\", \"pm_user\":\"admin\", \"pm_password\":\"p@55w0rd!\", \"pm_addr\":\"192.0.2.208\" }, { \"mac\":[ \"c2:c2:c2:c2:c2:c2\" ], \"cpu\":\"4\", \"memory\":\"6144\", \"disk\":\"40\", \"arch\":\"x86_64\", \"pm_type\":\"ipmi\", \"pm_user\":\"admin\", \"pm_password\":\"p@55w0rd!\", \"pm_addr\":\"192.0.2.209\" }, { \"mac\":[ \"c3:c3:c3:c3:c3:c3\" ], \"cpu\":\"4\", \"memory\":\"6144\", \"disk\":\"40\", \"arch\":\"x86_64\", \"pm_type\":\"ipmi\", \"pm_user\":\"admin\", \"pm_password\":\"p@55w0rd!\", \"pm_addr\":\"192.0.2.210\" }, { \"mac\":[ \"d1:d1:d1:d1:d1:d1\" ], \"cpu\":\"4\", \"memory\":\"6144\", \"disk\":\"40\", \"arch\":\"x86_64\", \"pm_type\":\"ipmi\", \"pm_user\":\"admin\", \"pm_password\":\"p@55w0rd!\", \"pm_addr\":\"192.0.2.211\" }, { \"mac\":[ \"d2:d2:d2:d2:d2:d2\" ], \"cpu\":\"4\", \"memory\":\"6144\", \"disk\":\"40\", \"arch\":\"x86_64\", \"pm_type\":\"ipmi\", \"pm_user\":\"admin\", \"pm_password\":\"p@55w0rd!\", \"pm_addr\":\"192.0.2.212\" }, { \"mac\":[ \"d3:d3:d3:d3:d3:d3\" ], \"cpu\":\"4\", \"memory\":\"6144\", \"disk\":\"40\", \"arch\":\"x86_64\", \"pm_type\":\"ipmi\", \"pm_user\":\"admin\", \"pm_password\":\"p@55w0rd!\", \"pm_addr\":\"192.0.2.213\" } ] }", "source ~/stackrc", "openstack overcloud node import <inventory_file>", "openstack overcloud node configure <node>", "- name: Ceph count: 3 instances: - hostname: overcloud-ceph-0 name: node00 image: href: file:///var/lib/ironic/images/overcloud-minimal.qcow2 - hostname: overcloud-ceph-1 name: node01 image: href: file:///var/lib/ironic/images/overcloud-hardened-uefi-full.qcow2 - hostname: overcloud-ceph-2 name: node02 image: href: file:///var/lib/ironic/images/overcloud-hardened-uefi-full.qcow2", "- name: ComputeHCI count: 3 defaults: image: href: file:///var/lib/ironic/images/overcloud-hardened-uefi-full.qcow2 instances: - hostname: overcloud-ceph-0 name: node00 - hostname: overcloud-ceph-1 name: node01 - hostname: overcloud-ceph-2 name: node02", "rhsm_enforce: False", "(undercloud)USD openstack overcloud node provision --stack overcloud --output /home/stack/templates/overcloud-baremetal-deployed.yaml /home/stack/templates/overcloud-baremetal-deploy.yaml", "[stack@director ~]USD source ~/stackrc", "(undercloud)USD openstack overcloud roles generate Controller ComputeHCI -o ~/roles_data.yaml", "(undercloud)USD openstack overcloud node introspect --all-manageable --provide", "(undercloud)USD openstack baremetal node set --resource-class baremetal.HCI <node>", "- name: Controller count: 3 - name: ComputeHCI count: 1 defaults: resource_class: baremetal.HCI", "- name: ComputeHCI count: 3 hostname_format: compute-hci-%index% defaults: profile: ComputeHCI network_config: template: 
/home/stack/templates/three-nics-vlans/compute-hci.j2 networks: - network: ctlplane vif: true - network: external subnet: external_subnet - network: internalapi subnet: internal_api_subnet01 - network: storage subnet: storage_subnet01 - network: storage_mgmt subnet: storage_mgmt_subnet01 - network: tenant subnet: tenant_subnet01", "(undercloud)USD openstack overcloud node provision --stack overcloud --network_config --output /home/stack/templates/overcloud-baremetal-deployed.yaml /home/stack/templates/overcloud-baremetal-deploy.yaml", "(undercloud)USD watch openstack baremetal node list", "(undercloud)USD openstack baremetal introspection data save <node_uuid> --file <output_file_name>", "[ { \"size\": 299439751168, \"rotational\": true, \"vendor\": \"DELL\", \"name\": \"/dev/sda\", \"wwn_vendor_extension\": \"0x1ea4dcc412a9632b\", \"wwn_with_extension\": \"0x61866da04f3807001ea4dcc412a9632b\", \"model\": \"PERC H330 Mini\", \"wwn\": \"0x61866da04f380700\", \"serial\": \"61866da04f3807001ea4dcc412a9632b\" } { \"size\": 299439751168, \"rotational\": true, \"vendor\": \"DELL\", \"name\": \"/dev/sdb\", \"wwn_vendor_extension\": \"0x1ea4e13c12e36ad6\", \"wwn_with_extension\": \"0x61866da04f380d001ea4e13c12e36ad6\", \"model\": \"PERC H330 Mini\", \"wwn\": \"0x61866da04f380d00\", \"serial\": \"61866da04f380d001ea4e13c12e36ad6\" } { \"size\": 299439751168, \"rotational\": true, \"vendor\": \"DELL\", \"name\": \"/dev/sdc\", \"wwn_vendor_extension\": \"0x1ea4e31e121cfb45\", \"wwn_with_extension\": \"0x61866da04f37fc001ea4e31e121cfb45\", \"model\": \"PERC H330 Mini\", \"wwn\": \"0x61866da04f37fc00\", \"serial\": \"61866da04f37fc001ea4e31e121cfb45\" } ]" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/deploying_a_hyperconverged_infrastructure/assembly_deploying-hci-hardware_osp-hci
Install ROSA Classic clusters
Install ROSA Classic clusters Red Hat OpenShift Service on AWS 4 Installing, accessing, and deleting Red Hat OpenShift Service on AWS (ROSA) clusters. Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/install_rosa_classic_clusters/index
Chapter 5. Getting Started with Virtual Machine Manager
Chapter 5. Getting Started with Virtual Machine Manager The Virtual Machine Manager, also known as virt-manager , is a graphical tool for creating and managing guest virtual machines. This chapter provides a description of the Virtual Machine Manager and how to run it. Note You can only run the Virtual Machine Manager on a system that has a graphical interface. For more detailed information about using the Virtual Machine Manager, see the other Red Hat Enterprise Linux virtualization guides . 5.1. Running Virtual Machine Manager To run the Virtual Machine Manager, select it in the list of applications or use the following command: The Virtual Machine Manager opens to the main window. Figure 5.1. The Virtual Machine Manager Note If running virt-manager fails, ensure that the virt-manager package is installed. For information on installing the virt-manager package, see Installing the Virtualization Packages in the Red Hat Enterprise Linux Virtualization Deployment and Administration Guide.
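The following lines are an illustrative sketch only: they install the virt-manager package if it is missing, start the tool locally, and optionally connect to a remote libvirt host. The remote connection URI and the hostname are assumptions for the sketch, not values taken from this guide.
# Install the package if it is missing, then launch the graphical manager
yum install virt-manager
virt-manager

# Optionally connect to a remote libvirt host over SSH (placeholder hostname)
virt-manager -c qemu+ssh://root@host.example.com/system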
[ "virt-manager" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_getting_started_guide/chap-virtualization_manager-introduction
Chapter 19. Common administrative networking tasks
Chapter 19. Common administrative networking tasks Sometimes you might need to perform administration tasks on the Red Hat OpenStack Platform Networking service (neutron) such as configuring the Layer 2 Population driver or specifying the name assigned to ports by the internal DNS. 19.1. Configuring the L2 population driver The L2 Population driver is used in Networking service (neutron) ML2/OVS environments to enable broadcast, multicast, and unicast traffic to scale out on large overlay networks. By default, Open vSwitch GRE and VXLAN replicate broadcasts to every agent, including those that do not host the destination network. This design requires the acceptance of significant network and processing overhead. The alternative design introduced by the L2 Population driver implements a partial mesh for ARP resolution and MAC learning traffic; it also creates tunnels for a particular network only between the nodes that host the network. This traffic is sent only to the necessary agent by encapsulating it as a targeted unicast. Prerequisites You must have RHOSP administrator privileges. The Networking service must be using the ML2/OVS mechanism driver. Procedure Log in to the undercloud host as the stack user. Source the undercloud credentials file: Create a custom YAML environment file. Example Your environment file must contain the keywords parameter_defaults . Under these keywords, add the following lines: Run the deployment command and include the core heat templates, environment files, and this new custom environment file. Important The order of the environment files is important because the parameters and resources defined in subsequent environment files take precedence. Example Verification Obtain the IDs for the OVS agents. Sample output Using an ID from one of the OVS agents, confirm that the L2 Population driver is set on the OVS agent. Example This example verifies the configuration of the L2 Population driver on the neutron-openvswitch-agent with ID 003a8750-a6f9-468b-9321-a6c03c77aec7 : Sample output Ensure that the ARP responder feature is enabled for the OVS agent. Example Sample output Additional resources OVN supported DHCP options Environment files in the Advanced Overcloud Customization guide Including environment files in overcloud creation in the Advanced Overcloud Customization guide 19.2. Tuning keepalived to avoid VRRP packet loss If the number of highly available (HA) routers on a single host is high, when an HA router fail over occurs, the Virtual Router Redundancy Protocol (VRRP) messages might overflow the IRQ queues. This overflow stops Open vSwitch (OVS) from responding and forwarding those VRRP messages. To avoid VRRP packet overload, you must increase the VRRP advertisement interval using the ha_vrrp_advert_int parameter in the ExtraConfig section for the Controller role. Procedure Log in to the undercloud as the stack user, and source the stackrc file to enable the director command line tools. Example Create a custom YAML environment file. Example Tip The Red Hat OpenStack Platform Orchestration service (heat) uses a set of plans called templates to install and configure your environment. You can customize aspects of the overcloud with a custom environment file , which is a special type of template that provides customization for your heat templates. In the YAML environment file, increase the VRRP advertisement interval using the ha_vrrp_advert_int argument with a value specific for your site. (The default is 2 seconds.) 
You can also set values for gratuitous ARP messages: ha_vrrp_garp_master_repeat The number of gratuitous ARP messages to send at one time after the transition to the master state. (The default is 5 messages.) ha_vrrp_garp_master_delay The delay for second set of gratuitous ARP messages after the lower priority advert is received in the master state. (The default is 5 seconds.) Example Run the openstack overcloud deploy command and include the core heat templates, environment files, and this new custom environment file. Important The order of the environment files is important because the parameters and resources defined in subsequent environment files take precedence. Example Additional resources 2.1.2 Data Forwarding Rules, Subsection 2 in RFC 4541 Environment files in the Advanced Overcloud Customization guide Including environment files in overcloud creation in the Advanced Overcloud Customization guide 19.3. Specifying the name that DNS assigns to ports You can specify the name assigned to ports by the internal DNS when you enable the Red Hat OpenStack Platform (RHOSP) Networking service (neutron) DNS domain for ports extension ( dns_domain_ports ). You enable the DNS domain for ports extension by declaring the RHOSP Orchestration (heat) NeutronPluginExtensions parameter in a YAML-formatted environment file. Using a corresponding parameter, NeutronDnsDomain , you specify your domain name, which overrides the default value, openstacklocal . After redeploying your overcloud, you can use the OpenStack Client port commands, port set or port create , with --dns-name to assign a port name. Important You must enable the DNS domain for ports extension ( dns_domain_ports ) for DNS to internally resolve names for ports in your RHOSP environment. Using the NeutronDnsDomain default value, openstacklocal , means that the Networking service does not internally resolve port names for DNS. Also, when the DNS domain for ports extension is enabled, the Compute service automatically populates the dns_name attribute with the hostname attribute of the instance during the boot of VM instances. At the end of the boot process, dnsmasq recognizes the allocated ports by their instance hostname. Procedure Log in to the undercloud as the stack user, and source the stackrc file to enable the director command line tools. Example Create a custom YAML environment file ( my-neutron-environment.yaml ). Note Values inside parentheses are sample values that are used in the example commands in this procedure. Substitute these sample values with values that are appropriate for your site. Example Tip The undercloud includes a set of Orchestration service templates that form the plan for your overcloud creation. You can customize aspects of the overcloud with environment files, which are YAML-formatted files that override parameters and resources in the core Orchestration service template collection. You can include as many environment files as necessary. In the environment file, add a parameter_defaults section. Under this section, add the DNS domain for ports extension, dns_domain_ports . Example Note If you set dns_domain_ports , ensure that the deployment does not also use dns_domain , the DNS Integration extension. These extensions are incompatible, and both extensions cannot be defined simultaneously. Also in the parameter_defaults section, add your domain name ( example.com ) using the NeutronDnsDomain parameter. 
Example Run the openstack overcloud deploy command and include the core Orchestration templates, environment files, and this new environment file. Important The order of the environment files is important because the parameters and resources defined in subsequent environment files take precedence. Example Verification Log in to the overcloud, and create a new port ( new_port ) on a network ( public ). Assign a DNS name ( my_port ) to the port. Example Display the details for your port ( new_port ). Example Output Under dns_assignment , the fully qualified domain name ( fqdn ) value for the port contains a concatenation of the DNS name ( my_port ) and the domain name ( example.com ) that you set earlier with NeutronDnsDomain . Create a new VM instance ( my_vm ) using the port ( new_port ) that you just created. Example Display the details for your port ( new_port ). Example Output Note that the Compute service changes the dns_name attribute from its original value ( my_port ) to the name of the instance with which the port is associated ( my_vm ). Additional resources Environment files in the Advanced Overcloud Customization guide Including environment files in overcloud creation in the Advanced Overcloud Customization guide port in the Command Line Interface Reference server create in the Command Line Interface Reference 19.4. Assigning DHCP attributes to ports You can use Red Hat OpenStack Platform (RHOSP) Networking service (neutron) extensions to add networking functions. You can use the extra DHCP option extension ( extra_dhcp_opt ) to configure ports of DHCP clients with DHCP attributes. For example, you can add a PXE boot option such as tftp-server , server-ip-address , or bootfile-name to a DHCP client port. The value of the extra_dhcp_opt attribute is an array of DHCP option objects, where each object contains an opt_name and an opt_value . IPv4 is the default version, but you can change this to IPv6 by including a third option, ip-version=6 . When a VM instance starts, the RHOSP Networking service supplies port information to the instance using the DHCP protocol. If you add DHCP information to a port already connected to a running instance, the instance only uses the new DHCP port information when the instance is restarted. Some of the more common DHCP port attributes are: bootfile-name , dns-server , domain-name , mtu , server-ip-address , and tftp-server . For the complete set of acceptable values for opt_name , refer to the DHCP specification. Prerequisites You must have RHOSP administrator privileges. Procedure Log in to the undercloud host as the stack user. Source the undercloud credentials file: Create a custom YAML environment file. Example Your environment file must contain the keywords parameter_defaults . Under these keywords, add the extra DHCP option extension, extra_dhcp_opt . Example Run the deployment command and include the core heat templates, environment files, and this new custom environment file. Important The order of the environment files is important because the parameters and resources defined in subsequent environment files take precedence. Example Verification Source your credentials file. Example Create a new port ( new_port ) on a network ( public ). Assign a valid attribute from the DHCP specification to the new port. Example Display the details for your port ( new_port ). 
Example Sample output Additional resources OVN supported DHCP options Dynamic Host Configuration Protocol (DHCP) and Bootstrap Protocol (BOOTP) Parameters Environment files in the Advanced Overcloud Customization guide Including environment files in overcloud creation in the Advanced Overcloud Customization guide port create in the Command Line Interface Reference port show in the Command Line Interface Reference 19.5. Loading kernel modules Some features in Red Hat OpenStack Platform (RHOSP) require certain kernel modules to be loaded. For example, the OVS firewall driver requires you to load the nf_conntrack_proto_gre kernel module to support GRE tunneling between two VM instances. By using a special Orchestration service (heat) parameter, ExtraKernelModules , you can ensure that heat stores configuration information about the required kernel modules needed for features like GRE tunneling. Later, during normal module management, these required kernel modules are loaded. Procedure On the undercloud host, logged in as the stack user, create a custom YAML environment file. Example Tip Heat uses a set of plans called templates to install and configure your environment. You can customize aspects of the overcloud with a custom environment file , which is a special type of template that provides customization for your heat templates. In the YAML environment file under parameter_defaults , set ExtraKernelModules to the name of the module that you want to load. Example Run the openstack overcloud deploy command and include the core heat templates, environment files, and this new custom environment file. Important The order of the environment files is important as the parameters and resources defined in subsequent environment files take precedence. Example Verification If heat has properly loaded the module, you should see output when you run the lsmod command on the Compute node: Example Additional resources Environment files in the Advanced Overcloud Customization guide Including environment files in overcloud creation in the Advanced Overcloud Customization guide 19.6. Configuring shared security groups When you want one or more Red Hat OpenStack Platform (RHOSP) projects to be able to share data, you can use the RHOSP Networking service (neutron) RBAC policy feature to share a security group. You create security groups and Networking service role-based access control (RBAC) policies using the OpenStack Client. You can apply a security group directly to an instance during instance creation, or to a port on the running instance. Note You cannot apply a role-based access control (RBAC)-shared security group directly to an instance during instance creation. To apply an RBAC-shared security group to an instance you must first create the port, apply the shared security group to that port, and then assign that port to the instance. See Adding a security group to a port . Prerequisites You have at least two RHOSP projects that you want to share. In one of the projects, the current project , you have created a security group that you want to share with another project, the target project . In this example, the ping_ssh security group is created: Example Procedure Log in to the overcloud for the current project that contains the security group. Obtain the name or ID of the target project. Obtain the name or ID of the security group that you want to share between RHOSP projects. Using the identifiers from the steps, create an RBAC policy using the openstack network rbac create command. 
In this example, the ID of the target project is 32016615de5d43bb88de99e7f2e26a1e . The ID of the security group is 5ba835b7-22b0-4be6-bdbe-e0722d1b5f24 : Example --target-project specifies the project that requires access to the security group. Tip You can share data between all projects by using the --target-all-projects argument instead of --target-project <target-project> . By default, only the admin user has this privilege. --action access_as_shared specifies what the project is allowed to do. --type indicates that the target object is a security group. 5ba835b7-22b0-4be6-bdbe-e0722d1b5f24 is the ID of the particular security group which is being granted access to. The target project is able to access the security group when running the OpenStack Client security group commands, in addition to being able to bind to its ports. No other users (other than administrators and the owner) are able to access the security group. Tip To remove access for the target project, delete the RBAC policy that allows it using the openstack network rbac delete command. Additional resources Creating a security group in the Creating and Managing Instances guide security group create in the Command Line Interface Reference network rbac create in the Command Line Interface Reference
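As an illustration of the note above about attaching a shared security group by way of a port rather than directly at instance creation, the target project could consume the ping_ssh security group with a command sequence similar to the following sketch. The network name ( private ) and the port and instance names ( shared_sg_port , web1 ) are placeholders for this example only:

openstack port create --network private shared_sg_port
openstack port set --security-group ping_ssh shared_sg_port
openstack server create --image rhel --flavor m1.small --port shared_sg_port web1

To revoke the sharing later, the owner can list the RBAC policies with openstack network rbac list and remove the relevant one by its ID with openstack network rbac delete <rbac-policy-id> .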
[ "source ~/stackrc", "vi /home/stack/templates/my-environment.yaml", "parameter_defaults: NeutronMechanismDrivers: ['openvswitch', 'l2population'] NeutronEnableL2Pop: 'True' NeutronEnableARPResponder: true", "openstack overcloud deploy --templates -e <your_environment_files> -e /home/stack/templates/my-environment.yaml", "openstack network agent list -c ID -c Binary", "+--------------------------------------+---------------------------+ | ID | Binary | +--------------------------------------+---------------------------+ | 003a8750-a6f9-468b-9321-a6c03c77aec7 | neutron-openvswitch-agent | | 02bbbb8c-4b6b-4ce7-8335-d1132df31437 | neutron-l3-agent | | 0950e233-60b2-48de-94f6-483fd0af16ea | neutron-openvswitch-agent | | 115c2b73-47f5-4262-bc66-8538d175029f | neutron-openvswitch-agent | | 2a9b2a15-e96d-468c-8dc9-18d7c2d3f4bb | neutron-metadata-agent | | 3e29d033-c80b-4253-aaa4-22520599d62e | neutron-dhcp-agent | | 3ede0b64-213d-4a0d-9ab3-04b5dfd16baa | neutron-dhcp-agent | | 462199be-0d0f-4bba-94da-603f1c9e0ec4 | neutron-sriov-nic-agent | | 54f7c535-78cc-464c-bdaa-6044608a08d7 | neutron-l3-agent | | 6657d8cf-566f-47f4-856c-75600bf04828 | neutron-metadata-agent | | 733c66f1-a032-4948-ba18-7d1188a58483 | neutron-l3-agent | | 7e0a0ce3-7ebb-4bb3-9b89-8cccf8cb716e | neutron-openvswitch-agent | | dfc36468-3a21-4a2d-84c3-2bc40f224235 | neutron-metadata-agent | | eb7d7c10-69a2-421e-bd9e-aec3edfe1b7c | neutron-openvswitch-agent | | ef5219b4-ee49-4635-ad04-048291209373 | neutron-sriov-nic-agent | | f36c7af0-e20c-400b-8a37-4ffc5d4da7bd | neutron-dhcp-agent | +--------------------------------------+---------------------------+", "openstack network agent show 003a8750-a6f9-468b-9321-a6c03c77aec7 -c configuration -f json | grep l2_population", "\"l2_population\": true,", "openstack network agent show 003a8750-a6f9-468b-9321-a6c03c77aec7 -c configuration -f json | grep arp_responder_enabled", "\"arp_responder_enabled\": true,", "source ~/stackrc", "vi /home/stack/templates/my-neutron-environment.yaml", "parameter_defaults: ControllerExtraConfig: neutron::agents::l3::ha_vrrp_advert_int: 7 neutron::config::l3_agent_config: DEFAULT/ha_vrrp_garp_master_repeat: value: 5 DEFAULT/ha_vrrp_garp_master_delay: value: 5", "openstack overcloud deploy --templates -e [your-environment-files] -e /usr/share/openstack-tripleo-heat-templates/environments/services/my-neutron-environment.yaml", "source ~/stackrc", "vi /home/stack/templates/my-neutron-environment.yaml", "parameter_defaults: NeutronPluginExtensions: \"qos,port_security,dns_domain_ports\"", "parameter_defaults: NeutronPluginExtensions: \"qos,port_security,dns_domain_ports\" NeutronDnsDomain: \"example.com\"", "openstack overcloud deploy --templates -e [your-environment-files] -e /usr/share/openstack-tripleo-heat-templates/environments/services/my-neutron-environment.yaml", "source ~/overcloudrc openstack port create --network public --dns-name my_port new_port", "openstack port show -c dns_assignment -c dns_domain -c dns_name -c name new_port", "+-------------------------+----------------------------------------------+ | Field | Value | +-------------------------+----------------------------------------------+ | dns_assignment | fqdn='my_port.example.com', | | | hostname='my_port', | | | ip_address='10.65.176.113' | | dns_domain | example.com | | dns_name | my_port | | name | new_port | +-------------------------+----------------------------------------------+", "openstack server create --image rhel --flavor m1.small --port new_port my_vm", "openstack port show -c 
dns_assignment -c dns_domain -c dns_name -c name new_port", "+-------------------------+----------------------------------------------+ | Field | Value | +-------------------------+----------------------------------------------+ | dns_assignment | fqdn='my_vm.example.com', | | | hostname='my_vm', | | | ip_address='10.65.176.113' | | dns_domain | example.com | | dns_name | my_vm | | name | new_port | +-------------------------+----------------------------------------------+", "source ~/stackrc", "vi /home/stack/templates/my-environment.yaml", "parameter_defaults: NeutronPluginExtensions: \"qos,port_security,extra_dhcp_opt\"", "openstack overcloud deploy --templates -e <your_environment_files> -e /home/stack/templates/my-environment.yaml", "source ~/overcloudrc", "openstack port create --extra-dhcp-option name=domain-name,value=test.domain --extra-dhcp-option name=ntp-server,value=192.0.2.123 --network public new_port", "openstack port show new_port -c extra_dhcp_opts", "+-----------------+--------------------------------------------------------------------+ | Field | Value | +-----------------+--------------------------------------------------------------------+ | extra_dhcp_opts | ip_version='4', opt_name='domain-name', opt_value='test.domain' | | | ip_version='4', opt_name='ntp-server', opt_value='192.0.2.123' | +-----------------+--------------------------------------------------------------------+", "vi /home/stack/templates/my-modules-environment.yaml", "ComputeParameters: ExtraKernelModules: nf_conntrack_proto_gre: {} ControllerParameters: ExtraKernelModules: nf_conntrack_proto_gre: {}", "openstack overcloud deploy --templates -e [your-environment-files] -e /usr/share/openstack-tripleo-heat-templates/environments/services/my-modules-environment.yaml", "sudo lsmod | grep nf_conntrack_proto_gre", "openstack security group create ping_ssh", "openstack project list", "openstack security group list", "openstack network rbac create --target-project 32016615de5d43bb88de99e7f2e26a1e --action access_as_shared --type security_group 5ba835b7-22b0-4be6-bdbe-e0722d1b5f24" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/networking_guide/common-network-tasks_rhosp-network
Chapter 2. Deploy OpenShift Data Foundation using Red Hat Ceph storage
Chapter 2. Deploy OpenShift Data Foundation using Red Hat Ceph storage Red Hat OpenShift Data Foundation can make services from an external Red Hat Ceph Storage cluster available for consumption through OpenShift Container Platform clusters. You need to install the OpenShift Data Foundation operator and then create an OpenShift Data Foundation cluster for the external Ceph storage system. 2.1. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.18 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 2.2. Creating an OpenShift Data Foundation Cluster for external Ceph storage system You need to create a new OpenShift Data Foundation cluster after you install OpenShift Data Foundation operator on OpenShift Container Platform deployed on VMware vSphere or user-provisioned bare metal infrastructures. Prerequisites A valid Red Hat OpenShift Data Foundation Advanced subscription. To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . Ensure the OpenShift Container Platform version is 4.18 or above before deploying OpenShift Data Foundation 4.18. OpenShift Data Foundation operator must be installed. For more information, see Installing OpenShift Data Foundation Operator using the Operator Hub . To check the supportability and interoperability of Red Hat Ceph Storage (RHCS) with Red Hat OpenShift Data Foundation in external mode, go to the lab Red Hat OpenShift Data Foundation Supportability and Interoperability Checker . 
Select Service Type as ODF as Self-Managed Service . Select appropriate Version from the drop down. On the Versions tab, click the Supported RHCS versions in the External Mode tab. Red Hat Ceph Storage must have Ceph Dashboard installed and configured. For more information, see Ceph Dashboard installation and access . It is recommended that the external Red Hat Ceph Storage cluster has the PG Autoscaler enabled. For more information, see The placement group autoscaler section in the Red Hat Ceph Storage documentation. Procedure Click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click OpenShift Data Foundation and then click Create StorageSystem . In the Backing storage page, select the following options: Select Full deployment for the Deployment type option. Select Connect an external storage platform from the available options. Select Red Hat Ceph Storage for Storage platform . Click . In the Connection details page, provide the necessary information: Click on the Download Script link to download the python script for extracting Ceph cluster details. For extracting the Red Hat Ceph Storage (RHCS) cluster details, contact the RHCS administrator to run the downloaded python script on a Red Hat Ceph Storage node with the admin key . Run the following command on the RHCS node to view the list of available arguments: Important Use python instead of python3 if the Red Hat Ceph Storage 4.x cluster is deployed on Red Hat Enterprise Linux 7.x (RHEL 7.x) cluster. You can also run the script from inside a MON container (containerized deployment) or from a MON node (RPM deployment). Note Use the yum install cephadm command and then the cephadm command to deploy your RHCS cluster using containers. You must pull the RHCS container images using the cephadm command, rather than using yum for installing the Ceph packages onto nodes. For more information, see RHCS product documentation . To retrieve the external cluster details from the RHCS cluster, choose one of the following two options, either a configuration file or command-line flags. Configuration file Use config-file flag. This stores the parameters used during deployment. In new deployments, you can save the parameters used during deployment in a configuration file. This file can then be used during upgrade to preserve the parameters as well as adding any additional parameters. Use config-file to set the path to the configuration file. Example configuration file saved in /config.ini : Set the path to the config.ini file using config-file : Command-line flags Retrieve the external cluster details from the RHCS cluster, and pass the parameters for your deployment. For example: RBD parameters rbd-data-pool-name A mandatory parameter that is used for providing block storage in OpenShift Data Foundation. rados-namespace Divides an RBD data pool into separate logical namespaces, used for creating RBD PVC in a radosNamespace . Flags required with rados-namespace are restricted-auth-permission and k8s-cluster-name . rbd-metadata-ec-pool-name (Optional) The name of the erasure coded RBD metadata pool. RGW parameters rgw-endpoint (Optional) This parameter is required only if the object storage is to be provisioned through Ceph Rados Gateway for OpenShift Data Foundation. Provide the endpoint in the following format: <ip_address>:<port> Note A fully-qualified domain name (FQDN) is also supported in the format <FQDN>:<PORT> . rgw-pool-prefix (Optional) The prefix of the RGW pools. 
If not specified, the default prefix is default . rgw-tls-cert-path (Optional) The file path of the RADOS Gateway endpoint TLS certificate. To provide the TLS certificate and RGW endpoint details to the helper script, ceph-external-cluster-details-exporter.py , run the following command: This creates a resource to create a Ceph Object Store CR such as Kubernetes secret containing the TLS certificate. All the intermediate certificates including private keys need to be stored in the certificate file. rgw-skip-tls (Optional) This parameter ignores the TLS certification validation when a self-signed certificate is provided (not recommended). Monitoring parameters monitoring-endpoint (Optional) This parameter accepts comma-separated list of IP addresses of active and standby mgrs reachable from the OpenShift Container Platform cluster. If not provided, the value is automatically populated. monitoring-endpoint-port (Optional) It is the port associated with the ceph-mgr Prometheus exporter specified by --monitoring-endpoint . If not provided, the value is automatically populated. Ceph parameters ceph-conf (Optional) The name of the Ceph configuration file. run-as-user (Optional) This parameter is used for providing name for the Ceph user which is created by the script. If this parameter is not specified, a default user name client.healthchecker is created. The permissions for the new user is set as: caps: [mgr] allow command config caps: [mon] allow r, allow command quorum_status, allow command version caps: [osd] allow rwx pool= RGW_POOL_PREFIX.rgw.meta , allow r pool= .rgw.root , allow rw pool= RGW_POOL_PREFIX.rgw.control , allow rx pool= RGW_POOL_PREFIX.rgw.log , allow x pool= RGW_POOL_PREFIX.rgw.buckets.index CephFS parameters cephfs-metadata-pool-name (Optional) The name of the CephFS metadata pool. cephfs-data-pool-name (Optional) The name of the CephFS data pool. cephfs-filesystem-name (Optional) The name of the CephFS filesystem. Output parameters dry-run (Optional) This parameter helps to print the executed commands without running them. output (Optional) The file where the output is required to be stored. Multicluster parameters k8s-cluster-name (Optional) Kubernetes cluster name. cluster-name (Optional) The Ceph cluster name. restricted-auth-permission (Optional) This parameter restricts cephCSIKeyrings auth permissions to specific pools and clusters. Mandatory flags that need to be set with this are rbd-data-pool-name and cluster-name . You can also pass the cephfs-filesystem-name flag if there is CephFS user restriction so that permission is restricted to a particular CephFS filesystem. Note This parameter must be applied only for the new deployments. To restrict csi-users per pool and per cluster, you need to create new csi-users and new secrets for those csi-users . Example with restricted auth permission: Example of JSON output generated using the python script: Save the JSON output to a file with .json extension Note For OpenShift Data Foundation to work seamlessly, ensure that the parameters (RGW endpoint, CephFS details, RBD pool, and so on) to be uploaded using the JSON file remain unchanged on the RHCS external cluster after the storage cluster creation. Run the command when there is a multi-tenant deployment in which the RHCS cluster is already connected to OpenShift Data Foundation deployment with a lower version. Click Browse to select and upload the JSON file. The content of the JSON file is populated and displayed in the text box. 
Click Next . The Next button is enabled only after you upload the .json file. In the Review and create page, review if all the details are correct: To modify any configuration settings, click Back to go back to the configuration page. Click Create StorageSystem . Verification steps To verify the final Status of the installed storage cluster: In the OpenShift Web Console, navigate to Installed Operators OpenShift Data Foundation Storage System ocs-external-storagecluster-storagesystem Resources . Verify that Status of StorageCluster is Ready and has a green tick. To verify that OpenShift Data Foundation, pods and StorageClass are successfully installed, see Verifying your external mode OpenShift Data Foundation installation for external Ceph storage system . 2.2.1. Applying encryption in-transit on Red Hat Ceph Storage cluster Procedure Apply Encryption in-transit settings. Check the settings. Restart all Ceph daemons. Wait for the restarting of all the daemons. 2.3. Verifying your OpenShift Data Foundation installation for external Ceph storage system Use this section to verify that OpenShift Data Foundation is deployed correctly. 2.3.1. Verifying the state of the pods Click Workloads Pods from the left pane of the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. For more information on the expected number of pods for each component and how it varies depending on the number of nodes, see Table 2.1, "Pods corresponding to OpenShift Data Foundation components" Verify that the following pods are in running state: Table 2.1. Pods corresponding to OpenShift Data Foundation components Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any worker node) ocs-metrics-exporter-* (1 pod on any worker node) odf-operator-controller-manager-* (1 pod on any worker node) odf-console-* (1 pod on any worker node) csi-addons-controller-manager-* (1 pod on any worker node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any worker node) Multicloud Object Gateway noobaa-operator-* (1 pod on any worker node) noobaa-core-* (1 pod on any worker node) noobaa-db-pg-* (1 pod on any worker node) noobaa-endpoint-* (1 pod on any worker node) CSI cephfs csi-cephfsplugin-* (1 pod on each worker node) csi-cephfsplugin-provisioner-* (2 pods distributed across worker nodes) Note If an MDS is not deployed in the external cluster, the csi-cephfsplugin pods will not be created. rbd csi-rbdplugin-* (1 pod on each worker node) csi-rbdplugin-provisioner-* (2 pods distributed across worker nodes) 2.3.2. Verifying that the OpenShift Data Foundation cluster is healthy In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Block and File tab, verify that the Storage Cluster has a green tick. In the Details card, verify that the cluster information is displayed. For more information on the health of OpenShift Data Foundation cluster using the Block and File dashboard, see Monitoring OpenShift Data Foundation . 2.3.3. Verifying that the Multicloud Object Gateway is healthy In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. 
In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the Multicloud Object Gateway (MCG) information is displayed. Note The RADOS Object Gateway is only listed if RADOS Object Gateway endpoint details are included while deploying OpenShift Data Foundation in external mode. For more information on the health of OpenShift Data Foundation cluster using the object dashboard, see Monitoring OpenShift Data Foundation . 2.3.4. Verifying that the storage classes are created and listed Click Storage Storage Classes from the left pane of the OpenShift Web Console. Verify that the following storage classes are created with the OpenShift Data Foundation cluster creation: ocs-external-storagecluster-ceph-rbd ocs-external-storagecluster-ceph-rgw ocs-external-storagecluster-cephfs openshift-storage.noobaa.io Note If an MDS is not deployed in the external cluster, the ocs-external-storagecluster-cephfs storage class will not be created. If RGW is not deployed in the external cluster, the ocs-external-storagecluster-ceph-rgw storage class will not be created. For more information regarding MDS and RGW, see the Red Hat Ceph Storage documentation . 2.3.5. Verifying that Ceph cluster is connected Run the following command to verify if the OpenShift Data Foundation cluster is connected to the external Red Hat Ceph Storage cluster. 2.3.6. Verifying that storage cluster is ready Run the following command to verify if the storage cluster is ready and the External option is set to true . 2.3.7. Verifying the creation of Ceph Object Store CRD Run the following command to verify if the Ceph Object Store CRD is created in the external Red Hat Ceph Storage cluster.
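If you prefer the command line to the web console for the pod and storage class checks in this section, a rough CLI equivalent is to list them directly; this is a convenience sketch rather than a documented verification step:

oc get pods -n openshift-storage
oc get storageclass | grep -E 'ocs-external-storagecluster|openshift-storage.noobaa.io'

All pods should be in the Running state, and the storage classes listed above should appear.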
[ "oc annotate namespace openshift-storage openshift.io/node-selector=", "python3 ceph-external-cluster-details-exporter.py --help", "[Configurations] format = bash cephfs-filesystem-name = <filesystem-name> rbd-data-pool-name = <pool_name>", "python3 ceph-external-cluster-details-exporter.py --config-file /config.ini", "python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name <rbd block pool name> [optional arguments]", "python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name ceph-rbd --monitoring-endpoint xxx.xxx.xxx.xxx --monitoring-endpoint-port xxxx --rgw-endpoint xxx.xxx.xxx.xxx:xxxx --run-as-user client.ocs", "python3 ceph-external-clustergw-endpoint r-details-exporter.py --rbd-data-pool-name <rbd block pool name> --rgw-endpoint <ip_address>:<port> --rgw-tls-cert-path <file path containing cert>", "python3 /etc/ceph/create-external-cluster-resources.py --cephfs-filesystem-name myfs --rbd-data-pool-name replicapool --cluster-name rookStorage --restricted-auth-permission true", "[{\"name\": \"rook-ceph-mon-endpoints\", \"kind\": \"ConfigMap\", \"data\": {\"data\": \"xxx.xxx.xxx.xxx:xxxx\", \"maxMonId\": \"0\", \"mapping\": \"{}\"}}, {\"name\": \"rook-ceph-mon\", \"kind\": \"Secret\", \"data\": {\"admin-secret\": \"admin-secret\", \"fsid\": \"<fs-id>\", \"mon-secret\": \"mon-secret\"}}, {\"name\": \"rook-ceph-operator-creds\", \"kind\": \"Secret\", \"data\": {\"userID\": \"<user-id>\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-rbd-node\", \"kind\": \"Secret\", \"data\": {\"userID\": \"csi-rbd-node\", \"userKey\": \"<user-key>\"}}, {\"name\": \"ceph-rbd\", \"kind\": \"StorageClass\", \"data\": {\"pool\": \"<pool>\"}}, {\"name\": \"monitoring-endpoint\", \"kind\": \"CephCluster\", \"data\": {\"MonitoringEndpoint\": \"xxx.xxx.xxx.xxx\", \"MonitoringPort\": \"xxxx\"}}, {\"name\": \"rook-ceph-dashboard-link\", \"kind\": \"Secret\", \"data\": {\"userID\": \"ceph-dashboard-link\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-rbd-provisioner\", \"kind\": \"Secret\", \"data\": {\"userID\": \"csi-rbd-provisioner\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-cephfs-provisioner\", \"kind\": \"Secret\", \"data\": {\"adminID\": \"csi-cephfs-provisioner\", \"adminKey\": \"<admin-key>\"}}, {\"name\": \"rook-csi-cephfs-node\", \"kind\": \"Secret\", \"data\": {\"adminID\": \"csi-cephfs-node\", \"adminKey\": \"<admin-key>\"}}, {\"name\": \"cephfs\", \"kind\": \"StorageClass\", \"data\": {\"fsName\": \"cephfs\", \"pool\": \"cephfs_data\"}}, {\"name\": \"ceph-rgw\", \"kind\": \"StorageClass\", \"data\": {\"endpoint\": \"xxx.xxx.xxx.xxx:xxxx\", \"poolPrefix\": \"default\"}}, {\"name\": \"rgw-admin-ops-user\", \"kind\": \"Secret\", \"data\": {\"accessKey\": \"<access-key>\", \"secretKey\": \"<secret-key>\"}}]", "python3 ceph-external-cluster-details-exporter.py --upgrade", "root@ceph-client ~]# ceph config set global ms_client_mode secure ceph config set global ms_cluster_mode secure ceph config set global ms_service_mode secure ceph config set global rbd_default_map_options ms_mode=secure", "ceph config dump | grep ms_ global basic ms_client_mode secure * global basic ms_cluster_mode secure * global basic ms_service_mode secure * global advanced rbd_default_map_options ms_mode=secure *", "ceph orch ps NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID alertmanager.osd-0 osd-0 *:9093,9094 running (7h) 5m ago 7h 24.6M - 0.24.0 3d2ad4f34549 6ef813aed5ef ceph-exporter.osd-0 osd-0 running (7h) 5m ago 7h 17.7M - 
18.2.0-192.el9cp 6e4e34f038b9 179301cc7840 ceph-exporter.osd-1 osd-1 running (7h) 5m ago 7h 17.8M - 18.2.0-192.el9cp 6e4e34f038b9 1084517c5d27 ceph-exporter.osd-2 osd-2 running (7h) 5m ago 7h 17.9M - 18.2.0-192.el9cp 6e4e34f038b9 c933e31dc7b7 ceph-exporter.osd-3 osd-3 running (7h) 5m ago 7h 17.7M - 18.2.0-192.el9cp 6e4e34f038b9 9981004a7169 crash.osd-0 osd-0 running (7h) 5m ago 7h 6895k - 18.2.0-192.el9cp 6e4e34f038b9 9276199810a6 crash.osd-1 osd-1 running (7h) 5m ago 7h 6895k - 18.2.0-192.el9cp 6e4e34f038b9 43aee09f1f00 crash.osd-2 osd-2 running (7h) 5m ago 7h 6903k - 18.2.0-192.el9cp 6e4e34f038b9 adba2172546d crash.osd-3 osd-3 running (7h) 5m ago 7h 6899k - 18.2.0-192.el9cp 6e4e34f038b9 3a788ea496f3 grafana.osd-0 osd-0 *:3000 running (7h) 5m ago 7h 65.5M - <unknown> f142b583a1b1 c299328455cc mds.fsvol001.osd-0.lpciqk osd-0 running (7h) 5m ago 7h 24.8M - 18.2.0-192.el9cp 6e4e34f038b9 8790381f177c mds.fsvol001.osd-2.wocnxz osd-2 running (7h) 5m ago 7h 32.1M - 18.2.0-192.el9cp 6e4e34f038b9 2c66e36e19fc mgr.osd-0.dtkyni osd-0 *:9283,8765,8443 running (7h) 5m ago 7h 535M - 18.2.0-192.el9cp 6e4e34f038b9 41f5bed2d18a mgr.osd-2.kqcxwu osd-2 *:8443,9283,8765 running (7h) 5m ago 7h 440M - 18.2.0-192.el9cp 6e4e34f038b9 d8413a809b1f mon.osd-1 osd-1 running (7h) 5m ago 7h 350M 2048M 18.2.0-192.el9cp 6e4e34f038b9 fb3b5c186e38 mon.osd-2 osd-2 running (7h) 5m ago 7h 363M 2048M 18.2.0-192.el9cp 6e4e34f038b9 f5314c164e89 mon.osd-3 osd-3 running (7h) 5m ago 7h 361M 2048M 18.2.0-192.el9cp 6e4e34f038b9 3522f972ed7d node-exporter.osd-0 osd-0 *:9100 running (7h) 5m ago 7h 25.1M - 1.4.0 508050f8c316 43845647bc06 node-exporter.osd-1 osd-1 *:9100 running (7h) 5m ago 7h 21.4M - 1.4.0 508050f8c316 e84c3e2206c9 node-exporter.osd-2 osd-2 *:9100 running (7h) 5m ago 7h 25.4M - 1.4.0 508050f8c316 071580052c80 node-exporter.osd-3 osd-3 *:9100 running (7h) 5m ago 7h 21.8M - 1.4.0 508050f8c316 317205f34647 osd.0 osd-2 running (7h) 5m ago 7h 525M 4096M 18.2.0-192.el9cp 6e4e34f038b9 5247dd9d7ac3 osd.1 osd-0 running (7h) 5m ago 7h 652M 4096M 18.2.0-192.el9cp 6e4e34f038b9 17c66fee9f13 osd.2 osd-3 running (7h) 5m ago 7h 801M 1435M 18.2.0-192.el9cp 6e4e34f038b9 39b272b56fbe osd.3 osd-1 running (7h) 5m ago 7h 538M 923M 18.2.0-192.el9cp 6e4e34f038b9 f595858a1ca3 osd.4 osd-0 running (7h) 5m ago 7h 532M 4096M 18.2.0-192.el9cp 6e4e34f038b9 c4f57cc9eda6 osd.5 osd-2 running (7h) 5m ago 7h 761M 4096M 18.2.0-192.el9cp 6e4e34f038b9 d80ba180c940 osd.6 osd-3 running (7h) 5m ago 7h 415M 1435M 18.2.0-192.el9cp 6e4e34f038b9 9ec319187e25 osd.7 osd-1 running (7h) 5m ago 7h 427M 923M 18.2.0-192.el9cp 6e4e34f038b9 816731470d87 prometheus.osd-0 osd-0 *:9095 running (7h) 5m ago 7h 84.0M - 2.39.1 716dd9df3cf3 29db12cb1a5a rgw.rgw.ssl.osd-1.smzpfj osd-1 *:80 running (7h) 5m ago 7h 110M - 18.2.0-192.el9cp 6e4e34f038b9 57faaff4e425", "oc get cephcluster -n openshift-storage NAME DATADIRHOSTPATH MONCOUNT AGE PHASE MESSAGE HEALTH EXTERNAL ocs-external-storagecluster-cephcluster 30m Connected Cluster connected successfully HEALTH_OK true", "oc get storagecluster -n openshift-storage NAME AGE PHASE EXTERNAL CREATED AT VERSION ocs-external-storagecluster 30m Ready true 2021-11-17T09:09:52Z 4.18.0", "oc get cephobjectstore -n openshift-storage\" NAME PHASE ENDPOINT SECUREENDPOINT AGE object-store1 Ready <http://IP/FQDN:port> 15m" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/deploying_openshift_data_foundation_in_external_mode/deploy-openshift-data-foundation-using-red-hat-ceph-storage
5.6. Configuring a Failover Domain
5.6. Configuring a Failover Domain A failover domain is a named subset of cluster nodes that are eligible to run a cluster service in the event of a node failure. A failover domain can have the following characteristics: Unrestricted - Allows you to specify that a subset of members are preferred, but that a cluster service assigned to this domain can run on any available member. Restricted - Allows you to restrict the members that can run a particular cluster service. If none of the members in a restricted failover domain are available, the cluster service cannot be started (either manually or by the cluster software). Unordered - When a cluster service is assigned to an unordered failover domain, the member on which the cluster service runs is chosen from the available failover domain members with no priority ordering. Ordered - Allows you to specify a preference order among the members of a failover domain. The member at the top of the list is the most preferred, followed by the second member in the list, and so on. Note Changing a failover domain configuration has no effect on currently running services. Note Failover domains are not required for operation. By default, failover domains are unrestricted and unordered. In a cluster with several members, using a restricted failover domain can minimize the work to set up the cluster to run a cluster service (such as httpd ), which requires you to set up the configuration identically on all members that run the cluster service. Instead of setting up the entire cluster to run the cluster service, you must set up only the members in the restricted failover domain that you associate with the cluster service. Note To configure a preferred member, you can create an unrestricted failover domain comprising only one cluster member. Doing that causes a cluster service to run on that cluster member primarily (the preferred member), but allows the cluster service to fail over to any of the other members. The following sections describe adding a failover domain, removing a failover domain, and removing members from a failover domain: Section 5.6.1, "Adding a Failover Domain" Section 5.6.2, "Removing a Failover Domain" Section 5.6.3, "Removing a Member from a Failover Domain" 5.6.1. Adding a Failover Domain To add a failover domain, follow these steps: At the left frame of the Cluster Configuration Tool , click Failover Domains . At the bottom of the right frame (labeled Properties ), click the Create a Failover Domain button. Clicking the Create a Failover Domain button causes the Add Failover Domain dialog box to be displayed. At the Add Failover Domain dialog box, specify a failover domain name at the Name for new Failover Domain text box and click OK . Clicking OK causes the Failover Domain Configuration dialog box to be displayed ( Figure 5.10, " Failover Domain Configuration : Configuring a Failover Domain" ). Note The name should be descriptive enough to distinguish its purpose relative to other names used in your cluster. Figure 5.10. Failover Domain Configuration : Configuring a Failover Domain Click the Available Cluster Nodes drop-down box and select the members for this failover domain. To restrict failover to members in this failover domain, click (check) the Restrict Failover To This Domains Members checkbox. (With Restrict Failover To This Domains Members checked, services assigned to this failover domain fail over only to nodes in this failover domain.) 
To prioritize the order in which the members in the failover domain assume control of a failed cluster service, follow these steps: Click (check) the Prioritized List checkbox ( Figure 5.11, " Failover Domain Configuration : Adjusting Priority" ). Clicking Prioritized List causes the Priority column to be displayed next to the Member Node column. Figure 5.11. Failover Domain Configuration : Adjusting Priority For each node that requires a priority adjustment, click the node listed in the Member Node/Priority columns and adjust priority by clicking one of the Adjust Priority arrows. Priority is indicated by the position in the Member Node column and the value in the Priority column. The node priorities are listed highest to lowest, with the highest priority node at the top of the Member Node column (having the lowest Priority number). Click Close to create the domain. At the Cluster Configuration Tool , perform one of the following actions depending on whether the configuration is for a new cluster or for one that is operational and running: New cluster - If this is a new cluster, choose File => Save to save the changes to the cluster configuration. Running cluster - If this cluster is operational and running, and you want to propagate the change immediately, click the Send to Cluster button. Clicking Send to Cluster automatically saves the configuration change. If you do not want to propagate the change immediately, choose File => Save to save the changes to the cluster configuration.
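Although this procedure uses the Cluster Configuration Tool , the resulting settings are written to the cluster configuration file, /etc/cluster/cluster.conf . As a rough illustration only (the exact attribute names can vary between releases, and the domain and node names here are placeholders), an ordered, restricted failover domain saved by the tool might appear in that file similar to the following:

cat /etc/cluster/cluster.conf
  <rm>
    <failoverdomains>
      <failoverdomain name="webdomain" ordered="1" restricted="1">
        <failoverdomainnode name="node1.example.com" priority="1"/>
        <failoverdomainnode name="node2.example.com" priority="2"/>
      </failoverdomain>
    </failoverdomains>
  </rm>

Inspecting the file this way can be a useful sanity check after clicking Send to Cluster or saving the configuration.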
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_administration/s1-config-failover-domain-CA
8.3. Kernel Same-page Merging (KSM)
8.3. Kernel Same-page Merging (KSM) Kernel same-page Merging (KSM), used by the KVM hypervisor, allows KVM guests to share identical memory pages. These shared pages are usually common libraries or other identical, high-use data. KSM allows for greater guest density of identical or similar guest operating systems by avoiding memory duplication. The concept of shared memory is common in modern operating systems. For example, when a program is first started, it shares all of its memory with the parent program. When either the child or parent program tries to modify this memory, the kernel allocates a new memory region, copies the original contents and allows the program to modify this new region. This is known as copy on write. KSM is a Linux feature which uses this concept in reverse. KSM enables the kernel to examine two or more already running programs and compare their memory. If any memory regions or pages are identical, KSM reduces multiple identical memory pages to a single page. This page is then marked copy on write. If the contents of the page is modified by a guest virtual machine, a new page is created for that guest. This is useful for virtualization with KVM. When a guest virtual machine is started, it only inherits the memory from the host qemu-kvm process. Once the guest is running, the contents of the guest operating system image can be shared when guests are running the same operating system or applications. KSM allows KVM to request that these identical guest memory regions be shared. KSM provides enhanced memory speed and utilization. With KSM, common process data is stored in cache or in main memory. This reduces cache misses for the KVM guests, which can improve performance for some applications and operating systems. Secondly, sharing memory reduces the overall memory usage of guests, which allows for higher densities and greater utilization of resources. Note In Red Hat Enterprise Linux 7, KSM is NUMA aware. This allows it to take NUMA locality into account while coalescing pages, thus preventing performance drops related to pages being moved to a remote node. Red Hat recommends avoiding cross-node memory merging when KSM is in use. If KSM is in use, change the /sys/kernel/mm/ksm/merge_across_nodes tunable to 0 to avoid merging pages across NUMA nodes. This can be done with the virsh node-memory-tune --shm-merge-across-nodes 0 command. Kernel memory accounting statistics can eventually contradict each other after large amounts of cross-node merging. As such, numad can become confused after the KSM daemon merges large amounts of memory. If your system has a large amount of free memory, you may achieve higher performance by turning off and disabling the KSM daemon. See Chapter 9, NUMA " for more information on NUMA. Important Ensure the swap size is sufficient for the committed RAM even without taking KSM into account. KSM reduces the RAM usage of identical or similar guests. Overcommitting guests with KSM without sufficient swap space may be possible, but is not recommended because guest virtual machine memory use can result in pages becoming unshared. Red Hat Enterprise Linux uses two separate methods for controlling KSM: The ksm service starts and stops the KSM kernel thread. The ksmtuned service controls and tunes the ksm service, dynamically managing same-page merging. ksmtuned starts the ksm service and stops the ksm service if memory sharing is not necessary. When new guests are created or destroyed, ksmtuned must be instructed with the retune parameter to run. 
Both of these services are controlled with the standard service management tools. Note KSM is off by default on Red Hat Enterprise Linux 6.7. 8.3.1. The KSM Service The ksm service is included in the qemu-kvm package. When the ksm service is not started, Kernel same-page merging (KSM) shares only 2000 pages. This default value provides limited memory-saving benefits. When the ksm service is started, KSM will share up to half of the host system's main memory. Start the ksm service to enable KSM to share more memory. The ksm service can be added to the default startup sequence. Make the ksm service persistent with the systemctl command. 8.3.2. The KSM Tuning Service The ksmtuned service fine-tunes the kernel same-page merging (KSM) configuration by looping and adjusting ksm . In addition, the ksmtuned service is notified by libvirt when a guest virtual machine is created or destroyed. The ksmtuned service has no options. The ksmtuned service can be tuned with the retune parameter, which instructs ksmtuned to run tuning functions manually. The /etc/ksmtuned.conf file is the configuration file for the ksmtuned service. The file output below is the default ksmtuned.conf file: Within the /etc/ksmtuned.conf file, npages sets how many pages ksm will scan before the ksmd daemon becomes inactive. This value will also be set in the /sys/kernel/mm/ksm/pages_to_scan file. The KSM_THRES_CONST value represents the amount of available memory used as a threshold to activate ksm . ksmd is activated if either of the following occurs: The amount of free memory drops below the threshold, set in KSM_THRES_CONST . The amount of committed memory plus the threshold, KSM_THRES_CONST , exceeds the total amount of memory. 8.3.3. KSM Variables and Monitoring Kernel same-page merging (KSM) stores monitoring data in the /sys/kernel/mm/ksm/ directory. Files in this directory are updated by the kernel and are an accurate record of KSM usage and statistics. The variables in the list below are also configurable variables in the /etc/ksmtuned.conf file, as noted above. Files in /sys/kernel/mm/ksm/ : full_scans Full scans run. merge_across_nodes Whether pages from different NUMA nodes can be merged. pages_shared Total pages shared. pages_sharing Pages currently shared. pages_to_scan Pages not scanned. pages_unshared Pages no longer shared. pages_volatile Number of volatile pages. run Whether the KSM process is running. sleep_millisecs Sleep milliseconds. These variables can be manually tuned using the virsh node-memory-tune command. For example, the following specifies the number of pages to scan before the shared memory service goes to sleep: KSM tuning activity is stored in the /var/log/ksmtuned log file if the DEBUG=1 line is added to the /etc/ksmtuned.conf file. The log file location can be changed with the LOGFILE parameter. Changing the log file location is not advised and may require special configuration of SELinux settings. 8.3.4. Deactivating KSM Kernel same-page merging (KSM) has a performance overhead which may be too large for certain environments or host systems. KSM may also introduce side channels that could be potentially used to leak information across guests. If this is a concern, KSM can be disabled on per-guest basis. KSM can be deactivated by stopping the ksmtuned and the ksm services. However, this action does not persist after restarting. To deactivate KSM, run the following in a terminal as root: Stopping the ksmtuned and the ksm deactivates KSM, but this action does not persist after restarting. 
Persistently deactivate KSM with the systemctl commands: When KSM is disabled, any memory pages that were shared prior to deactivating KSM are still shared. To delete all of the PageKSM in the system, use the following command: After this is performed, the khugepaged daemon can rebuild transparent hugepages on the KVM guest physical memory. Using # echo 0 >/sys/kernel/mm/ksm/run stops KSM, but does not unshare all the previously created KSM pages (this is the same as the # systemctl stop ksmtuned command).
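To see how much merging is actually taking place before deciding whether to leave KSM enabled, you can read the monitoring files listed above directly. The following one-liner is only a convenience sketch, not part of the ksm or ksmtuned services:

grep . /sys/kernel/mm/ksm/pages_shared /sys/kernel/mm/ksm/pages_sharing /sys/kernel/mm/ksm/run

A pages_sharing value much larger than pages_shared indicates effective deduplication; values near zero suggest that disabling KSM costs little. If you only want to exclude an individual guest rather than deactivate KSM host-wide, libvirt provides a <memoryBacking><nosharepages/></memoryBacking> element in the guest XML; check the libvirt documentation for your release before relying on it.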
[ "systemctl start ksm Starting ksm: [ OK ]", "systemctl enable ksm", "systemctl start ksmtuned Starting ksmtuned: [ OK ]", "Configuration file for ksmtuned. How long ksmtuned should sleep between tuning adjustments KSM_MONITOR_INTERVAL=60 Millisecond sleep between ksm scans for 16Gb server. Smaller servers sleep more, bigger sleep less. KSM_SLEEP_MSEC=10 KSM_NPAGES_BOOST - is added to the `npages` value, when `free memory` is less than `thres`. KSM_NPAGES_BOOST=300 KSM_NPAGES_DECAY - is the value given is subtracted to the `npages` value, when `free memory` is greater than `thres`. KSM_NPAGES_DECAY=-50 KSM_NPAGES_MIN - is the lower limit for the `npages` value. KSM_NPAGES_MIN=64 KSM_NPAGES_MAX - is the upper limit for the `npages` value. KSM_NPAGES_MAX=1250 KSM_THRES_COEF - is the RAM percentage to be calculated in parameter `thres`. KSM_THRES_COEF=20 KSM_THRES_CONST - If this is a low memory system, and the `thres` value is less than `KSM_THRES_CONST`, then reset `thres` value to `KSM_THRES_CONST` value. KSM_THRES_CONST=2048 uncomment the following to enable ksmtuned debug information LOGFILE=/var/log/ksmtuned DEBUG=1", "virsh node-memory-tune --shm-pages-to-scan number", "systemctl stop ksmtuned Stopping ksmtuned: [ OK ] systemctl stop ksm Stopping ksm: [ OK ]", "systemctl disable ksm systemctl disable ksmtuned", "echo 2 >/sys/kernel/mm/ksm/run" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_tuning_and_optimization_guide/chap-KSM
Chapter 3. Updating Red Hat build of OpenJDK container images
Chapter 3. Updating Red Hat build of OpenJDK container images To ensure that a Red Hat build of OpenJDK container with Java applications includes the latest security updates, rebuild the container. Procedure Pull the base Red Hat build of OpenJDK image. Deploy the Red Hat build of OpenJDK application. For more information, see Deploying Red Hat build of OpenJDK applications in containers . The Red Hat build of OpenJDK container with the Red Hat build of OpenJDK application is updated. Additional resources For more information, see Red Hat OpenJDK Container images .
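As an illustration of the rebuild step, assuming a podman-based workflow, a Containerfile for your application in the current directory, and using <openjdk_image> as a placeholder for the Red Hat build of OpenJDK base image name from the Red Hat Ecosystem Catalog, the procedure amounts to:

podman pull registry.access.redhat.com/<openjdk_image>
podman build --no-cache -t <your_application_image> .

The --no-cache option forces all build steps to re-run against the freshly pulled base image instead of reusing previously cached layers.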
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/packaging_red_hat_build_of_openjdk_8_applications_in_containers/updating-openjdk8-container-images
Chapter 5. Fencing: Configuring STONITH
Chapter 5. Fencing: Configuring STONITH STONITH is an acronym for "Shoot The Other Node In The Head" and it protects your data from being corrupted by rogue nodes or concurrent access. Just because a node is unresponsive does not mean it is not accessing your data. The only way to be 100% sure that your data is safe is to fence the node using STONITH so we can be certain that the node is truly offline before allowing the data to be accessed from another node. STONITH also has a role to play in the event that a clustered service cannot be stopped. In this case, the cluster uses STONITH to force the whole node offline, thereby making it safe to start the service elsewhere. For more complete general information on fencing and its importance in a Red Hat High Availability cluster, see Fencing in a Red Hat High Availability Cluster . 5.1. Available STONITH (Fencing) Agents Use the following command to view a list of all available STONITH agents. If you specify a filter, this command displays only the STONITH agents that match the filter.
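For example, to narrow the listing to IPMI-based agents and then inspect the parameters that one of them accepts, you might run something like the following; which agents are present depends on the fence-agents packages installed on your cluster nodes:

pcs stonith list fence_ipmilan
pcs stonith describe fence_ipmilan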
[ "pcs stonith list [ filter ]" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/ch-fencing-HAAR
function::printk
function::printk Name function::printk - Send a message to the kernel trace buffer Synopsis Arguments level an integer for the severity level (0=KERN_EMERG ... 7=KERN_DEBUG) msg The formatted message string Description Print a line of text to the kernel dmesg/console with the given severity. An implicit end-of-line is added. This function may not be safely called from all kernel probe contexts, so is restricted to guru mode only.
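Because the function is restricted to guru mode, any test of it must be run with stap -g . A minimal, illustrative one-liner (the message text and severity value are arbitrary) might look like this:

stap -g -e 'probe begin { printk(4, "hello from SystemTap") exit() }'

The message should then appear in the dmesg output at severity 4 (KERN_WARNING).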
[ "printk(level:long,msg:string)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-printk